
Algorithm Design - Science topic

Explore the latest questions and answers in Algorithm Design, and find Algorithm Design experts.
Questions related to Algorithm Design
  • asked a question related to Algorithm Design
Question
2 answers
I have an image (most likely a spectrogram); it may be square or rectangular, and I won't know until it is received. I need to downsample one axis, say the 'x' axis. So if it is a spectrogram, I will be downsampling the frequencies (the time, 'y', axis would remain the same). I was thinking of doing nearest-neighbour selection of the frequency components. Any idea how I can go about this? Any suggestions would be appreciated. Thanks...
Relevant answer
Answer
Mohammad Imam , thanks for the input, I'll look into this
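For reference, a minimal sketch of one-axis nearest-neighbour downsampling, assuming the spectrogram is held in a NumPy array with time along the rows and frequency along the columns; the array names and sizes below are placeholders:

import numpy as np

def downsample_axis_nn(spec, new_len, axis=1):
    # Nearest-neighbour downsampling of a single axis of a 2-D array.
    # spec    : 2-D array (e.g. time x frequency spectrogram)
    # new_len : desired number of samples along `axis`
    # axis    : axis to downsample (1 = columns, i.e. the frequency axis here)
    old_len = spec.shape[axis]
    # Pick the nearest original index for each new sample position.
    idx = np.round(np.linspace(0, old_len - 1, new_len)).astype(int)
    return np.take(spec, idx, axis=axis)

# Example: keep all 500 time rows, reduce 1024 frequency columns to 256.
spec = np.random.rand(500, 1024)        # placeholder spectrogram
small = downsample_axis_nn(spec, 256, axis=1)
print(small.shape)                      # (500, 256)

If SciPy is available, scipy.ndimage.zoom(spec, (1, 256/1024), order=0) should give the same nearest-neighbour result for that axis.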
  • asked a question related to Algorithm Design
Question
13 answers
I would like to ask a general question: to other physicists of any kind, what do YOU see as the fundamental flaws currently existing in mathematics-to-physics (or vice versa) calculations in a general sense? Is it differences in tensors, unknown values, inconsistent or unreliable outputs with known methods, no reliable well-known methods, etc.? Or do you see the problem as more one of scientific attitudes and viewpoints being limiting in their current state? And the bigger overall question: which of these options is limiting science to a higher degree? I'd love to hear others' comments on this.
Relevant answer
Answer
André Michaud A sensible man indeed! And perhaps we share this outlook because I tend to look at physics through an engineering lens. To me, if one cannot prove their theoretical words with actual equations, it doesn't hold much of a basis at all. I do believe that mathematics and physics are intimately intertwined and that physics is simply the study of emergent properties of physical mathematics. It's interesting and quite hilarious that you say you have barely seen a theoretician pick up a calculator in 25 years. Unfortunately that used to be me, and you are very right that a more stagnating thing does not exist. With perspective on that I can say my theoretical physics, although thought provoking, rarely had any use, either practical or academic, that anyone even wanted to see, cared about, or could get employed off of. I started getting a lot of success when I realized that the math is fundamental and that I was deluding myself into thinking it wasn't because I wasn't good at it.
To be quite honest, I found I thought I "wasn't good" at typical mathematics because it was just too abstract for me. "1+1=2." Okay, one what plus one what equals two WHAT? Distance? Number of friends? Weight? It was just way too theoretical for me, abstract numbers with no unit, basis or story attached. Basically unreal axioms that help us understand things. Humans kind of forgot that this abstraction is simply a tool to help understand things, not an actual representation of universal constants. It can indicate those things via representation, but that's it. Once I started dealing with physics and engineering in a less theoretical sense, I realized my problem with why I thought I couldn't do math was that pure mathematics (shoutout Doom) was just far too abstract. I think a lot of people who are convinced math and physics aren't always intimately intertwined have this problem, a kind of innate fear of their own competency with math that affects the logic of this assumption. I'm also relatively sure that, looking at what I have said here, a lot of people afraid of their mathematical ability would find they are actually very good at it when the abstraction is removed. But maybe that is just me.
Jixin Chen I also hate to keep referencing back to my paper (if anyone has any info on this let me know), but there are many examples in the mathematics and quantum physics sections that show how mathematical equations are proven to be able to represent absurdly complex quantum physics principles and constants of nature in a neat "package".
  • asked a question related to Algorithm Design
Question
2 answers
Dear Scholars,
I am looking for a top journal paper on the computational complexity of a proposed algorithm. What I want to read is a paper whose evaluation of its proposed algorithm centres on finding that algorithm's complexity. Your recommendations are welcome. Thank you.
Relevant answer
Answer
Top Paper on Computational Complexity of the Proposed Algorithm?
1. "Computational Complexity Analysis of a Proposed Algorithm" by Murat Kantarcioglu, Mohamed Shehab, and Sushil Jajodia.
2. "Computational Complexity Analysis of a Proposed Algorithm Using Big-O Notation" by Kim T. Nguyen and Neeraj Kumar.
3. "Computational Complexity of a Proposed Algorithm: A Case Study" by David S. Johnson and Shlomo Zilberstein.
4. "Analysis of Computational Complexity of a Proposed Algorithm" by Ling Liu and Shouhuai Xu.
5. "Computational Complexity of the Proposed Algorithm: A Performance Evaluation" by Tsung-Chih Lin, Yi-Chun Huang, and Yen-Hsiang Huang.
  • asked a question related to Algorithm Design
Question
3 answers
Hello, all:
I am looking for some Open-sourced Downscaling Algorithms or Methods applied to the High-resolution Remote Sensing Data (such as Land Cover/ Vegetation Type and so on).
Could somebody help me out? Appreciate that!
Relevant answer
Answer
Dear Chenyuan,
Here is a dissertation about it,
and on this webpage you can find most of the algorithms you may need.
Cheers,
Ivan
  • asked a question related to Algorithm Design
Question
9 answers
Hello everyone,
Could you recommend courses, papers, books or websites about algorithms that support missing values?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
  • asked a question related to Algorithm Design
Question
4 answers
Hi everyone.
I am looking for optimization algorithms for the design of steel and concrete buildings.
Please help me.
Thanks.
Relevant answer
Answer
@vadivel SM
Hi.
I'm looking for two or more algorithms to optimize and reduce the material used in concrete and steel structures in a code-compliant (legal) way.
  • asked a question related to Algorithm Design
Question
3 answers
Dear friends,
Would you please tell me where I can find a dynamic list (updated constantly) of new meta-heuristic algorithms?
Thanks in advance for your guidance,
Reza
Relevant answer
Answer
For a humorous treatment, see the "bestiary":
  • asked a question related to Algorithm Design
Question
1 answer
Fragmentation trees are a critical step towards elucidation of compounds from mass spectra by enabling high confidence de novo identification of molecular formulas of unknown compounds (doi:10.1186/s13321-016-0116-8). Unfortunately, those algorithms suffer from long computation times making analysis of large datasets intractable. Recently, however, Fertin et al. (doi:10.1016/j.tcs.2020.11.021) highlighted additional properties of fragmentation graphs which could reduce computational times. Since their work is purely theoretical and lacks an implementation, I'm looking to partner up with someone to investigate and implement faster fragmentation tree algorithms. Could end up being a nice paper. Anyone interested?
  • asked a question related to Algorithm Design
Question
4 answers
Dear Researchers,
Do researchers/universities value students/researchers having published sequences to the OEIS?
Relevant answer
Answer
Dear Marco Ripà ,
I have done both. I cited my work in the sequences and the sequences in my work.
  • asked a question related to Algorithm Design
Question
28 answers
Is it possible to use Artificial Intelligence (AI) in Biological and Medical Sciences to search databases for potential candidate drugs/genes to solve global problems without first performing animal studies?
Relevant answer
Answer
We hypothesize that future generations of Artificial Intelligence (AI) technologies specifically adapted for biological sciences will help enable the reintegration of biology.
  • asked a question related to Algorithm Design
Question
2 answers
Good work by Juan, Electromagnetic aspects of ESPAR and digitally controllable scatterers with a look at low-complexity algorithm design
Relevant answer
Answer
Dear Ohira-san,
Hopefully after COVID-19 I can visit Ohira-san. The ESPAR antenna was really a new direction created by Ohira-san. I was also very lucky to have the opportunity to investigate the ESPAR algorithms. Thank you for the latest papers. I will surely study the latest papers of Ohira-san.
  • asked a question related to Algorithm Design
Question
6 answers
Hello everyone,
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Answer
Mathematics helps AI scientists solve challenging, deeply abstract problems using traditional methods and techniques that have been known for hundreds of years. Math is needed for AI because computers see the world differently from humans. Where humans see an image, a computer sees a 2D or 3D matrix. With the help of mathematics we can feed these structures into a computer, and linear algebra underpins how such data sets are processed.
Here you can find good sources for this:
  • asked a question related to Algorithm Design
Question
5 answers
I have in mind that logic is mainly about thinking, abstract thinking, particularly reasoning. Reasoning is a process structured in steps, where one conclusion is usually based on a previous one, and at the same time it can be the basis, the foundation, of further conclusions. Despite the mostly intuitive character of the algorithm as a concept (even without taking Turing and Markov theories/machines into account), it has a step-by-step structure, and the steps are connected, one would even say logically connected (when the algorithms are correct). The difference is, of course, the formal character of the logical proof.
Relevant answer
Answer
Teachable neural networks are currently the main direction in the development of artificial intelligence (AI). These systems belong to the category of inductive anthropomorphic systems, which is associated with their well-known advantages, disadvantages and areas of application. It is generally accepted that anthropomorphism is the exclusive factor in the advantage of AI; however, there are actual areas of AI application in which the use of deductive systems is more effective. These use less data and fewer computational resources, but are still capable of learning. Examples of such systems: knowledge generators, expert systems, active marketing systems, and many others.
The basis of deductive imitation AI systems are extremely general models and algorithms that imitate the objects under consideration. The limit of generality is determined both by the amount of knowledge about the object and by the computational capabilities. General models (metamodels) are usually formed as systems of equations that link together the variables describing the object under consideration, as well as parameters that determine the form of the partial equations obtained when transforming the equations of the general model. The solution of such systems of equations is carried out by methods of global optimization: stochastic (Monte Carlo methods) or heuristic (evolutionary algorithms and others). Imitation AI presupposes the solution of two basic problems: the inverse problem (implementing the system's training options), i.e. the formation of partial equations (models) from the values of certain variable characteristics of objects; and the direct problem, i.e. the determination of the values of the variables for particular configurations of the models. The use of simulation metamodels in combination with the randomization of variables and parameters provides high flexibility and adaptability of intelligent systems.
  • asked a question related to Algorithm Design
Question
33 answers
What kind of software or what kind of method could be used to manage a huge number of papers, so that you can quickly find any paper you have read?
Relevant answer
Answer
I suggest Mendeley and EndNote, but Mendeley is the more user-friendly software of the two.
  • asked a question related to Algorithm Design
Question
12 answers
Recently, I have seen that in many papers reviewers are asking authors to provide the computational complexity of the proposed algorithms. I was wondering what the formal way to do that would be, especially for short papers where pages are limited. Please share your expertise regarding the computational complexity of algorithms in short papers.
Thanks in advance.
Relevant answer
Answer
You have to state the time and space complexity of your algorithm, typically in Big-O notation, with a brief justification of where the dominant cost comes from.
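As a hedged illustration of how such a statement is usually justified (the function below is only a toy example, not taken from any particular paper), you count how the dominant loops scale with the input sizes and report the result:

def pairwise_min_distance(points_a, points_b):
    # Toy example: compare every point in A with every point in B.
    # Time complexity:  O(n * m) -- the nested loops run n*m iterations.
    # Space complexity: O(1)     -- only a constant number of scalars is kept.
    best = float("inf")
    for ax, ay in points_a:          # n iterations
        for bx, by in points_b:      # m iterations per outer iteration
            d = (ax - bx) ** 2 + (ay - by) ** 2
            best = min(best, d)
    return best ** 0.5

In a short paper, one or two sentences of this form per algorithm, stating the input-size parameters and the resulting bounds, are usually sufficient.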
  • asked a question related to Algorithm Design
Question
4 answers
If I create a website for customers to sell services, what ranking algorithms can I use to rank the gigs on the first page? In other words, just as Google uses HITS and PageRank to rank web pages, what ranking algorithms can I employ for a services-based website when I create mine?
Any assistance, or references to scientific papers that could help?
Relevant answer
Answer
Dear Mostafa Elsersy,
You can look at the following:
What are algorithms used by websites?
An algorithm refers to the formula or process used to reach a certain outcome. In terms of online marketing, an algorithm is used by search engines to rank websites on search engine results pages (SERPs).
What is SEO algorithm?
As mentioned previously, the Google algorithm uses keywords as part of the ranking factors to determine search results. The best way to rank for your specific keywords is by doing SEO. SEO is essentially a way to tell search engines that a website or web page is about a particular topic.
How do you rank a website?
Follow these suggestions to improve your search engine optimization (SEO) and watch your website rise the ranks to the top of search-engine results.
  1. Publish Relevant, Authoritative Content. ...
  2. Update Your Content Regularly. ...
  3. Metadata. ...
  4. Have a link-worthy site. ...
  5. Use alt tags.
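There is no single standard algorithm for ranking gigs on a marketplace; one common, simple starting point is a weighted relevance-plus-quality score per gig. The sketch below is purely illustrative: every feature name and weight is a hypothetical choice, not an established formula.

from dataclasses import dataclass

@dataclass
class Gig:
    title: str
    rating: float          # average buyer rating, 0-5 (hypothetical feature)
    orders: int            # completed orders (hypothetical feature)
    text_relevance: float  # query/keyword match score in [0, 1] (hypothetical feature)

def score(gig: Gig, w_rel=0.5, w_rating=0.3, w_popularity=0.2):
    # Normalise each signal to [0, 1] before weighting.
    rating_norm = gig.rating / 5.0
    popularity_norm = min(gig.orders, 100) / 100.0   # cap so one signal cannot dominate
    return (w_rel * gig.text_relevance
            + w_rating * rating_norm
            + w_popularity * popularity_norm)

gigs = [Gig("logo design", 4.8, 250, 0.9), Gig("logo animation", 4.2, 30, 0.7)]
first_page = sorted(gigs, key=score, reverse=True)

Once the site has click and purchase logs, learning-to-rank methods (pointwise or pairwise models trained on that feedback) are the usual next step beyond a hand-weighted score.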
  • asked a question related to Algorithm Design
Question
4 answers
I need data sets for the Container Stacking Problem, for experimentation with different algorithms.
Relevant answer
Answer
I have the same problem
  • asked a question related to Algorithm Design
Question
4 answers
What if I wanted to match 2 individuals based on their Likert scores in a survey?
Example: Imagine a 3 question dating app where each respondent chooses one from the following choices in response to 3 different statements about themselves:
Strongly Disagree - Disagree - Neutral - Agree - Strongly Agree
1) I like long walks on the beach.
2) I always know where I want to eat.
3) I will be 100% faithful.
Assuming both subjects answer truthfully and that the 3 questions have equal weights, what is their % match for each question and overall? How would I calculate it for the following answers?
Example Answers:
Lucy's answers:
1) Strongly Agree
2) Strongly Disagree
3) Agree
Ricky's answers:
1) Agree
2) Strongly Agree
3) Strongly Disagree
What if I want to change the weight of each question?
Thanks!
Terry
Relevant answer
Answer
Daniel Wright and Remal Al-Gounmeein thanks for the links, I will take a look. We are matching respondents based on their likert scale (5) responses to 16 partisan political positions.
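One simple, hedged way to turn this into a number, assuming the usual 1-5 coding of the scale, is to define per-question similarity as 1 - |a - b| / 4 and take a (weighted) average; this is an illustrative convention, not a standard formula:

CODES = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly Agree": 5}

def match(answers_a, answers_b, weights=None):
    a = [CODES[x] for x in answers_a]
    b = [CODES[x] for x in answers_b]
    if weights is None:
        weights = [1.0] * len(a)                            # equal weights by default
    per_q = [1 - abs(x - y) / 4 for x, y in zip(a, b)]      # 1 = identical, 0 = opposite ends
    overall = sum(w * s for w, s in zip(weights, per_q)) / sum(weights)
    return per_q, overall

lucy  = ["Strongly Agree", "Strongly Disagree", "Agree"]
ricky = ["Agree", "Strongly Agree", "Strongly Disagree"]
per_q, overall = match(lucy, ricky)
print(per_q)    # [0.75, 0.0, 0.25]  -> 75 %, 0 %, 25 % per question
print(overall)  # 0.333...           -> about 33 % overall with equal weights

Changing the weight of a question is then just a matter of passing a different weights list, e.g. weights=[2, 1, 1].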
  • asked a question related to Algorithm Design
Question
4 answers
Why does Particle Swarm Optimization work better for this classification problem?
Can anyone give me strong reasons for it?
Thanks in advance.
Relevant answer
Answer
Arash Mazidi PSO is also used in various classification problems. I particularly use it for phishing website datasets.
  • asked a question related to Algorithm Design
Question
9 answers
Let's consider a sales record (invoice) like this:
Gender | Age | Street | Item 1 | Count 1 | Item 2 | Count 2 | ... | Item N | Count N | Total Price (Label)
Male | 22 | S1 | Milk | 2 | Bread | 5 | ... | - | - | 10 $
Female | 10 | S2 | Coffee | 1 | - | - | ... | - | - | 1 $
....
We want to predict the total price of an invoice based on the buyer's demographic information (such as gender, age, job) and also the items bought and their counts. Note that we assume we do not know each item's price, and that prices change over time (so we will also have a date in our dataset).
Now the main question is how we can use this dataset, which contains transactional data (items) whose combination order is not important. For example, if somebody buys item 1 and item 2, it is equivalent to someone else buying item 2 and item 1. So the values of the item columns should not depend on the order in which the items are listed.
This dataset contains both multivariate and transactional data. My question is: how can we predict the label more accurately?
Relevant answer
Answer
Hi Dr Behzad Soleimani Neysiani. I agree with Dr Qamar Ul Islam.
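One way to make the item columns order-invariant, as the question asks, is to replace the positional Item1/Count1 ... ItemN/CountN columns with a bag-of-items count vector (one column per distinct item), then concatenate it with the demographic features and the date. A rough sketch with hypothetical column names and toy rows:

import pandas as pd

# Hypothetical raw rows: one dict per invoice, items stored as (name, count) pairs.
rows = [
    {"gender": "Male", "age": 22, "street": "S1", "date": "2021-03-01",
     "items": [("Milk", 2), ("Bread", 5)], "total_price": 10.0},
    {"gender": "Female", "age": 10, "street": "S2", "date": "2021-03-02",
     "items": [("Coffee", 1)], "total_price": 1.0},
]

def bag_of_items(items):
    # {"Milk": 2, "Bread": 5} -- the same vector whatever order the items were bought in.
    return {name: count for name, count in items}

X_items = pd.DataFrame([bag_of_items(r["items"]) for r in rows]).fillna(0)
X_demo = pd.DataFrame([{k: r[k] for k in ("gender", "age", "street", "date")} for r in rows])
X_demo["date"] = pd.to_datetime(X_demo["date"]).map(pd.Timestamp.toordinal)   # date -> number
X = pd.concat([pd.get_dummies(X_demo, columns=["gender", "street"]), X_items], axis=1)
y = [r["total_price"] for r in rows]
# X and y can now be fed to any regressor (linear model, gradient boosting, ...).

With this encoding, a plain linear regression's coefficients on the item columns act as learned per-item prices, and date-based features let the model track how those prices drift over time.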
  • asked a question related to Algorithm Design
Question
2 answers
Hello all, is there any MATLAB code for Adaptive Data Rate for LoRaWAN in terms of secure communication?
Relevant answer
Answer
Adnan Majeed Thank you for your reply, but the problem still remains. I need an algorithm or MATLAB code related to ADR for LoRaWAN. If you have such details then please share that here. Thank you.
  • asked a question related to Algorithm Design
Question
6 answers
I was exploring federated learning algorithms and reading this paper (https://arxiv.org/pdf/1602.05629.pdf). In this paper, they average the weights that are received from clients, as shown in the attached file. In the marked part, they consider the total number of client samples and the individual client sample counts. As far as I have learned, federated learning was introduced to keep data on the client side to maintain privacy. Then how can the server know this information? I am confused about this concept.
Any clarification?
Thanks in advance.
Relevant answer
Answer
Thanks for your input. I have their code; they have followed the same approach. I have attached their code below.
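For reference, the aggregation step in that paper (FedAvg, McMahan et al. 2017) only needs each client's sample count n_k as metadata sent alongside its updated weights; the raw training data itself never leaves the client. A minimal sketch of the server-side weighted average, with placeholder arrays:

import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    # w_global = sum_k (n_k / n) * w_k, applied layer by layer.
    # client_weights : list of per-client lists of layer arrays
    # client_sizes   : list of n_k, the number of local training samples per client
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        layer_sum = sum((n_k / total) * w[layer]
                        for w, n_k in zip(client_weights, client_sizes))
        aggregated.append(layer_sum)
    return aggregated

# Two fake clients with one layer each, holding 100 and 300 local samples.
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
sizes = [100, 300]
print(fedavg_aggregate(clients, sizes)[0])   # 0.25 everywhere: (100/400)*1 + (300/400)*0

So what the server learns is only the relative dataset sizes; if even that is considered sensitive, secure aggregation or uniform (unweighted) averaging are the usual workarounds.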
  • asked a question related to Algorithm Design
Question
4 answers
How can I calculate the number of computations and parameters of our customized deep learning algorithm designed with MATLAB?
Relevant answer
Answer
  • asked a question related to Algorithm Design
Question
7 answers
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computers, biomedical, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
Relevant answer
Answer
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as for some outlier that no one has thought of.
  • asked a question related to Algorithm Design
Question
4 answers
I have written a review paper. The work does not contain any experimental work, but it contains a very elaborate review of antenna design techniques - starting from the analytical models of the 1980s, through the simulation-based designs of the 2000s, and finally computer-aided algorithmic design, the recent trend. The paper has been rejected by several journals. The reviewers suggest that I should include some experimental results implementing some of the reviewed works and compare them. This suggestion is not feasible because I am already up against the page limit of most journals. Further, exact implementation of most of the techniques reviewed is not possible because the papers generally don't provide a complete picture of how the work was done. There are always some missing pieces in papers.
Can anyone suggest to me any journal or upcoming book where I can publish it?
Relevant answer
  • asked a question related to Algorithm Design
Question
5 answers
If I have 2 nodes, each with given coordinates (x, y), can I calculate the distance between the nodes using an algorithm, for example Dijkstra's or A*?
Relevant answer
Answer
Please read the paper "Energy and Wiener index of Total graph over Ring Zn". In this paper I calculated the distance between two nodes.
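As a small clarifying sketch: if all you have is the two coordinate pairs, the straight-line (Euclidean) distance needs no search algorithm at all; Dijkstra or A* only make sense when the nodes are connected through a weighted graph and you want the shortest path length along its edges. The graph below is a made-up placeholder:

import math
import heapq

def euclidean(a, b):
    # Straight-line distance between two coordinate pairs -- no graph search needed.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dijkstra(graph, source, target):
    # Shortest path *through the graph*; graph = {node: [(neighbour, edge_length), ...]}
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf

print(euclidean((0, 0), (3, 4)))                        # 5.0 (straight line)
print(dijkstra({"A": [("B", 7)], "B": []}, "A", "B"))   # 7.0 (length along the edges)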
  • asked a question related to Algorithm Design
Question
7 answers
Hello!
Namely, I will place a vehicle at node A and will then use Dijkstra's algorithm to find the shortest route. For example, if A is going to B, I would like to make a timer that shows me how long it takes to go from A to B. How can I implement a timer in Java?
Relevant answer
Answer
I mean, in which class? Wang Ting Dong
  • asked a question related to Algorithm Design
Question
2 answers
When creating & optimizing mathematical models with multivariate sensor data (i.e. 'X' matrices) to predict properties of interest (i.e. dependent variable or 'Y'), many strategies are recursively employed to reach "suitably relevant" model performance which include ::
>> preprocessing (e.g. scaling, derivatives...)
>> variable selection (e.g. penalties, optimization, distance metrics) with respect to RMSE or objective criteria
>> calibrant sampling (e.g. confidence intervals, clustering, latent space projection, optimization..)
Typically & contextually, for calibrant sampling, a top-down approach is utilized, i.e., from a set of 'N' calibrants, subsets of calibrants may be added or removed depending on the "requirement" or model performance. The assumption here is that a large number of datapoints or calibrants are available to choose from (collected a priori).
Philosophically & technically, how does the bottom-up pathfinding approach for calibrant sampling or "searching for ideal calibrants" in a design space, manifest itself? This is particularly relevant in chemical & biological domains, where experimental sampling is constrained.
E.g., Given smaller set of calibrants, how does one robustly approach the addition of new calibrants in silico to the calibrant-space to make more "suitable" models? (simulated datapoints can then be collected experimentally for addition to calibrant-space post modelling for next iteration of modelling).
:: Flow example ::
N calibrants -> build & compare models -> model iteration 1 -> addition of new calibrants (N+1) -> build & compare models -> model iteration 2 -> so on.... ->acceptable performance ~ acceptable experimental datapoints collectable -> acceptable model performance
  • asked a question related to Algorithm Design
Question
10 answers
We have an application which converts a netlist into a schematic drawing. It works fine, except the result is extremely ugly.
We would like to know if anyone can write a place-and-route algorithm that produces an aesthetically more pleasing picture, with a view to technical cooperation on the project.
(Our software is written in 'C'.)
Relevant answer
Answer
If I understand correctly, this will be part of a commercial product, so I planned to exchange my C program for a four week holiday in Australia, but now Olatunji A Shobande is interested in the algorithm, too, and I guess I cannot expect him to pay for my holiday in Scotland as well :-) ... so, since this is RG let's make the algorithm open source:
There are two main points:
1. Being a visual thinker myself, I often advise my students to redraw the schematics given in exercises in such a way that the height of the electrical potential is reflected by the height in the drawing, i.e. by the y value.
So, first the electrical potentials of the wires are determined or at least upper and lower limits estimated. With sources, this is straightforward; with transistors, we can apply the polarities in normal operation. With resistors in series, one can apply the voltage divider rule. Finally, in cases where only voltage ranges apply the mean value between upper and lower limit is taken. In the example, we get:
p0 = 0V, p1 = 3.7 V, p2 = 12.6 V, p3 = 8.1 V, p4 = 8.1 V, p6 = 7.5 V, p7 = 12 V, p8 = 12 V, p9 = 7.5 V, p10 = 7.5 V, p11 = 15 V
Then y values are assigned to the wires, in the order of increasing potential, the step width being determined by the maximum height of the drawn components. I chose a step width of 2, resulting in:
y0 = 0, y1 = 2, y2 = 18, y3 = 10, y4 = 12, y6 = 4, y7 = 14, y8 = 16, y9 = 6, y10 = 8, y11 = 20
This gives 11 horizontal rails with sufficient distance between adjacent rails for the placement of elements.
The extension in x dimension has to be larger than the number of components (transistors counting 2).
2. Usually, the signal path will start on the left side, and will run through the active components to the right. Therefore, placing starts with the active components. The first transistor placed is simply the first one in the netlist. This placing is triggered by a function which just scans all transistors not yet placed in the netlist. But the placing itself is done by a recursive function which determines if there are further transistors connected directly to the current transistor, and if so, where the largest number of transistors is. Then the function calls itself for one of these transistors.
In order to avoid backtracking to already completely harvested wires, one function parameter is the number of the wire connecting the current transistor to the next, and another parameter is a number which is compared to the number of adjacent transistors at the current wire. During the first call it is set to 2, during further calls it is incremented by 1. If this number equals the number of transistors then the current wire is abandoned.
In the example, the series is: Q1, Q2, Q6, Q5, Q3, Q4
Each transistor is placed on increasing x values, at the moment with step width 2. However, a considerably larger step width might be preferable in conjunction with the final x compression (see below).
If there were further clusters of transistors, the placing would continue with Q7.
The recursive function described above could be supplemented by another with the ability to "look" beyond one resistor or capacitor. In this way, the signal path could be traced even if the active components are coupled by passive ones. (end of point 2)
The placing of the passive components is done in such a way as to avoid increasing the length of wires if possible. The placing of the sources is done last.
Two steps remain which are not yet completely implemented: x compression (decreasing the distances in x dimension) and y compression (placing several rails on the same y value if they are non-overlapping).
Three linked lists are employed: A 1D one for the components and a second 1D one for the wires. The third one is 2D for the placement.
Placing and routing is not my field of expertise, so I don't know how my solution compares to existing ones. Anyway, this was a nice exercise.
  • asked a question related to Algorithm Design
Question
5 answers
Say, for argument's sake, I have two or more images with different degrees of blurring.
Is there an algorithm that can (faithfully) reconstruct the underlying image based on the blurred images?
Best regards and thanks
Relevant answer
Answer
  • asked a question related to Algorithm Design
Question
5 answers
Some companies sell our private information (consumption habits, interests, health, etc.). There must be a way to protect yourself against this. An algorithm could simulate data in order to confuse companies. This may have an unwanted impact.
Relevant answer
Answer
Perhaps you can visit:
  • asked a question related to Algorithm Design
Question
4 answers
I am working on a regression task where I am trying to predict future values of a stock/resource. At the moment, my model uses a large set of lags as input features and I want to use feature selection (FS) to select only the important lags needed by the model. However, most FS algorithms seem to be based on classification models so I am wondering whether I can create a 'proxy' classifier which uses the same input data as my regression model but whose outputs are now discretized versions of the regression outputs (i.e. simplest case 0=increase in stock, 1=decrease). Would the selected features from this proxy model serve as 'good' features for the regression model or should I only use FS algorithms designed for regression? I would be most grateful for any suggestions, particularly if they are referenced from previous research papers on the topic.
Relevant answer
Answer
You do not need to change the output variable values from continuous to categories (e.g. 0s or 1s). Feature selection works in the regression setting too; the only difference is the performance measure. For example, in a classification setting, prediction accuracy, precision, sensitivity, AUC, etc. can be used. For a continuous response, you can use R squared, Mean Squared Error, Root Mean Squared Error, etc.
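A hedged sketch along those lines, using scikit-learn's recursive feature elimination with a regression estimator and an RMSE-based cross-validation score to keep only the informative lags (the data and lag count below are placeholders):

import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                 # placeholder: 20 candidate lag features
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=300)   # only lags 0 and 3 matter

selector = RFECV(
    estimator=LinearRegression(),
    step=1,
    cv=5,
    scoring="neg_root_mean_squared_error",     # regression metric, no discretisation needed
)
selector.fit(X, y)
print(np.where(selector.support_)[0])          # indices of the selected lag features

Filter methods such as f_regression or mutual_info_regression (also in sklearn.feature_selection) are faster alternatives when the candidate lag set is large.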
  • asked a question related to Algorithm Design
Question
14 answers
How can one evaluate a new approach to hiding (steganographing) a message in a cover image, other than with MSE and PSNR?
Relevant answer
Answer
The proposed algorithm has binary codes and pixels inside an image
  • asked a question related to Algorithm Design
Question
3 answers
Dear friend,
These days I'm trying to finish my Ph.D. in electrical engineering (control engineering). I'm specialized in Extended Kalman filter applications, fault detection, neural networks, digital transportation, digital twins and machine learning, respectively. It is necessary to say that my thesis is about Industry 4.0 in pipeline performance management (software). I'm enthusiastic about joining a team for my postdoc, and I am striving to work on cutting-edge science topics.
Would you please help me find a team for my postdoc, a way to study abroad, or another way to join this kind of program?
Relevant answer
Answer
A postdoc is for those whose PhD is weak.
I suggest looking for some entrepreneurship opportunities in your domains of interest.
  • asked a question related to Algorithm Design
Question
10 answers
What are the characteristics of small objects, and how can one design an algorithm based on those characteristics? To my knowledge, feature fusion and context learning are usually used to improve object detection. However, it is hard to explain why they improve the detection results. Are there some algorithms designed just for small object detection?
Relevant answer
Answer
you could refer to this paper
  • asked a question related to Algorithm Design
Question
5 answers
It's no longer a surprise to realize a wide gap between advances in academic research and practicality in Industry. This discussion is about exploring this gap for a particular domain, which is Time-Series Forecast. The topic has had great many research advances in recent years, since researchers have identified promises offered by Deep Learning (DL) architectures for this domain. Thus, as evident in recent research gatherings, researchers are racing to perfect the DL architectures for taking over the time-series forecast problems. Nevertheless, the average industry practitioner remains reliant on traditional statistical methods for understandable reasons. Probably the biggest reason of all is the ease of interpretation (i.e. interpretability) offered by traditional methods, but many other reasons are valid as well, such as: ease of training, deployment, robustness, etc. The question is: If we were to reinvent a machine learning solution solely for industrial applicability, considering the current and future industry needs, then what attributes should this solution possess? Interpretability, Manipulability, Robustness, Self-maintainability, Inferability, online-updatability, something else?
Relevant answer
Answer
From my personal experience, Deep Learning (DL) offers NO substantial improvement in time series forecasting accuracy vs. traditional methods, such as Holt-Winters, exponential and double exponential smoothing with or without trends, Holt-Winters with seasonality and trends, autoregressive integrated moving average (ARIMA), digital filtering, etc.
Regardless of the concrete technique (including deep learning and recursive neural networks in AI), all time series forecasting methods are based on the use of the historical past data to extrapolate them into the future. The fundamental assumption of any forecasting technique (implicit or explicit) is that time series represents a stable pattern that can be identified and then extended into the future. If a pattern of the past data-points is not statistically stable, then no meaningful future prediction (forecasting) is possible regardless of the sophistication of the forecasting technique (DL or not).
It seems reasonable to assume that too ‘old’ data-points do not practically affect (correlated to) the most recent data-points, let alone the future data. The data-points that are strongly correlated to the newer ones can be used for making the forecast. The data-points that are weakly correlated to the newer ones (or not correlated at all) should not be included for forecasting; otherwise, the forecast will likely be skewed. Thus, the use of a large number of the available past data-points worth of several years of observations is not needed for making a meaningful forecast; in fact, the large number of the past data-points can be detrimental to producing a reasonably accurate forecast regardless of the used technique (deep learning or not). The problem of the optimal number of the past data points (training set size) of a time series should be the primary focus for industrial applicability of the forecasting rather than a forecasting technique (DL or not).
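To make the "only the recent, still-correlated points matter" argument concrete, one simple diagnostic is the sample autocorrelation of the series: keep roughly as much history as the largest lag whose autocorrelation is still clearly above the white-noise band. A rough sketch; the series, maximum lag and threshold are placeholders, and this is only a heuristic, not a substitute for rolling-origin evaluation of several window sizes:

import numpy as np

def sample_acf(x, max_lag):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    # acf[k] = correlation between the series and itself shifted by k steps
    return np.array([np.dot(x[:-k or None], x[k:]) / var for k in range(max_lag + 1)])

t = np.arange(500)
series = np.sin(2 * np.pi * t / 12) + np.random.default_rng(1).normal(scale=0.3, size=500)

acf = sample_acf(series, max_lag=60)
noise_band = 2 / np.sqrt(len(series))               # rough 95 % band for white noise
useful_lags = np.where(np.abs(acf[1:]) > noise_band)[0] + 1
window = useful_lags.max() if useful_lags.size else 1
print(f"history worth keeping: about {window} past points")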
  • asked a question related to Algorithm Design
Question
4 answers
I am doing a study on temperature compensation for fruit dry matter prediction using NIRS spectra. As I don't know much about MATLAB and mostly perform my multivariate regression using the Unscrambler software, I am looking for a simplified version of the external parameter orthogonalization (EPO) algorithm.
Relevant answer
Answer
I'm using the MATLAB PLS toolbox for analyzing my data and I have a problem using EPO preprocessing for PLS-DA.
When I use EPO I get an error saying the toolbox cannot perform cross-validation. Can anyone help me with this situation?
  • asked a question related to Algorithm Design
Question
5 answers
What is the easiest software that can be used to design a medical algorithm flowchart?
Can you recommend any free software adaptable to be used with Macbook?
Relevant answer
Answer
I think PowerPoint or Word work well too.
  • asked a question related to Algorithm Design
Question
4 answers
Hello,
I am interested in using Landsat 5-8 images to map snow and ice cover. I am trying to construct a time series showing how late into the year snow and ice cover lasts. I noticed that for Landsat ARD tiles obtained from USGS Earth Explorer there is a Pixel Quality Assessment band that accompanies surface reflectance products and that this PQA raster includes bit designations for pixels where snow or ice are present (bits 80 and 144 for Landsat 4/5). After reading more I have gathered that this PQA product is generated using the Fmask algorithm which was developed primarily for generating cloud masks. However, I decided to employ these products to see how they perform when generating fractional snow cover rasters.
I noticed that for some images very late into the year (May and June) the Fmask algorithm did classify many pixels as snow or ice, although after generating RGB composites and using the thermal infrared band to look at temperature, I determined that there was no snow or ice cover present in the image although it did look like some clouds were present. After reading more of the literature I found out that the Fmask algorithm has a tendency to sometimes classify cloud pixels as snow or ice, but I could not find an explanation as to why this happens. Is there a particular cloud type that the algorithm classifies as snow or ice, or is it unpredictable? Is there a better algorithm that is designed specifically for generating maps of fractional snow cover?
Thanks for your help,
Best,
Ryan Lennon
Relevant answer
Answer
Clouds can be isolated using the different spectral signatures of cloud cover and ice cover.
  • asked a question related to Algorithm Design
Question
3 answers
By changing the resistive load value used for stand-alone operation to get better realization of Id,Iq tracking by the designed controller?
Relevant answer
Answer
Dear Muhammad Waseem,
I suggest you see the links and attached files on this topic.
A Systematic Method for Designing a PR Controller and Active ...
An Improved Current Control Strategy for a Grid-Connected Inverter ...
A Current-Control Strategy for Voltage-Source Inverters in Microgrids ...
Robust Nonlinear Controller Design for Three Phase Grid Connected ...
Best regards
  • asked a question related to Algorithm Design
Question
13 answers
The constraints include both linear and nonlinear constraints. The essential issue lies in how to deal with the nonlinear constraints.
It would be better if the algorithm could transform these nonlinear constraints into equivalent linear ones.
Relevant answer
  • asked a question related to Algorithm Design
Question
4 answers
Hello
I am trying to design a nature-inspired (heuristic) algorithm for robotic path planning. But I wonder if there are references about designing heuristic algorithms, and about the general steps or general form for designing this sort of algorithm.
Thanks a lot in advance,
Relevant answer
Answer
There are several methods already published by several publishers.
  • asked a question related to Algorithm Design
Question
8 answers
We are facing a true tsunami of "novel" metaheuristics. Are all of them useful? I refer you to Sörensen, Intl. Trans. in Op. Res. (2013) 1-16, DOI: 10.1111/itor.12001.
Relevant answer
Answer
There are people who think so. The reason is that every week a new algorithm appears which seems to be a copy of another, only with a different name.
  • asked a question related to Algorithm Design
Question
6 answers
We have some research work related to algorithm design and analysis. Most computer science journals focus on current trends such as Machine Learning, AI, Robotics, Blockchain Technology, etc. Please suggest some journals that publish articles related to core algorithmic research.
Relevant answer
Answer
There are several journal for algorithms. Some of them are:
Algorithmica
The Computer Journal
Journal of Discrete Algorithms
ACM Journal of Experimental Algorithmics
ACM Transactions on Algorithms
SIAM Journal on Computing
ACM Computing Surveys
Algorithms
Close related:
Theoretical Computer Science
Information Systems
Information Sciences
ACM Transactions on Information Systems
Information Retrieval
International Journal on Foundations of Computer Science
Related:
IEEE Transactions on Information Theory
Information and Computation
Information Retrieval
Knowledge and Information Systems
Information Processing Letters
ACM Computing Surveys
Information Processing and Management
best regards,
rapa
  • asked a question related to Algorithm Design
Question
2 answers
I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X that is ideally also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in the answer, that would be great.
Relevant answer
Answer
I've found a description with an algorithm implemented in R for you; I hope it helps: http://www.joyofdata.de/blog/testing-linear-separability-linear-programming-r-glpk/
Another nice description, with an implementation in Python, that you can check as well: https://www.tarekatwan.com/index.php/2017/12/methods-for-testing-linear-separability-in-python/
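Following the idea in those links, here is a minimal linear-programming feasibility test in Python (scipy) as a sketch: two finite point sets are linearly separable iff there exist w, b with w·x + b >= 1 on one set and w·x + b <= -1 on the other. The tiny n below is only so the complement of X in {0,1}^n can be enumerated explicitly.

import itertools
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X_pos, n):
    # True iff X_pos (a set of 0/1 tuples) and its complement in {0,1}^n are separable.
    X_neg = [p for p in itertools.product((0, 1), repeat=n) if p not in X_pos]
    # Variables z = (w_1..w_n, b).  Constraints written as A_ub @ z <= b_ub:
    #   for x in X_pos:  -(w.x + b) <= -1
    #   for x in X_neg:    w.x + b  <= -1
    A_ub, b_ub = [], []
    for x in X_pos:
        A_ub.append([-float(v) for v in x] + [-1.0]); b_ub.append(-1.0)
    for x in X_neg:
        A_ub.append([float(v) for v in x] + [1.0]); b_ub.append(-1.0)
    res = linprog(c=np.zeros(n + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (n + 1))
    return res.success

print(linearly_separable({(1, 1)}, 2))          # AND is separable -> True
print(linearly_separable({(0, 1), (1, 0)}, 2))  # XOR is not       -> False

For large n the complement is exponentially big, so in practice you would only feed the LP the labelled samples you actually have rather than enumerating {0,1}^n.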
  • asked a question related to Algorithm Design
Question
4 answers
According to Shannon's classical information theory, H(X) >= H(f(X)) for the entropy H(X) of some random variable X and a deterministic function f. Is it possible that an observer who doesn't know the function (which produces statistically random-looking data) takes the output of the function and considers it random (full entropy)? Additionally, if one were to use entropy as a measure between two states, what would that 'measure' be between the statistically random output and the original pool of randomness?
Relevant answer
Answer
@Nader Chmait
I imagine the approximate entropy notion may apply as an exercise in calculation. I am not sure the entropy and the approximate entropy reflect the same meaning.
The approximate entropy seems closer to statistics, whereas Shannon entropy is closer to information content.
  • asked a question related to Algorithm Design
Question
2 answers
Is there any polynomial (reasonably efficient) reduction which makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?
Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties which are easy to prove for the binary case, and a reduction as asked above would help generalize those properties to an arbitrary alphabet.
Relevant answer
Answer
The DP algorithm for LCS finds the solution in O(n*m) time. This is polynomial time. The reduction you are looking for is the problem itself: if you can solve the problem over an alphabet of arbitrary size, you have in particular solved it for an alphabet of size 2.
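For reference, the standard O(n*m) dynamic program the answer refers to, which indeed works unchanged for any alphabet (bits, bytes, Unicode, arbitrary symbols):

def lcs_length(a, b):
    # Length of the longest common subsequence of sequences a and b in O(len(a)*len(b)) time,
    # using a single rolling row, so O(len(b)) extra space.
    n, m = len(a), len(b)
    dp = [0] * (m + 1)                     # dp[j] = LCS length of a[:i] and b[:j] for current i
    for i in range(1, n + 1):
        prev_diag = 0                      # holds dp[i-1][j-1]
        for j in range(1, m + 1):
            tmp = dp[j]                    # dp[i-1][j] before overwriting
            if a[i - 1] == b[j - 1]:
                dp[j] = prev_diag + 1
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prev_diag = tmp
    return dp[m]

print(lcs_length("ABCBDAB", "BDCABA"))       # 4
print(lcs_length([0, 1, 1, 0], [1, 0, 1]))   # works identically over bit-strings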
  • asked a question related to Algorithm Design
Question
33 answers
Hi, I have a little previous experience with genetic algorithms.
Currently I am trying to use a GA for scheduling, where I have some events and rooms which must be scheduled for these events; each event has different time requirements and there are constraints on the availability of rooms.
But I want to know whether there are any alternatives to GA, since GA is a somewhat random and slow process. So are there any other techniques which can replace GA?
Thanks in advance.
Relevant answer
Answer
There are tons of algorithms. Here is a list:
DE (Differential Evolution)
PSO (Particle Swarm Optimization)
ABC (Artificial Bee Colony)
CMA-ES (Covariance Matrix Adaptation Evolution Strategy), etc.
  • asked a question related to Algorithm Design
Question
3 answers
Is there a standard formula for computing the evaluation fitness in the Cuckoo Search algorithm? Or can any formula be used for evaluating the fitness?
Hope you can help me.
Relevant answer
Answer
The fitness function should be built based on the particular requirements of your problem; it is problem-specific, not fixed by the algorithm.
  • asked a question related to Algorithm Design
Question
1 answer
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that, in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an interaction designer from Switzerland. I am currently working on my Bachelor's thesis, which deals with serendipity and algorithms: how can algorithms work less rationally and help us come across more serendipitous encounters?
As an experiment, I created a small website which searches for Wikipedia entries that are associated with a certain term. The results are only slightly related and should offer serendipitous encounters.
Feel free to try it and comment your thoughts on it! I'm happy for any feedback.
Thank you
Michael
Relevant answer
Answer
nice thinking
  • asked a question related to Algorithm Design
Question
5 answers
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of realistic benchmark problems. Could you please guide me to access such benchmark problems?
Thank you very much in advance.
Relevant answer
Answer
Yes, I see. Transaction processing has also a constraint on response time. Optimization then takes more of its canonical form: Your goal "as fast as possible" (this refers to network traversal or RTT) becomes the objective, subject to constraints on benchmark performance which are typically transaction response time and an acceptable range of resource utilization, including link utilization. Actual benchmarks known to me that accomplish such optimization are company-proprietary (I have developed some but under non-disclosure contract). I do not know of very similar standard benchmarks but do have a look at TPC to see how close or how well a TPC standard benchmark would fit your application. I look forward to seeing other respondents who might know actual public-domain sample algorithms.
  • asked a question related to Algorithm Design
Question
4 answers
I am developing my algorithm with the Weka tools. I then need to compare my algorithm with basic collaborative filtering. Is there a nearest-neighbour implementation (also known as collaborative filtering) in Weka? Thanks.
Relevant answer
Answer
http://csci.viu.ca/~barskym/teaching/DM2012/labs/LAB5/WekaRecommender.java provides a preliminary example for Collaborative Filtering using k-nearest neighbours.
  • asked a question related to Algorithm Design
Question
8 answers
genetic algorithm
design optimization
wind turbine
Relevant answer
Answer
Sinan Salih Thank you so much, I will contact you shortly if I need more clarifications...
  • asked a question related to Algorithm Design
Question
4 answers
I would like to know if there is a tool that can be used to verify the effectiveness of algorithms designed for solving games?
Relevant answer
  • asked a question related to Algorithm Design
Question
2 answers
Any recommendations would be great.
Relevant answer
Answer
See this link, I think it is very useful.
And the following code is in MATLAB:
function D = PandO(Param, Enabled, V, I)
% MPPT controller based on the Perturb & Observe algorithm.
% D output = Duty cycle of the boost converter (value between 0 and 1)
%
% Enabled input = 1 to enable the MPPT controller
% V input = PV array terminal voltage (V)
% I input = PV array current (A)
%
% Param input:
Dinit  = Param(1); % Initial value for D output
Dmax   = Param(2); % Maximum value for D
Dmin   = Param(3); % Minimum value for D
deltaD = Param(4); % Increment value used to increase/decrease the duty cycle D
                   % (increasing D = decreasing Vref)
%
persistent Vold Pold Dold;
dataType = 'double';
if isempty(Vold)
    Vold = 0;
    Pold = 0;
    Dold = Dinit;
end
P  = V*I;        % present PV power
dV = V - Vold;   % change in voltage since the last step
dP = P - Pold;   % change in power since the last step
if dP ~= 0 & Enabled ~= 0
    if dP < 0
        if dV < 0
            D = Dold - deltaD;   % power fell while V fell: reverse the perturbation
        else
            D = Dold + deltaD;
        end
    else
        if dV < 0
            D = Dold + deltaD;   % power rose while V fell: keep perturbing in this direction
        else
            D = Dold - deltaD;
        end
    end
else
    D = Dold;                    % no power change, or controller disabled: hold D
end
if D >= Dmax | D <= Dmin
    D = Dold;                    % keep the duty cycle within [Dmin, Dmax]
end
Dold = D;
Vold = V;
Pold = P;
  • asked a question related to Algorithm Design
Question
5 answers
What are the current topics of research interest in the field of Graph Theory?
Relevant answer
Answer
Interesting suggestions posted so far. I would have to say this question is too vague. If you check any reputable journal to see what is being published, this is the best indication of what is currently of interest.
  • asked a question related to Algorithm Design
Question
3 answers
There is a need to automate several industrial tasks which may require a number of humans and robots to perform them. Some can be done only by robots. Say there is a task X. My output looks like: task X can be done if around 4 robots are assigned to it, or if 1 human and 1 robot are assigned to it. My input will describe the task, based on which an algorithm will compute the desired output.
So basically, could you share some research work where the resource requirements of industrial tasks are modeled mathematically or even empirically? Or could you point to some existing algorithms in the domain of industrial engineering (or otherwise) where researchers have tackled the problem of identifying how many resources need to be devoted to a task to finish it successfully?
Relevant answer
Answer
I follow answers
regards
  • asked a question related to Algorithm Design
Question
18 answers
The brute-force algorithm takes O(n^2) time; is there a faster exact algorithm?
Can you direct me to recent research on this subject, or on approximate farthest point (AFP) methods?
Relevant answer
Answer
Thanks Vincent, a hybrid of the proposed methods might be better, but there is only one way to find out: experiments!
  • asked a question related to Algorithm Design
Question
10 answers
Dear experts,
Hi. I appreciate any information (ideas, models, algorithms, references, etc.) you can provide to handle the following special problem or the more general problem mentioned in the title.
Consider a directed network G including a source s, a terminal t, and two paths (from s to t) with a common link e^c. Each link has a capacity c_e and a transit time t_e. This transit time depends on the amount of flow f_e (inflow, load, or outflow) traversing e, that is, t_e = g_e(f_e), where the function g_e determines the relation between t_e and f_e. Moreover, g_e is a positive, non-decreasing function. Hence, the greater the amount of flow in a link, the longer the transit time for this flow (and thus the lower the speed of the flow). Notice that, since links may have different capacities, they could have dissimilar functions g_e.
The question is:
How can we send D units of flow from s to t through these paths in the quickest time?
Notice: a few works have been done [D. K. Merchant, et al.; M. Carey; J. E. Aronson; H. S. Mahmassani, et al.; W. B. Powell, et al.; B. Ran, et al.; E. Köhler, et al.] on dynamic networks with flow-dependent transit times. Among them, the works by E. Köhler et al. are the most appealing (at least for me) as they introduce models and algorithms based on network flow theory. Although they have presented good models and algorithms ((2+epsilon)-approximation algorithms) for the associated problems, I am looking for better results.
Relevant answer
Answer
@B. Jourquin
Dear Prof. Jourquin, many thanks for proposing these amazing algorithms. Although they could be applied to this problem under some specific considerations, they cannot handle it in general; these algorithms are designed to handle equilibrium systems (ES), while the above problem is an optimal system (OS), which is not equal to ES in general.
  • asked a question related to Algorithm Design
Question
1 answer
I have read a paper titled "An enhanced honey bee mating optimization algorithm for the design of side sway steel frames". In this paper, an algorithm named "enhanced honey bee mating optimization" is presented. In this algorithm, a mutation operator is used to generate broods, but the mutation is done with two parents (queen and brood). Is mutation done with two parents? An image of this algorithm has been uploaded here: https://pasteboard.co/HojP5TR.jpg
Thanks
Relevant answer
Answer
I think that mutation is possible with just one parent?
  • asked a question related to Algorithm Design
Question
3 answers
Given a tree or a graph, are there automatic techniques or models that can assign weights to the nodes of the tree or graph, other than NN?
Relevant answer
Answer
In the case of Euclidean graphs you can use the Euclidean distance between nodes. You can also use random weights. Depending on the application you can use appropriate weights...
  • asked a question related to Algorithm Design
Question
5 answers
When the time deviation between the annotation of the expert and what is labeled by a peak detection algorithm exceeds the tolerance limits (see the picture), should we consider it as a False Positive or as a False Negative?
Relevant answer
Answer
You have both, one FN because there is no detection close enough around the annotation, and one FP because there is no annotation close enough around the detection.
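A small sketch of that convention: match each annotation to at most one detection within the tolerance window; unmatched annotations count as FN, unmatched detections as FP. The tolerance value and arrays below are placeholders.

import numpy as np

def score_detections(annotations, detections, tolerance):
    # Greedy one-to-one matching of detected peak times to expert annotations.
    annotations = np.sort(np.asarray(annotations, dtype=float))
    detections = np.sort(np.asarray(detections, dtype=float))
    used = np.zeros(len(detections), dtype=bool)
    tp = 0
    for t in annotations:
        dist = np.abs(detections - t)
        dist[used] = np.inf                   # each detection may be matched only once
        if len(detections) and dist.min() <= tolerance:
            used[dist.argmin()] = True
            tp += 1
    fn = len(annotations) - tp                # annotations with no detection close enough
    fp = int((~used).sum())                   # detections with no annotation close enough
    return tp, fp, fn

# A detection 80 ms away from its annotation, with a 50 ms tolerance:
print(score_detections([1.00], [1.08], tolerance=0.05))   # (0, 1, 1) -> one FN and one FP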
  • asked a question related to Algorithm Design
Question
2 answers
I am searching for an implementation of an algorithm that constructs three edge-independent trees from a 3-edge-connected graph. Any response will be appreciated. Thanks in advance.
Relevant answer
Answer
Dear Imran,
I suggest you see the links on this subject.
-EXPLORING ALGORITHMS FOR EFFECTIVE ... - Semantic Scholar
-Graph-Theoretic Concepts in Computer Science: 24th International ...
-Expanders via Random Spanning Trees
Best regards
  • asked a question related to Algorithm Design
Question
2 answers
I am working towards improving the CBRP algorithm. I am planning to test my algorithm using a simulator, but first I need to simulate CBRP itself.
So I need help with simulating CBRP using ns2 or ns3.
Relevant answer
Answer
Excuse me. Do you have a simulation file of CBRP?
  • asked a question related to Algorithm Design
Question
3 answers
Map matching related paper
Map matching algorithm
Relevant answer
Answer
Map-matching algorithms are algorithms created to match a vehicle's position to the correct link of the road network, i.e. to determine the location of the vehicle on the right link or path. This is achieved by integrating position data with a stochastic road network model.
See:
  • asked a question related to Algorithm Design
Question
3 answers
Hi everybody,
I conduct research in computer sciences and my research interests include:
Distributed Systems
Optimal Energy Management & Strategy
Electricity Market & Smart Grid
Algorithm Design for Integrated Multi-Energy Systems
I'm pursuing a Ph.D. position in line with my research interests and skills. Would you mind recommending relevant academic job vacancies to me?
P.S. You can find my CV attached.
Thank you in advance.
Best,
  • asked a question related to Algorithm Design
Question
6 answers
Let's say I have a linear hydrocarbon molecule, like decane. For my simulation I need to fill a 3D volume with decane molecules, oriented randomly. Of course, they should not intersect.
Could someone point me to an algorithm or some way to solve this?
Relevant answer
Answer
packmol will also do the job.
  • asked a question related to Algorithm Design
Question
14 answers
Given a set of m (>0) trucks and a set of k (>=0) parcels. Each parcel has a fixed payment for the trucks (it may be the same for all or it may differ). The problem is to pick up the maximum number of parcels such that the profit of each truck is maximized. There may be 0 to k parcels in the service region of a particular truck. Likewise, a parcel can be located in the service region of 0 to m trucks. There are certain constraints, as follows.
1. Each truck can pick up exactly one parcel.
2. A parcel can be loaded to a truck if and only if it is located within the service region of the truck.
The possible cases are as follows
Case 1. m > k
Case 2. m = k
Case 3. m < k
As far as I know, to prove a given problem H is NP-hard, we need to give a polynomial-time reduction from an NP-hard problem L to H. Therefore, I am in search of a similar NP-hard problem.
Kindly suggest some NP-hard problem which is similar to the stated problem. Thank you in advance.
Relevant answer
Answer
Let p_{ij} denote the profit gained if parcel j is loaded onto truck i. If the parcel cannot be loaded onto that particular truck, we just set p_{ij} to zero. It looks like we just need to solve the following 0-1 linear program:
maximise
\sum_{i=1}^m \sum_{j=1}^k p_{ij} x_{ij}
subject to
\sum_{i=1}^m x_{ij} \le 1 (for all j)
\sum_{j=1}^k x_{ij} \le 1 (for all i)
x_{ij} \in \{0,1\} (for all i and all j).
If that's right, the problem is very easy. As stated by Przemysław and Helmut, it is equivalent to the linear assignment problem, which in turn is equivalent to maximum-weight bipartite matching.
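In code, that 0-1 program can be solved exactly with a standard assignment solver; for example (the profit matrix values are placeholders, with 0 meaning "parcel not in this truck's service region", and profits are assumed nonnegative):

import numpy as np
from scipy.optimize import linear_sum_assignment

# p[i, j] = profit if truck i picks up parcel j (0 if parcel j is outside truck i's region).
p = np.array([
    [5.0, 0.0, 2.0],    # truck 0
    [0.0, 4.0, 3.0],    # truck 1
])

rows, cols = linear_sum_assignment(p, maximize=True)   # maximum-weight bipartite matching
for i, j in zip(rows, cols):
    if p[i, j] > 0:                                    # skip zero-profit "assignments"
        print(f"truck {i} -> parcel {j} (profit {p[i, j]})")
print("total profit:", p[rows, cols].sum())            # 9.0 for this toy matrix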
  • asked a question related to Algorithm Design
Question
2 answers
Suppose we want to find a collision for the hash function H. Suppose that for every x and t, there exists a polynomial-time algorithm that can find y such that H(x) and H(y) differ in t bits. Obviously, if the algorithm can solve the problem for t = 0, then the hash function H is not safe. Is this algorithm a threat to the hash function H in the case that it can solve the problem for t > 0?
  • asked a question related to Algorithm Design
Question
11 answers
Suppose c = E(m,k) is the bit representation of the encrypted value of the message m with the key k. Suppose that, for each t, there exists an x satisfying the relation
W(E(m,x) XOR E(m,k)) = t.
In this relation, W is the (Hamming) weight function, i.e. the number of 1s. Suppose there is an algorithm that can find such an x for each t > 1. For what values of t can this algorithm be considered a threat to the block cipher E?
Relevant answer
Answer
As I said, m and c are fixed. Also, I know both of them.
  • asked a question related to Algorithm Design
Question
3 answers
I have seen random distributions being generated using pre-programmed algorithms, so my question is: how can it be random if it is pre-decided? Random is anything that happens naturally and cannot be predicted; winning a lottery, for example, is quite a random event. But if randomness can be coded, can't we use that to, say, win a lottery? Just a thought!
Relevant answer
Answer
In a random distribution, the probability of occurrence of any one state has to be equal to the probability of occurrence of any other state, and there are supposed to be no consistently repetitive patterns. If it is pseudo-random, then those same rules apply up to a point, before the sequence cycles. That is a useful feature for things like cryptology, or perhaps even Monte Carlo simulations, where you want randomness but also need to be able to replicate the same random sequence.
You can demonstrate pseudo-randomness for yourself easily enough. For instance, using a PRNG, paint a sequence of points with coordinates (x, y) on your monitor, where the x and y values are simply taken from the sequence generated by the PRNG. You will see how the screen fills up with these dots, and that bare spots on the screen fill in in due course.
Then change the seed of the PRNG and run it again. A completely different sequence should be evident as the screen fills up with dots. What you want to witness is a uniform distribution of dots over a short period of time.
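A minimal sketch of that dot-painting demonstration with a seeded pseudo-random generator (matplotlib is assumed to be available); re-running with a different seed produces a completely different, but equally uniform-looking, pattern:

import numpy as np
import matplotlib.pyplot as plt

def paint_dots(seed, n_points=5000):
    rng = np.random.default_rng(seed)      # deterministic PRNG: same seed -> same sequence
    x = rng.random(n_points)               # uniform in [0, 1)
    y = rng.random(n_points)
    plt.scatter(x, y, s=1)
    plt.title(f"seed = {seed}")
    plt.show()

paint_dots(42)    # one pseudo-random pattern
paint_dots(7)     # a different pattern, same uniform coverage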
  • asked a question related to Algorithm Design
Question
3 answers
The application of the bee algorithm in engineering:
Neural Network Learning for Patterns
Scheduling work for manufacturing machines
Information categorization
Optimized design of mechanical components
Multiple optimization
Fuzzy logic controllers for athlete robots
Relevant answer
Answer
Hi,
Could I know exactly what test data is used for the bee algorithm that is fed through the neural network?
  • asked a question related to Algorithm Design
Question
4 answers
The application of the bee algorithm in engineering:
Neural Network Learning for Patterns
Scheduling work for manufacturing machines
Information categorization
Optimized design of mechanical components
Multiple optimization
Fuzzy logic controllers for athlete robots
Relevant answer
Dear Maysam, why don't you choose a neural network architecture to be optimised? Afterwards, given a chosen training set, pick a metric, like overall accuracy or the kappa index, and build your objective function. This is the key. You will then be able to use any metaheuristics-based optimization algorithm, including ABC, and compare their results.
  • asked a question related to Algorithm Design
Question
2 answers
I need to find a methodology for marking essay-type answers with a given marking scheme.
Relevant answer
Answer
To answer this question best, a few things need clarification, such as the grade level of the students, whether they are short or long essays, and the objective of the essay (is it a response essay to a given set of questions? is it a creative writing essay?). However, I will try my best to explain.
You generally design what is called a rubric. A scoring rubric is an attempt to communicate expectations of quality around a task. In many cases, scoring rubrics are used to delineate consistent criteria for grading.
Scoring rubrics include one or more dimensions on which performance is rated, definitions and examples that illustrate the attribute(s) being measured, and a rating scale for each dimension. Dimensions are generally referred to as criteria, the rating scale as levels, and definitions as descriptors.
The essay should have good grammar and show the right level of vocabulary. It should be organised, and the content should be appropriate and effective. When evaluating specific writing samples, you may also want to include other criteria for the essay based on material you have covered in class. You may choose to grade on the type of essay they have written and whether your students have followed the specific direction you gave. You may want to evaluate their use of information and whether they correctly presented the content material you taught. When you write your own rubric, you can evaluate anything you think is important when it comes to your students’ writing abilities.
The most straightforward evaluation uses a four-point scale for each of the criteria. Taking the criteria one at a time, articulate what your expectations are for an A paper, a B paper and so on. An A paper would be exemplary in all facets of the essay (content, grammar, structure, logic, sources etc). A B paper may show promise in these areas but be unclear in some aspects or lack originality and so on. It's generally up to you what you want the standards to be.
A university level rubric will generally look like the one attached to this answer.
I hope this helps!