# Algorithm Design - Science topic

Explore the latest questions and answers in Algorithm Design, and find Algorithm Design experts.
Questions related to Algorithm Design
• asked a question related to Algorithm Design
Question
Hello everyone,
Could you recommend courses, papers, books or websites about algorithms that support missing values?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
• asked a question related to Algorithm Design
Question
Hi everyone.
I am looking for optimization algorithms for the design of steel and concrete buildings.
Thanks.
Hi.
I'm looking for two or more algorithms to optimize and reduce the material in a concrete and steel structure while still complying with design codes.
• asked a question related to Algorithm Design
Question
Dear friends,
Would you please tell me where I can find a dynamic list (updated constantly) of new meta-heuristic algorithms?
Reza
For a humorous treatment, see the "bestiary":
• asked a question related to Algorithm Design
Question
Fragmentation trees are a critical step towards elucidation of compounds from mass spectra by enabling high confidence de novo identification of molecular formulas of unknown compounds (doi:10.1186/s13321-016-0116-8). Unfortunately, those algorithms suffer from long computation times making analysis of large datasets intractable. Recently, however, Fertin et al. (doi:10.1016/j.tcs.2020.11.021) highlighted additional properties of fragmentation graphs which could reduce computational times. Since their work is purely theoretical and lacks an implementation, I'm looking to partner up with someone to investigate and implement faster fragmentation tree algorithms. Could end up being a nice paper. Anyone interested?
• asked a question related to Algorithm Design
Question
Dear Researchers,
Do researchers/universities value students/researchers having published sequences to the OEIS?
Dear Marco Ripà ,
I have done both. I cited my work in the sequences and the sequences in my work.
• asked a question related to Algorithm Design
Question
Is it possible to use Artificial Intelligence (AI) in Biological and Medical Sciences to search databases for potential candidate drugs/genes to solve global problems without first performing animal studies?
We hypothesize that future generations of Artificial Intelligence (AI) technologies specifically adapted for biological sciences will help enable the reintegration of biology.
• asked a question related to Algorithm Design
Question
Good work by Juan, Electromagnetic aspects of ESPAR and digitally controllable scatterers with a look at low-complexity algorithm design
Dear Ohira-san,
Hopefully, after COVID-19, I can visit Ohira-san. The ESPAR antenna was really a new direction created by Ohira-san. I was also very lucky to have the opportunity to investigate the ESPAR algorithms. Thank you for the latest papers; I will surely study them.
• asked a question related to Algorithm Design
Question
Hello everyone,
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Mathematics helps AI scientists solve challenging, deeply abstract problems using traditional methods and techniques known for hundreds of years. Math is needed for AI because computers see the world differently from humans: where a human sees an image, a computer sees a 2D or 3D matrix. Linear algebra provides the tools for representing and processing data in that form.
Here you can find good sources for this:
• asked a question related to Algorithm Design
Question
I have in mind that logic is mainly about thinking, abstract thinking, particularly reasoning. Reasoning is a process structured in steps, where one conclusion is usually based on a previous one and can at the same time be the base, the foundation, of further conclusions. Despite the mostly intuitive character of the algorithm as a concept (even without taking Turing's and Markov's theories/machines into account), it has a step-by-step structure, and the steps are connected, one might even say logically connected (when the algorithm is correct). The difference is, of course, the formal character of the logical proof.
Dear Mirzakhmet, could you specify what concretely you have in mind when saying "to start from examples practicing deductive style…"?
When referring to studying deductive structures as a way of supporting the development of algorithmic skills, I have not necessarily considered any particular approach to studying them. A good introduction could probably be presenting problems for which a deductive approach is a recommended way of solving them.
• asked a question related to Algorithm Design
Question
What kind of software or method could be used to manage a huge number of papers, so that you can quickly find any paper you have read?
I suggest Mendeley or EndNote; of the two, Mendeley is the more user-friendly.
• asked a question related to Algorithm Design
Question
Recently, I have seen reviewers of many papers asking authors to provide the computational complexity of their proposed algorithms. I was wondering what the formal way to do that would be, especially for short papers where pages are limited. Please share your expertise regarding presenting the computational complexity of algorithms in short papers.
You have to explain the time and space complexity of your algorithm.
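In practice, a sentence or two per algorithm usually suffices: state the asymptotic bounds and the reason for them. As a hypothetical illustration (the function and its docstring are invented for this example), the kind of annotation reviewers expect looks like:

```python
def pair_sums(values):
    """Return all pairwise sums of a list.

    Time complexity: O(n^2) -- the nested loops visit every pair once.
    Space complexity: O(n^2) -- the result holds n*(n-1)/2 sums.
    """
    sums = []
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            sums.append(values[i] + values[j])
    return sums
```

In a short paper the same content compresses to one line of prose ("the algorithm runs in O(n^2) time and space due to the pairwise enumeration"), with a derivation sketch in an appendix if space allows.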
• asked a question related to Algorithm Design
Question
If I create a website for customers to sell services, what ranking algorithms can I use to rank the gigs on the first page? In other words, just as Google uses HITS and PageRank to rank web pages, what ranking algorithms can I employ for a services-based website?
Any assistance or refer to scientific papers that can help me?
Dear Mostafa Elsersy,
You can look at the following data:
What are algorithms used by websites?
An algorithm refers to the formula or process used to reach a certain outcome. In terms of online marketing, an algorithm is used by search engines to rank websites on search engine results pages (SERPs).
What is SEO algorithm?
As mentioned previously, the Google algorithm uses keywords as part of the ranking factors to determine search results. The best way to rank for your specific keywords is by doing SEO. SEO is essentially a way to tell search engines that a website or web page is about a particular topic.
How do you rank a website?
Follow these suggestions to improve your search engine optimization (SEO) and watch your website rise through the ranks to the top of search-engine results.
1. Publish Relevant, Authoritative Content. ...
2. Update Your Content Regularly. ...
3. Have a link-worthy site. ...
4. Use alt tags.
• asked a question related to Algorithm Design
Question
I need datasets for the Container Stacking Problem, for experimentation with different algorithms.
I have the same problem
• asked a question related to Algorithm Design
Question
What if I wanted to match 2 individuals based on their Likert scores in a survey?
Example: Imagine a 3 question dating app where each respondent chooses one from the following choices in response to 3 different statements about themselves:
Strongly Disagree - Disagree - Neutral - Agree - Strongly Agree
1) I like long walks on the beach.
2) I always know where I want to eat.
3) I will be 100% faithful.
Assuming both subjects answer truthfully and that the 3 questions have equal weights, what is their % match for each question and overall? How would I calculate it for the following answers?
Respondent A:
1) Strongly Agree
2) Strongly Disagree
3) Agree
Respondent B:
1) Agree
2) Strongly Agree
3) Strongly Disagree
What if I want to change the weight of each question?
Thanks!
Terry
Daniel Wright and Remal Al-Gounmeein, thanks for the links, I will take a look. We are matching respondents based on their 5-point Likert-scale responses to 16 partisan political positions.
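One simple way to turn such responses into a % match (an illustrative choice of metric, not the only defensible one) is to map the 5-point scale to 1-5 and score each question as 1 minus the normalized distance between the two answers, so identical answers score 100% and opposite extremes score 0%:

```python
SCALE = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly Agree": 5}

def match_percent(a, b, weights=None):
    """Per-question and overall % match between two Likert response lists.

    Each question scores 1 - |difference| / 4 (4 = scale range), times 100.
    `weights` lets you weight questions unequally; default is equal weights.
    """
    if weights is None:
        weights = [1.0] * len(a)
    per_q = [(1 - abs(SCALE[x] - SCALE[y]) / 4) * 100 for x, y in zip(a, b)]
    overall = sum(w * m for w, m in zip(weights, per_q)) / sum(weights)
    return per_q, overall

# The three-question example from the post:
per_q, overall = match_percent(
    ["Strongly Agree", "Strongly Disagree", "Agree"],
    ["Agree", "Strongly Agree", "Strongly Disagree"])
# per_q -> [75.0, 0.0, 25.0]; overall -> 33.33...
```

Changing a question's weight only changes the `weights` list; the per-question scores stay the same while the overall average shifts toward the heavily weighted items.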
• asked a question related to Algorithm Design
Question
Why Particle Swarm Optimization works better for this classification problem?
Can anyone give me any strong reasons behind it?
Arash Mazidi PSO is also used in various classification problems. I particularly use it for phishing-website datasets.
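For readers new to the method, the heart of PSO is the velocity update that blends a particle's inertia, its own best position, and the swarm's best. A minimal, generic sketch (not tied to any particular classification dataset; the parameter values are common defaults, not a recommendation):

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation sketch minimising f: R^dim -> R."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # classic update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# demo on the sphere function, whose minimum is 0 at the origin
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

In classification use, `f` would typically be a cross-validated error rate over the feature subset or hyperparameters encoded by the particle, which is why PSO is popular there: it needs only function evaluations, not gradients.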
• asked a question related to Algorithm Design
Question
Consider a sales invoice dataset like this:
Gender | Age | Street | Item 1 | Count 1 | Item 2 | Count 2 | ... | Item N | Count N | Total Price (Label)
Male | 22 | S1 | Milk | 2 | Bread | 5 | ... | - | - | 10$
Female | 10 | S2 | Coffee | 1 | - | - | ... | - | - | 1$
....
We want to predict the total price of an invoice based on the buyer's demographic information (such as gender, age, and job) and also the purchased items and their counts. Note that we assume we do not know each item's price, and prices change over time (so we will also have a date in our dataset).
The main question is how we can use this dataset, which contains transactional data (items) whose order is not important. For example, somebody who buys item1 and item2 is equivalent to somebody who buys item2 and item1, so the order of values across the item columns should make no difference.
This dataset contains both multivariate and transactional data. My question is: how can we predict the label more accurately?
Hi Dr Behzad Soleimani Neysiani . I agree with Dr Qamar Ul Islam .
• asked a question related to Algorithm Design
Question
Hello all, is there any MATLAB code for Adaptive Data Rate for LoRaWAN in terms of secure communication?
Adnan Majeed Thank you for your reply, but the problem still remains. I need an algorithm or MATLAB code related to ADR for LoRaWAN. If you have such details then please share that here. Thank you.
• asked a question related to Algorithm Design
Question
I was exploring federated learning algorithms and reading this paper (https://arxiv.org/pdf/1602.05629.pdf). In this paper, they average the weights received from clients, as shown in the attached file. In the marked part, they use the total number of samples and each individual client's sample count. As far as I have learned, federated learning was introduced to keep data on the client side to maintain privacy. Then how does the server know this information? I am confused about this concept.
Any clarification?
Thanks for your input. I have their codes. They have followed the same. I have attached their code below.
• asked a question related to Algorithm Design
Question
How can I calculate the number of computations and parameters of a customized deep learning algorithm designed with MATLAB?
• asked a question related to Algorithm Design
Question
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computers, biomedical, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as on some outlier that no one has thought of.
• asked a question related to Algorithm Design
Question
I have written a review paper. The work does not contain any experimental results, but it does contain a very elaborate review of antenna design techniques, starting from the analytical models of the 1980s, through the simulation-based designs of the 2000s, and finally the computer-aided algorithmic design of recent trends. The paper has been rejected by several journals. The reviewers suggest that I should include some experimental results implementing some of the reviewed works and compare them. This suggestion is not feasible because I am already running up against the page limit of most journals. Further, exact implementation of most of the reviewed techniques is not possible, because papers generally don't provide a complete picture of how the work was done; there are always some missing pieces.
Can anyone suggest to me any journal or upcoming book where I can publish it?
• asked a question related to Algorithm Design
Question
If I have 2 nodes, each with given (x, y) coordinates, can I calculate the distance between the nodes using an algorithm, for example Dijkstra's or A*?
Please read the paper "Energy and Wiener Index of Total Graph over Ring Zn". In this paper I calculated the distance between two nodes.
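A point worth separating: if only the two endpoints and their coordinates are involved, no graph-search algorithm is needed; the straight-line distance is just the Euclidean formula. Dijkstra's algorithm or A* matter only when travel is restricted to the edges of a weighted graph (where A* can in fact use this Euclidean distance as an admissible heuristic):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two nodes given as (x, y) tuples."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

d = euclidean((0, 0), (3, 4))  # -> 5.0
```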
• asked a question related to Algorithm Design
Question
Hello!
I will place a vehicle at node A and then use Dijkstra's algorithm to find the shortest route. For example, when A travels to B, I would like a timer that shows me how long it takes to go from A to B. How can I implement a timer in Java?
I mean, in which class? Wang Ting Dong
• asked a question related to Algorithm Design
Question
When creating & optimizing mathematical models with multivariate sensor data (i.e. 'X' matrices) to predict properties of interest (i.e. dependent variable or 'Y'), many strategies are recursively employed to reach "suitably relevant" model performance which include ::
>> preprocessing (e.g. scaling, derivatives...)
>> variable selection (e.g. penalties, optimization, distance metrics) with respect to RMSE or objective criteria
>> calibrant sampling (e.g. confidence intervals, clustering, latent space projection, optimization..)
Typically & contextually, for calibrant sampling, a top-down approach is utilized, i.e., from a set of 'N' calibrants, subsets of calibrants may be added or removed depending on the "requirement" or model performance. The assumption here is that a large number of datapoints or calibrants are available to choose from (collected a priori).
Philosophically & technically, how does the bottom-up pathfinding approach for calibrant sampling or "searching for ideal calibrants" in a design space, manifest itself? This is particularly relevant in chemical & biological domains, where experimental sampling is constrained.
E.g., Given smaller set of calibrants, how does one robustly approach the addition of new calibrants in silico to the calibrant-space to make more "suitable" models? (simulated datapoints can then be collected experimentally for addition to calibrant-space post modelling for next iteration of modelling).
:: Flow example ::
N calibrants -> build & compare models -> model iteration 1 -> addition of new calibrants (N+1) -> build & compare models -> model iteration 2 -> so on.... ->acceptable performance ~ acceptable experimental datapoints collectable -> acceptable model performance
Dear Sindhuraj Mukherjee,
I suggest you to see links and attached files on topic.
Best regards
• asked a question related to Algorithm Design
Question
We have an application which converts a netlist into a schematic drawing. It works fine, except the result is extremely ugly.
We would like to know if anyone can write a place-and-route algorithm that produces an aesthetically more pleasing picture, with a view to technical cooperation on the project,
(Our software is written in 'C')
If I understand correctly, this will be part of a commercial product, so I planned to exchange my C program for a four week holiday in Australia, but now Olatunji A Shobande is interested in the algorithm, too, and I guess I cannot expect him to pay for my holiday in Scotland as well :-) ... so, since this is RG let's make the algorithm open source:
There are two main points:
1. Being a visual thinker myself, I often advise my students to redraw the schematics given in exercises in such a way that the height of the electrical potential is reflected by the height in the drawing, i.e. by the y value.
So, first the electrical potentials of the wires are determined or at least upper and lower limits estimated. With sources, this is straightforward; with transistors, we can apply the polarities in normal operation. With resistors in series, one can apply the voltage divider rule. Finally, in cases where only voltage ranges apply the mean value between upper and lower limit is taken. In the example, we get:
p0 = 0V, p1 = 3.7 V, p2 = 12.6 V, p3 = 8.1 V, p4 = 8.1 V, p6 = 7.5 V, p7 = 12 V, p8 = 12 V, p9 = 7.5 V, p10 = 7.5 V, p11 = 15 V
Then y values are assigned to the wires, in the order of increasing potential, the step width being determined by the maximum height of the drawn components. I chose a step width of 2, resulting in:
y0 = 0, y1 = 2, y2 = 18, y3 = 10, y4 = 12, y6 = 4, y7 = 14, y8 = 16, y9 = 6, y10 = 8, y11 = 20
This gives 11 horizontal rails with sufficient distance between adjacent rails for the placement of elements.
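The rail assignment just described amounts to a stable sort by potential followed by numbering. A short sketch (shown in Python purely for illustration, although the project itself is in C) reproduces the y values listed above; the wire list is taken from the example potentials:

```python
# wires in index order with their estimated potentials from the example
wires = [("p0", 0.0), ("p1", 3.7), ("p2", 12.6), ("p3", 8.1), ("p4", 8.1),
         ("p6", 7.5), ("p7", 12.0), ("p8", 12.0), ("p9", 7.5),
         ("p10", 7.5), ("p11", 15.0)]

def assign_rails(wires, step=2):
    """Assign a y rail to each wire in order of increasing potential.

    A stable sort keeps the original wire order among equal potentials;
    adjacent rails are `step` apart (the maximum drawn component height).
    """
    ordered = sorted(wires, key=lambda wp: wp[1])
    return {name: i * step for i, (name, _) in enumerate(ordered)}

y = assign_rails(wires)
```

With step width 2 this yields exactly the assignment quoted above (y0 = 0, y1 = 2, ..., y11 = 20).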
The extension in x dimension has to be larger than the number of components (transistors counting 2).
2. Usually, the signal path will start on the left side, and will run through the active components to the right. Therefore, placing starts with the active components. The first transistor placed is simply the first one in the netlist. This placing is triggered by a function which just scans all transistors not yet placed in the netlist. But the placing itself is done by a recursive function which determines if there are further transistors connected directly to the current transistor, and if so, where the largest number of transistors is. Then the function calls itself for one of these transistors.
In order to avoid backtracking to already completely harvested wires, one function parameter is the number of the wire connecting the current transistor to the next, and another parameter is a number which is compared to the number of adjacent transistors at the current wire. During the first call it is set to 2, during further calls it is incremented by 1. If this number equals the number of transistors then the current wire is abandoned.
In the example, the series is: Q1, Q2, Q6, Q5, Q3, Q4
Each transistor is placed on increasing x values, at the moment with step width 2. However, a considerably larger step width might be preferable in conjunction with the final x compression (see below).
If there were further clusters of transistors, the placing would continue with Q7.
The recursive function described above could be supplemented by another with the ability to "look" beyond one resistor or capacitor. In this way, the signal path could be traced even if the active components are coupled by passive ones. (end of point 2)
The placing of the passive components is done in such a way as to avoid increasing the length of wires if possible. The placing of the sources is done last.
Two steps remain which are not yet completely implemented: x compression (decreasing the distances in x dimension) and y compression (placing several rails on the same y value if they are non-overlapping).
Three linked lists are employed: A 1D one for the components and a second 1D one for the wires. The third one is 2D for the placement.
Placing and routing is not my field of expertise, so I don't know how my solution compares to existing ones. Anyway, this was a nice exercise.
• asked a question related to Algorithm Design
Question
Say, for argument's sake, I have two or more images with different degrees of blurring.
Is there an algorithm that can (faithfully) reconstruct the underlying image based on the blurred images?
Best regards and thanks
• asked a question related to Algorithm Design
Question
Some companies sell our private information (consumption habits, interests, health, etc.). There must be a way to protect yourself against this. An algorithm could simulate data in order to confuse companies. This may have an unwanted impact.
Perhaps you can visit:
• asked a question related to Algorithm Design
Question
I am working on a regression task where I am trying to predict future values of a stock/resource. At the moment, my model uses a large set of lags as input features and I want to use feature selection (FS) to select only the important lags needed by the model. However, most FS algorithms seem to be based on classification models so I am wondering whether I can create a 'proxy' classifier which uses the same input data as my regression model but whose outputs are now discretized versions of the regression outputs (i.e. simplest case 0=increase in stock, 1=decrease). Would the selected features from this proxy model serve as 'good' features for the regression model or should I only use FS algorithms designed for regression? I would be most grateful for any suggestions, particularly if they are referenced from previous research papers on the topic.
You do not need to change output variable values from continuous to categories (e.g. 0s and 1s). Feature selection works in a regression setting too; the only difference is the performance measure. For example, in a classification setting you can use prediction accuracy, precision, sensitivity, AUC, etc. For a continuous response, you can use R squared, Mean Squared Error, Root Mean Squared Error, etc.
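As a concrete instance of such a regression-oriented filter, features (here, candidate lags) can be ranked by their absolute Pearson correlation with the continuous target. This is one simple univariate criterion among many; wrapper or embedded methods may rank differently:

```python
import math

def rank_features(X, y):
    """Rank features by |Pearson correlation| with a continuous target.

    X: list of samples, each a list of feature values; y: list of targets.
    Returns feature indices, most relevant first. Constant features get
    score 0 to avoid division by zero.
    """
    n, d = len(X), len(X[0])
    my = sum(y) / n
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        var_x = sum((a - mx) ** 2 for a in col)
        var_y = sum((b - my) ** 2 for b in y)
        r = cov / math.sqrt(var_x * var_y) if var_x and var_y else 0.0
        scores.append(abs(r))
    return sorted(range(d), key=lambda j: scores[j], reverse=True)
```

You would then keep the top-k indices as the selected lags and validate the choice with the regression metric (RMSE, R squared) on held-out data.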
• asked a question related to Algorithm Design
Question
How can one evaluate a new approach to steganography (hiding a message in a cover image), apart from MSE and PSNR?
The proposed algorithm embeds binary codes in the pixels of an image.
• asked a question related to Algorithm Design
Question
Dear friend,
These days I am finishing my Ph.D. in electrical engineering (control engineering). I specialize in extended Kalman filter applications, fault detection, neural networks, digital transformation, digital twins, and machine learning. My thesis is about Industry 4.0 in pipeline performance management (software). I am enthusiastic about joining a team for my postdoc and eager to work on cutting-edge topics.
Could you please help me find a team or program for my postdoc, whether abroad or through another route?
Check out CSIRO website :
• asked a question related to Algorithm Design
Question
What are the characteristics of small objects, and how does one design an algorithm based on those characteristics? To my knowledge, feature fusion and context learning are usually used to improve object detection. However, it is hard to explain why they improve the detection results. Are there some algorithms designed specifically for small-object detection?
you could refer to this paper
• asked a question related to Algorithm Design
Question
It's no longer a surprise to find a wide gap between advances in academic research and practice in industry. This discussion explores that gap for a particular domain: time-series forecasting. The topic has seen a great many research advances in recent years, since researchers have identified the promise of Deep Learning (DL) architectures for this domain. Thus, as evident in recent research gatherings, researchers are racing to perfect DL architectures for taking over time-series forecasting problems. Nevertheless, the average industry practitioner remains reliant on traditional statistical methods for understandable reasons. Probably the biggest reason of all is the ease of interpretation (i.e. interpretability) offered by traditional methods, but many other reasons are valid as well, such as ease of training, deployment, robustness, etc. The question is: if we were to reinvent a machine learning solution solely for industrial applicability, considering current and future industry needs, then what attributes should this solution possess? Interpretability, manipulability, robustness, self-maintainability, inferability, online updatability, something else?
From my personal experience, Deep Learning (DL) offers NO substantial improvement in time series forecasting accuracy vs. traditional methods, such as Holt-Winters, exponential and double exponential smoothing with or without trends, Holt-Winters with seasonality and trends, autoregressive integrated moving average (ARIMA), digital filtering, etc.
Regardless of the concrete technique (including deep learning and recursive neural networks in AI), all time series forecasting methods are based on the use of the historical past data to extrapolate them into the future. The fundamental assumption of any forecasting technique (implicit or explicit) is that time series represents a stable pattern that can be identified and then extended into the future. If a pattern of the past data-points is not statistically stable, then no meaningful future prediction (forecasting) is possible regardless of the sophistication of the forecasting technique (DL or not).
It seems reasonable to assume that too ‘old’ data-points do not practically affect (correlated to) the most recent data-points, let alone the future data. The data-points that are strongly correlated to the newer ones can be used for making the forecast. The data-points that are weakly correlated to the newer ones (or not correlated at all) should not be included for forecasting; otherwise, the forecast will likely be skewed. Thus, the use of a large number of the available past data-points worth of several years of observations is not needed for making a meaningful forecast; in fact, the large number of the past data-points can be detrimental to producing a reasonably accurate forecast regardless of the used technique (deep learning or not). The problem of the optimal number of the past data points (training set size) of a time series should be the primary focus for industrial applicability of the forecasting rather than a forecasting technique (DL or not).
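To make the traditional baseline concrete, simple exponential smoothing (one of the classical methods named above) fits in a few lines; it also embodies the point about old data, since observations far in the past receive geometrically decaying weight. The smoothing constant below is an arbitrary illustrative choice:

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing.

    Each new level is a weighted blend of the latest observation and the
    previous level, so the weight on an observation k steps back decays
    like (1 - alpha)**k. Returns the one-step-ahead forecast (the final
    smoothed level).
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```

A larger `alpha` discounts old data faster, which is the knob that corresponds to the "optimal number of past data points" question raised above.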
• asked a question related to Algorithm Design
Question
I am doing a study on temperature compensation for fruit dry-matter prediction using NIRS spectra. As I don't know much about MATLAB and mostly perform my multivariate regression in the Unscrambler software, I am looking for a simplified version of the external parameter orthogonalization (EPO) algorithm.
I'm using the MATLAB PLS toolbox to analyze my data, and I have a problem using EPO preprocessing for PLS-DA.
When I use EPO, I get an error saying the toolbox cannot perform cross-validation. Can anyone help me with this situation?
• asked a question related to Algorithm Design
Question
What is the easiest software that can be used to design a medical algorithm flowchart?
Can you recommend any free software adaptable to be used with Macbook?
Hello,
You can use Microsoft Visio for drawing flowcharts, engineering designs, diagrams, etc.
Best wishes,
Saber
• asked a question related to Algorithm Design
Question
Hello,
I am interested in using Landsat 5-8 images to map snow and ice cover. I am trying to construct a time series showing how late into the year snow and ice cover lasts. I noticed that for Landsat ARD tiles obtained from USGS Earth Explorer there is a Pixel Quality Assessment band that accompanies surface reflectance products and that this PQA raster includes bit designations for pixels where snow or ice are present (bits 80 and 144 for Landsat 4/5). After reading more I have gathered that this PQA product is generated using the Fmask algorithm which was developed primarily for generating cloud masks. However, I decided to employ these products to see how they perform when generating fractional snow cover rasters.
I noticed that for some images very late into the year (May and June) the Fmask algorithm did classify many pixels as snow or ice, although after generating RGB composites and using the thermal infrared band to look at temperature, I determined that there was no snow or ice cover present in the image although it did look like some clouds were present. After reading more of the literature I found out that the Fmask algorithm has a tendency to sometimes classify cloud pixels as snow or ice, but I could not find an explanation as to why this happens. Is there a particular cloud type that the algorithm classifies as snow or ice, or is it unpredictable? Is there a better algorithm that is designed specifically for generating maps of fractional snow cover?
Thanks for your help,
Best,
Ryan Lennon
Clouds can be isolated by using the different spectral signatures of cloud cover and ice cover.
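As a concrete alternative to relying on Fmask's snow bit, the Normalized Difference Snow Index (NDSI), computed from the green and shortwave-infrared bands (bands 2 and 5 for Landsat 5 TM, bands 3 and 6 for Landsat 8 OLI), exploits exactly this signature difference. The 0.4 threshold below is a common rule of thumb, not a fixed standard, and the band reflectances here are made-up illustrative values:

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index for one pixel.

    Snow reflects strongly in the visible (green) band but absorbs in the
    shortwave infrared, so NDSI is high for snow; most clouds stay bright
    in both bands and score lower. Values above roughly 0.4 are commonly
    flagged as snow. Returns 0.0 when both bands are zero.
    """
    denom = green + swir
    return (green - swir) / denom if denom else 0.0

snow_like = ndsi(0.8, 0.1)    # high: visible-bright, SWIR-dark
cloud_like = ndsi(0.7, 0.6)   # low: bright in both bands
```

Some cloud types (notably cold, optically thick clouds) can still push past the threshold, which is consistent with the Fmask confusion you observed, so NDSI is usually combined with a thermal or cloud-mask test.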
• asked a question related to Algorithm Design
Question
Can changing the resistive-load value used for stand-alone operation give a better realization of Id/Iq tracking by the designed controller?
I suggest you to see links and attached files on topic.
A Systematic Method for Designing a PR Controller and Active ...
An Improved Current Control Strategy for a Grid-Connected Inverter ...
A Current-Control Strategy for Voltage-Source Inverters in Microgrids ...
Robust Nonlinear Controller Design for Three Phase Grid Connected ...
Best regards
• asked a question related to Algorithm Design
Question
The constraints include both linear and nonlinear constraints. The essential issue lies in how to deal with the nonlinear constraints.
It would be better if this algorithm can transform these nonlinear constraints into the equivalent linear ones.
• asked a question related to Algorithm Design
Question
Hello
I am trying to design a nature-inspired (heuristic) algorithm for robotic path planning. But I wonder if there are references about designing heuristic algorithms, and about the general steps or general form for designing this sort of algorithm.
There are several methods already published with several publishers
• asked a question related to Algorithm Design
Question
We are facing a true tsunami of "novel" metaheuristics. Are all of them useful? I refer you to Sorensen, Intl. Trans. in Op. Res. (2013) 1-16, DOI: 10.1111/itor.12001.
There are people who think so. The reason is that every week a new algorithm appears which seems to be a copy of another one, only with a different name.
• asked a question related to Algorithm Design
Question
We have some research works related to algorithm design and analysis. Most computer science journals focus on current trends such as machine learning, AI, robotics, blockchain technology, etc. Please suggest some journals that publish articles related to core algorithmic research.
There are several journal for algorithms. Some of them are:
Algorithmica
The Computer Journal
Journal of Discrete Algorithms
ACM Journal of Experimental Algorithmics
ACM Transactions on Algorithms
SIAM Journal on Computing
ACM Computing Surveys
Algorithms
Close related:
Theoretical Computer Science
Information Systems
Information Sciences
ACM Transactions on Information Systems
Information Retrieval
International Journal on Foundations of Computer Science
Related:
IEEE Transactions on Information Theory
Information and Computation
Knowledge and Information Systems
Information Processing Letters
Information Processing and Management
best regards,
rapa
• asked a question related to Algorithm Design
Question
I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X that is ideally also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in the answer, that would be great.
I've found a description with an algorithm implemented in R for you; I hope it helps: http://www.joyofdata.de/blog/testing-linear-separability-linear-programming-r-glpk/
Another nice description with implementation in python you can check as well: https://www.tarekatwan.com/index.php/2017/12/methods-for-testing-linear-separability-in-python/
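Both linked write-ups reduce the test to a linear program: the two sets are separable iff the strict-margin system w·x + b ≥ 1 (for x in X) and w·y + b ≤ -1 (for y in the complement) is feasible. A sketch of that approach using SciPy (an assumption about tooling on my part, not the implementation in the links):

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, Y):
    """Test whether finite point sets X and Y are linearly separable.

    Solves the LP feasibility problem: find w, b with w.x + b >= 1 for
    x in X and w.y + b <= -1 for y in Y. For finite sets this strict
    margin is achievable iff a separating hyperplane exists.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = X.shape[1]
    # variables z = (w_1..w_n, b); constraints written as A_ub @ z <= b_ub
    A = np.vstack([np.hstack([-X, -np.ones((len(X), 1))]),
                   np.hstack([Y, np.ones((len(Y), 1))])])
    b = -np.ones(len(X) + len(Y))
    res = linprog(np.zeros(n + 1), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * (n + 1))
    return res.status == 0  # 0 = feasible solution found, 2 = infeasible

# An AND-like split of {0,1}^2 is separable; the XOR split is not.
sep = linearly_separable([[1, 1]], [[0, 0], [0, 1], [1, 0]])   # True
xor = linearly_separable([[0, 0], [1, 1]], [[0, 1], [1, 0]])   # False
```

The LP has n + 1 variables and 2^n constraints for a full split of {0, 1}^n, so this is practical only for moderate n; for implementation, the zero objective makes any feasible point an answer, and the solver's infeasibility status is the "not separable" signal.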
• asked a question related to Algorithm Design
Question
According to classical Shannon information theory, H(f(X)) ≤ H(X) for the entropy H(X) of a random variable X and any deterministic function f. Is it possible that an observer who doesn't know the function (which produces statistically random data) can take the output of the function and consider it random (entropy)? Additionally, if one were to use entropy as a measure between two states, what would that 'measure' be between the statistically random output and the original pool of randomness?
I imagine the approximate entropy notion may apply as a practical calculation, though I am not sure that entropy and approximate entropy carry the same meaning.
Approximate entropy seems closer to a statistic, while Shannon entropy is closer to information content.
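As a concrete illustration, the empirical Shannon entropy and the effect of a deterministic function can be computed directly (a small sketch; the distributions are made up):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Empirical Shannon entropy in bits: H = -sum p_i * log2(p_i)."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fair 4-sided die has H = log2(4) = 2 bits.
x = [0, 1, 2, 3] * 250
print(shannon_entropy(x))     # 2.0

# A deterministic function of X cannot increase entropy: H(f(X)) <= H(X).
fx = [v % 2 for v in x]       # f collapses 4 outcomes into 2
print(shannon_entropy(fx))    # 1.0
```

An observer who sees only `fx` and does not know `f` measures 1 bit per symbol, regardless of how the data was produced.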
• asked a question related to Algorithm Design
Question
Is there any polynomial-time (reasonably efficient) reduction that makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?
Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties that are easy to prove for the binary case, and a reduction as asked above could help generalize those properties to an arbitrary alphabet.
The DP algorithm for LCS finds the solution in O(n*m) time, which is already polynomial. Since that algorithm does not depend on the alphabet, solving the problem over an alphabet of arbitrary size immediately covers the binary case as well.
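For reference, the alphabet-agnostic DP just mentioned takes only a few lines; note that the same code handles arbitrary strings and bit-strings unchanged:

```python
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic program for the longest common
    subsequence; it never inspects the alphabet, only symbol equality."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))   # 4  ("GTAB")
print(lcs_length("10110", "01101"))      # 4  (same code on bit-strings)
```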
• asked a question related to Algorithm Design
Question
Is there a standard formula in getting the evaluation fitness in the Cuckoo Search algorithm? Or can any formula be used in evaluating the fitness?
Hope you can help me.
This should be built based on the particular requirements of your problem.
• asked a question related to Algorithm Design
Question
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an Interaction Designer from Switzerland. I am currently working on my Bachelor's thesis, which deals with serendipity and algorithms: how can algorithms work less rationally and help us come across more serendipitous encounters?
As an experiment, I created a small website, which searches for Wikipedia entries that are associated with a certain term. The results are only slightly related and should offer serendipitous encounters.
Feel free to try it and comment your thoughts on it! I'm happy for any feedback.
Thank you
Michael
nice thinking
• asked a question related to Algorithm Design
Question
Hi, I have a little prior experience with genetic algorithms.
Currently I am trying to use a GA for scheduling: I have some events and rooms that must be scheduled for these events; each event has different time requirements, and there are constraints on the availability of rooms.
But I want to know whether there are any alternatives to GA, since GA is a somewhat random and slow process. Are there other techniques that can replace it?
You may try Backtracking Search Optimization Algorithm.
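Besides backtracking search optimization, simulated annealing is a common GA alternative for timetabling. A minimal sketch under an assumed toy cost model (the events, rooms, time slots, and `allowed` availability sets below are all illustrative, not from the question):

```python
import math
import random

def conflicts(assign, allowed_rooms):
    """Cost = number of double-booked (room, slot) pairs plus events
    placed in rooms they are not allowed to use."""
    cost, seen = 0, set()
    for event, (room, slot) in enumerate(assign):
        if (room, slot) in seen:
            cost += 1
        seen.add((room, slot))
        if room not in allowed_rooms[event]:
            cost += 1
    return cost

def anneal(n_events, n_rooms, n_slots, allowed_rooms, steps=20000, seed=1):
    rng = random.Random(seed)
    assign = [(rng.randrange(n_rooms), rng.randrange(n_slots))
              for _ in range(n_events)]
    cost = conflicts(assign, allowed_rooms)
    best, best_cost = list(assign), cost
    temp = 2.0
    for _ in range(steps):
        e = rng.randrange(n_events)
        old = assign[e]
        assign[e] = (rng.randrange(n_rooms), rng.randrange(n_slots))
        new_cost = conflicts(assign, allowed_rooms)
        # Always accept improvements; accept worsenings with a
        # temperature-dependent probability, cooling geometrically.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = list(assign), cost
        else:
            assign[e] = old
        temp *= 0.9995
    return best, best_cost

allowed = [{0, 1}, {1}, {0}, {0, 1, 2}]   # rooms each event may use (toy data)
schedule, cost = anneal(4, 3, 2, allowed)
print(cost)   # a conflict-free schedule (cost 0) exists for this instance
```

Unlike a GA, this keeps a single candidate solution and needs no crossover operator, which often makes it easier to tune for hard-constraint scheduling.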
• asked a question related to Algorithm Design
Question
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of realistic benchmark problems. Could you please guide me to access such benchmark problems?
Thank you very much in advance.
Yes, I see. Transaction processing has also a constraint on response time. Optimization then takes more of its canonical form: Your goal "as fast as possible" (this refers to network traversal or RTT) becomes the objective, subject to constraints on benchmark performance which are typically transaction response time and an acceptable range of resource utilization, including link utilization. Actual benchmarks known to me that accomplish such optimization are company-proprietary (I have developed some but under non-disclosure contract). I do not know of very similar standard benchmarks but do have a look at TPC to see how close or how well a TPC standard benchmark would fit your application. I look forward to seeing other respondents who might know actual public-domain sample algorithms.
• asked a question related to Algorithm Design
Question
I am developing my algorithm with the Weka tools and need to compare it with basic collaborative filtering. Is there a Nearest Neighbour (also known as Collaborative Filtering) implementation in Weka? Thanks.
http://csci.viu.ca/~barskym/teaching/DM2012/labs/LAB5/WekaRecommender.java provides a preliminary example for Collaborative Filtering using k-nearest neighbours.
• asked a question related to Algorithm Design
Question
genetic algorithm
design optimization
wind turbine
Sinan Salih Thank you so much, I will contact you shortly if I need more clarification.
• asked a question related to Algorithm Design
Question
I would like to know if there is a tool that can be used to verify the effectiveness of algorithms designed for solving games.
• asked a question related to Algorithm Design
Question
Any recomendations would be great.
See this link, I think it is very useful.
The following code is in MATLAB:
function D = PandO(Param, Enabled, V, I)
% MPPT controller based on the Perturb & Observe algorithm.
% D output = Duty cycle of the boost converter (value between 0 and 1)
%
% Enabled input = 1 to enable the MPPT controller
% V input = PV array terminal voltage (V)
% I input = PV array current (A)
%
% Param input:
Dinit = Param(1); %Initial value for D output
Dmax = Param(2); %Maximum value for D
Dmin = Param(3); %Minimum value for D
deltaD = Param(4); %Increment value used to increase/decrease the duty cycle D
% ( increasing D = decreasing Vref )
%
persistent Vold Pold Dold;
dataType = 'double';
if isempty(Vold)
Vold=0;
Pold=0;
Dold=Dinit;
end
P= V*I;
dV= V - Vold;
dP= P - Pold;
if dP ~= 0 && Enabled ~= 0
    if dP < 0
        if dV < 0
            D = Dold - deltaD;   % power fell while voltage fell: reverse direction
        else
            D = Dold + deltaD;
        end
    else
        if dV < 0
            D = Dold + deltaD;   % power rose while voltage fell: keep direction
        else
            D = Dold - deltaD;
        end
    end
else
    D = Dold;
end
if D >= Dmax || D <= Dmin
    D = Dold;                    % clamp: keep previous duty cycle at the limits
end
Dold=D;
Vold=V;
Pold=P;
• asked a question related to Algorithm Design
Question
What are the current topics of research interest in the field of Graph Theory?
Interesting suggestions posted so far. I would have to say this question is too vague. If you check any reputable journal to see what is being published, this is the best indication of what is currently of interest.
• asked a question related to Algorithm Design
Question
There is a need to automate several industrial tasks which may require a number of humans and robots to perform it. Some can be done only using robots. Say there is a task X. My output looks like: Task X can be done if around 4 robots are assigned to it or 1 human and 1 robot are assigned to it. My input will describe the task based on which an algorithm will compute the desired output.
So basically, could you share some research where resource requirements for industrial tasks are modeled mathematically or even empirically? Or could you point to existing algorithms, in industrial engineering or elsewhere, where researchers have tackled the problem of determining how many resources need to be allocated to a task to finish it successfully?
regards
• asked a question related to Algorithm Design
Question
The brute-force algorithm takes O(n^2) time; is there a faster exact algorithm?
Can you direct me to recent research on this subject, or on approximate farthest-point (AFP) queries?
Thanks Vincent, a hybrid of the proposed methods might be better, but there is only one way to find out: experiments!
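In the planar case, the exact farthest pair (the diameter) can be found faster than O(n^2) because both endpoints must lie on the convex hull. A sketch: an O(n log n) monotone-chain hull followed by an O(h^2) scan over the h hull vertices (rotating calipers would reduce the scan to O(h), at the cost of a trickier implementation):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def diameter_sq(points):
    """Squared diameter: the farthest pair both lie on the convex hull,
    so only hull vertices need comparing (O(n log n + h^2))."""
    hull = convex_hull(points)
    return max((p[0]-q[0])**2 + (p[1]-q[1])**2
               for i, p in enumerate(hull) for q in hull[i:])

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]  # rectangle + interior
print(diameter_sq(pts))   # 25 (the 3-4-5 diagonal)
```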
• asked a question related to Algorithm Design
Question
Dear experts,
Hi. I appreciate any information (ideas, models, algorithms, references, etc.) you can provide to handle the following special problem or the more general problem mentioned in the title.
Consider a directed network G including a source s, a terminal t, and two paths (from s to t) with a common link e^c. Each link has a capacity c_e and a transit time t_e. This transit time depends on the amount of flow f_e (inflow, load, or outflow) traversing e, that is, t_e = g_e(f_e), where the function g_e determines the relation between t_e and f_e. Moreover, g_e is a positive, non-decreasing function. Hence, the greater the flow on a link, the longer the transit time (and thus the lower the speed of the flow). Notice that, since links may have different capacities, they may have dissimilar functions g_e.
The question is that:
How could we send D units of flow from s to t through these paths in the quickest time?
Notice: A few works [D. K. Merchant, et al.; M. Carey; J. E. Aronson; H. S. Mahmassani, et al.; W. B. Powell, et al.; B. Ran, et al.; E Köhler, et al.] are relevant to dynamic networks with flow-dependent transit times. Among them, the works by E Köhler, et al. are more appealing (at least for me), as they introduce models and algorithms based on network flow theory. Although they have presented good models and algorithms ((2+epsilon)-approximate algorithms) for the associated problems, I am looking for better results.
• asked a question related to Algorithm Design
Question
I have read a paper titled "An enhanced honey bee mating optimization algorithm for the design of side sway steel frames". This paper presents an algorithm named "enhanced honey bee mating optimization", in which a mutation operator is used to generate broods, but the mutation is performed with two parents (queen and brood). Is mutation done with two parents? An image of this algorithm has been uploaded here: https://pasteboard.co/HojP5TR.jpg
Thanks
I think that mutation is normally done with a single parent.
• asked a question related to Algorithm Design
Question
Given a tree or a graph, are there automatic techniques or models that can assign weights to the nodes, other than neural networks?
In the case of Euclidean graphs you can use the Euclidean distance between nodes. You can also use random weights. Depending on the application you can use appropriate weights...
• asked a question related to Algorithm Design
Question
When the time deviation between the annotation of the expert and what is labeled by a peak detection algorithm exceeds the tolerance limits (see the picture), should we consider it as a False Positive or as a False Negative?
You have both: one FN because there is no detection close enough to the annotation, and one FP because there is no annotation close enough to the detection.
• asked a question related to Algorithm Design
Question
I am searching for an implementation of an algorithm that constructs three edge-independent trees from a 3-edge-connected graph. Any response will be appreciated. Thanks in advance.
Dear Imran,
I suggest you look at the following links on this subject:
-EXPLORING ALGORITHMS FOR EFFECTIVE ... - Semantic Scholar
-Graph-Theoretic Concepts in Computer Science: 24th International ...
-Expanders via Random Spanning Trees
Best regards
• asked a question related to Algorithm Design
Question
I am working towards improving the CBRP algorithm. I plan to test my algorithm using a simulator, but first I need to simulate CBRP itself.
So I need help simulating CBRP in ns-2 or ns-3.
Excuse me, do you have a simulation file for CBRP?
• asked a question related to Algorithm Design
Question
Map matching related paper
Map matching algorithm
Map-matching algorithms match a vehicle's position measurements to the correct link of the road network, in order to determine the vehicle's location on the right link or path. This is achieved by integrating position data with a stochastic road network model.
See:
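As a minimal geometric illustration of the matching step, snapping a position fix to the nearest link reduces to point-to-segment projection (the link coordinates below are made up; real map matchers add heading, speed, and topological constraints on top of this):

```python
import math

def project_to_segment(p, a, b):
    """Project GPS point p onto road segment a-b; return (distance, point).
    The projection parameter t is clamped to [0, 1] so the matched
    position stays on the segment."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    t = 0.0 if seg_len_sq == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    qx, qy = ax + t * dx, ay + t * dy
    return math.hypot(px - qx, py - qy), (qx, qy)

def match_point(p, links):
    """Geometric map matching: snap p to the nearest link."""
    return min((project_to_segment(p, a, b) + (idx,)
                for idx, (a, b) in enumerate(links)),
               key=lambda r: r[0])

links = [((0, 0), (2, 0)), ((2, 0), (2, 2))]   # two toy road links
dist, point, link = match_point((1, 0.5), links)
print(link, dist)   # 0 0.5: matched to the first link, half a unit away
```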
• asked a question related to Algorithm Design
Question
Hi everybody,
I conduct research in computer science and my research interests include:
Distributed Systems
Optimal Energy Management & Strategy
Electricity Market & Smart Grid
Algorithm Design for Integrated Multi-Energy Systems
I'm pursuing a Ph.D. position matching my research interests and skills. Could you recommend relevant academic job vacancies to me?
p.s. you can find attached my CV.
Best,
• asked a question related to Algorithm Design
Question
Let's say I have a linear hydrocarbon molecule, like decane. For my simulation I need to fill a 3D volume with decane molecules, oriented randomly. Of course, they should not intersect.
Could someone point me to an algorithm or some other way to solve this?
packmol will also do the job.
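A simple (if inefficient compared to Packmol) approach is random sequential addition with rejection. A sketch using spheres as a stand-in for molecules (the radius, box size, and counts are arbitrary; a whole chain molecule would be several such spheres plus a random orientation):

```python
import random

def place_spheres(n, radius, box, max_tries=100000, seed=0):
    """Random sequential addition: insert n non-overlapping spheres of
    the given radius into a cubic box of side `box`, rejecting any trial
    position that overlaps an already-placed sphere."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        c = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        # Accept only if at least 2*radius away from every placed sphere.
        if all(sum((a - b) ** 2 for a, b in zip(c, p)) >= (2 * radius) ** 2
               for p in centers):
            centers.append(c)
    return centers

centers = place_spheres(n=50, radius=1.0, box=20.0)
print(len(centers))   # 50: this packing fraction (~2.6%) is dilute enough
```

Note that rejection sampling stalls at higher densities (the saturation limit of random sequential addition is well below close packing), which is exactly where tools like Packmol earn their keep.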
• asked a question related to Algorithm Design
Question
Given a set of m (>0) trucks and a set of k (>=0) parcels. Each parcel has a fixed payment for the trucks (which may be the same for all or different for each). The problem is to pick up the maximum number of parcels such that the profit of each truck is maximized. There may be 0 to k parcels in the service region of a particular truck. Likewise, a parcel can be located in the service region of 0 to m trucks. There are certain constraints, as follows.
1. Each truck can pick up exactly one parcel.
2. A parcel can be loaded to a truck if and only if it is located within the service region of the truck.
The possible cases are as follows
Case 1. m > k
Case 2. m = k
Case 3. m < k
As far as I know, to prove a given problem H is NP-hard, we need to give a polynomial-time reduction from an NP-hard problem L to H. Therefore, I am in search of a similar NP-hard problem.
Kindly suggest some NP-hard problem which is similar to the stated problem. Thank you in advance.
Let p_{ij} denote the profit gained if parcel j is loaded onto truck i. If the parcel cannot be loaded onto that particular truck, we just set p_{ij} to zero. It looks like we just need to solve the following 0-1 linear program:
maximise
\sum_{i=1}^m \sum_{j=1}^k p_{ij} x_{ij}
subject to
\sum_{i=1}^m x_{ij} \le 1 (for all j)
\sum_{j=1}^k x_{ij} \le 1 (for all i)
x_{ij} \in \{0,1\} (for all i and all j).
If that's right, the problem is very easy. As stated by Przemysław and Helmut, it is equivalent to the linear assignment problem, which in turn is equivalent to maximum-weight bipartite matching.
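For small instances, the 0-1 program above can be sanity-checked by brute force over permutations (the profit matrix below is made up; zero entries encode "parcel outside the truck's service region", as in the LP):

```python
from itertools import permutations

def best_assignment(profit):
    """Exhaustive solution of the truck-parcel assignment problem for
    tiny instances; the LP / bipartite-matching formulation scales,
    this is only a correctness check.

    profit[i][j] = profit of loading parcel j onto truck i."""
    m, k = len(profit), len(profit[0])
    size = max(m, k)
    # Pad to a square matrix with zero-profit dummy rows/columns so every
    # permutation is a valid (partial) assignment.
    padded = [[profit[i][j] if i < m and j < k else 0
               for j in range(size)] for i in range(size)]
    best = max(permutations(range(size)),
               key=lambda perm: sum(padded[i][perm[i]] for i in range(size)))
    value = sum(padded[i][best[i]] for i in range(size))
    return value, best

profit = [[5, 0, 2],    # truck 0: parcel 1 is out of its service region
          [3, 4, 0]]    # truck 1: parcel 2 is out of its service region
value, perm = best_assignment(profit)
print(value)   # 9: truck 0 -> parcel 0, truck 1 -> parcel 1
```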
• asked a question related to Algorithm Design
Question
Suppose we want to find a collision for the hash function H. Suppose for every x and t, there exists a polynomial-time algorithm that can find y such that H(x) and H(y) differ in t bits. Obviously, if the algorithm can solve the problem for t = 0, then the hash function H is not safe. Is this algorithm a threat to the hash function H in the case that it can solve the problem only for t > 0?
• asked a question related to Algorithm Design
Question
Let c = E(m,k) be the bit representation of the encrypted value of the message m with the key k. Suppose, for each t, there exists an x satisfying the following relation:
W(E(m,x) XOR E(m,k)) = t.
Here, W is the weight function, which counts the number of 1 bits. Suppose there is an algorithm that can find such an x for each t > 1. For what values of t can this algorithm be considered a threat to the block cipher E?
As I said, m and c are fixed. Also, I know both of them.
• asked a question related to Algorithm Design
Question
I have seen random distributions being generated by pre-programmed algorithms, so my question is: how can the output be random if it is pre-decided? Randomness is anything that happens naturally and cannot be predicted; winning a lottery, for example, is a quite random event. But if these distributions can be coded, can't we use that to, say, win a lottery? Just a thought!
In a random distribution, the probability of occurrence of any one state has to be equal to the probability of occurrence of any other state, and there are supposed to be no consistently repetitive patterns. If pseudo-random, those same rules apply up to a point, before the sequence cycles. This is a useful feature for things like cryptology, or perhaps even Monte Carlo simulations, where you want randomness but also need to be able to replicate the same random sequence.
You can demonstrate pseudo-randomness for yourself easily enough. For instance, using a PRNG, paint a sequence of points with coordinates (x,y) on your monitor, where the x and y values are taken from the sequence generated by the PRNG. You will see the screen fill up with these dots, and bare spots on the screen will fill in over time.
Then change the seed of the PRNG and run it again. A completely different sequence should be evident as the screen fills up with dots. What you want to witness is a uniform distribution of dots over a short period of time.
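The screen-painting experiment can be condensed into a few lines that count points per grid cell instead of plotting them (the grid size and point count are arbitrary choices):

```python
import random

def scatter_counts(seed, n_points=20000, grid=10):
    """Drop n pseudo-random points into a grid x grid lattice of cells
    and count hits per cell, mimicking the paint-the-monitor experiment."""
    rng = random.Random(seed)
    counts = [[0] * grid for _ in range(grid)]
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        counts[int(y * grid)][int(x * grid)] += 1
    return counts

a = scatter_counts(seed=42)
b = scatter_counts(seed=42)
c = scatter_counts(seed=7)

print(a == b)   # True: the same seed replays the identical "random" sequence
print(a == c)   # False: a different seed gives a different scatter
# Each cell expects n_points / grid^2 = 200 hits; for a good PRNG every
# cell should land close to that expectation.
print(all(100 < cell < 300 for row in a for cell in row))
```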
• asked a question related to Algorithm Design
Question
The application of the bee algorithm in engineering:
Neural Network Learning for Patterns
Scheduling work for manufacturing machines
Information categorization
Optimized design of mechanical components
Multiple optimization
Fuzzy logic controllers for athlete robots
Hi,
Is there a standard test dataset for evaluating the bee algorithm when it is used to train a neural network?
• asked a question related to Algorithm Design
Question
The application of the bee algorithm in engineering:
Neural Network Learning for Patterns
Scheduling work for manufacturing machines
Information categorization
Optimized design of mechanical components
Multiple optimization
Fuzzy logic controllers for athlete robots
Dear Maysam, why don't you choose a neural network architecture to be optimised? Afterwards, given a determined training set, choose a metric, like overall accuracy or kappa index, and build your objective function. This is the key. You will then be able to use any metaheuristics-based optimization algorithm, including ABC, and compare their results.
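Building on that, a minimal artificial bee colony sketch (the employed and onlooker phases are merged for brevity, and all parameter values are illustrative); the objective `f` would be your network's training error, here replaced by a toy sphere function:

```python
import random

def abc_minimise(f, dim, bounds, n_food=10, limit=20, iters=200, seed=3):
    """Minimal artificial bee colony sketch: bees perturb food sources
    one coordinate at a time toward another random source, keeping
    improvements greedily; a source unimproved for `limit` trials is
    abandoned and re-seeded by a scout."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best = min(fits)
    for _ in range(iters):
        for i in range(n_food):
            j = rng.randrange(dim)
            k = rng.choice([x for x in range(n_food) if x != i])
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cand[j] = max(lo, min(hi, cand[j]))
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
            if trials[i] > limit:      # scout phase: abandon stale source
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0
        best = min(best, min(fits))
    return best

sphere = lambda x: sum(v * v for v in x)   # toy objective (global minimum 0)
print(abc_minimise(sphere, dim=3, bounds=(-5, 5)))  # typically far below the random-start value
```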
• asked a question related to Algorithm Design
Question
I need to find a methodology for mark essay type answers with a given marking scheme.
To answer this question well, a few things need clarification, such as the grade level of the students, whether they are short or long essays, and the objective of the essay (is it a response essay to a given set of questions? Is it a creative-writing essay?). However, I will try my best to explain.
You generally design what is called a rubric. A scoring rubric is an attempt to communicate expectations of quality around a task. In many cases, scoring rubrics are used to delineate consistent criteria for grading.
Scoring rubrics include one or more dimensions on which performance is rated, definitions and examples that illustrate the attribute(s) being measured, and a rating scale for each dimension. Dimensions are generally referred to as criteria, the rating scale as levels, and definitions as descriptors.
The essay should have good grammar and show the right level of vocabulary. It should be organised, and the content should be appropriate and effective. When evaluating specific writing samples, you may also want to include other criteria for the essay based on material you have covered in class. You may choose to grade on the type of essay they have written and whether your students have followed the specific direction you gave. You may want to evaluate their use of information and whether they correctly presented the content material you taught. When you write your own rubric, you can evaluate anything you think is important when it comes to your students’ writing abilities.
The most straightforward evaluation uses a four-point scale for each of the criteria. Taking the criteria one at a time, articulate what your expectations are for an A paper, a B paper and so on. An A paper would be exemplary in all facets of the essay (content, grammar, structure, logic, sources etc). A B paper may show promise in these areas but be unclear in some aspects or lack originality and so on. It's generally up to you what you want the standards to be.
A university level rubric will generally look like the one attached to this answer.
I hope this helps!
• asked a question related to Algorithm Design
Question
Hi, does anyone know if there exists an algorithm capable of changing a written line while maintaining, say, its number of syllables, accents, etc., but changing the words into non-words, thus removing the meaning of the line?
Thank you
Also, is your goal to somehow encode the text? In that case, take care that the process (the algorithm you are looking for) works both ways: changing the text and then being able to return to the original.
Good luck,
VP
• asked a question related to Algorithm Design
Question
The structure of this problem is similar (not equal) to other problems that admit simple solutions. Maybe the colleagues of this community could help me identify a solution to this problem.
Dear Sir
1. An Introduction to Computational Fluid Dynamics: The Finite Volume Method, 2/e, by Versteeg
• asked a question related to Algorithm Design
Question
Problem Statement:
A television manufacturer has decided to produce and sell two different types of TV sets, small and big. They estimate that the small set will give a profit of $300 per unit and the big one a profit of $500 per unit. They have one production plant with four departments: molding, soldering, assembly and inspection. Each TV set is processed in sequence through these four departments. Each department has a limited capacity given by a maximum number of working hours per year. We assume that they can sell all the TV sets they are able to produce and the market is not a restriction.
Objective function:
•Max Z= 300x1+500x2……….(1)
Subject to the following constraints:
•x1+5x2<=4000………………(2) [Molding capacity]
•x1+x2<=1200………………..(3) [Soldering capacity]
•2x1+x2<=2000………………(4) [Assembly capacity]
•2x1+5x2<=5000……………..(5) [Inspection capacity]
x1,x2>=0……………………..(6) [Non-negativity constraints]
The dual is given by:
Min Z= 4000u1+1200u2+2000u3+5000u4
s.t. u1+u2+2u3+2u4>=300 [Small TV Sets]
5u1+u2+u3+5u4>=500 [Big TV Sets]
u1,u2,u3,u4>=0
My question is: What is the physical interpretation of the dual variables or u? Is it cost/hr or profit/hr for a specific operation?
If I consider cost, then the objective function makes sense, but in the constraints, cost cannot be greater than profit. Again, if I consider profit, then the constraints make sense, but profit in the objective function cannot be minimized. I have gone through many books, but I am still confused. If anybody can kindly help me, I shall be highly obliged.
Dear Parag
1. The dual values or shadow prices produced by solving the dual equation set that you display are values for the constraints, and they have the same units as the objective function. However, you don't need to solve the dual, since once you solve the primal problem you also get the dual values.
2. For a certain constraint, say ‘Inspection capacity’ its dual value indicates how much will the objective function increase when this capacity is incremented in one unit.
Let’s see:
The solution to your primal problem is x1 = 500 units and x2 = 700 units, with Z = $500,000. The shadow prices are nonzero only for the two binding constraints, 'Molding capacity' and 'Soldering capacity', with values u1 = 50 and u2 = 250. The other restrictions have zero shadow prices and thus play no role at this optimum, so these are the only two restrictions or criteria you are interested in. Consequently, if you increase the soldering capacity by one unit, from 1200 to 1201, the objective will increase to Z + u2 = 500,000 + 250 = $500,250.
If you further increase the molding capacity by one unit, from 4000 to 4001, the objective will increase to Z + u1 = 500,250 + 50 = $500,300.
I did this one constraint at a time, but you can increase both capacities simultaneously and get the same result.
Now, how much can you increment the soldering capacity? Indefinitely?
No, you can increment its capacity up to a certain limit, and the same for the assembly capacity. These limits are given automatically by the LP software.
Within these limits the increase of Z is constant and equal to the respective shadow price or to their sum, that is, it is a straight line with the same slope for different increments, and you can also have it graphically displayed.
If you increase either of these criteria past its respective limit, then one of them (or maybe both) will no longer be binding and another constraint will take its place.
The increase of the objective function will then change: it becomes a piecewise-linear curve instead of a single straight line, because the slope changes when a new shadow price, corresponding to the newly binding constraint, takes over.
3. Why can’t the objective function be minimized?
Of course it can!
If instead of profits you put costs as Z coefficients, they will be minimized.
The same concept as explained above holds. The only difference is that a unit increase of the corresponding capacity will decrease the Z value, which is what you want. If instead of increasing one unit you decrease one unit, the same holds, and Z will increase.
4. I don't understand your last paragraph. If you consider costs, your objective function must be minimized, not maximized as you imply with your 'OK'.
In addition, in your example, constraints are not related with profits and costs; they are only related to the available existing capacities.
Suppose that you want to minimize your cost. You can do that, however, because in your example all criteria call for maximization the minimum cost of the solutions polygon will be 0, that is, no solution. You can see this graphically and very easily since you have only two products. You will see that the minimum point of the polygon that the Z can tangent is the origin of coordinates, or zero.
In your example you can change this by switching one of the criteria to a minimum, say 'Inspection capacity', and you also have to reduce the availability, for instance to 3000 from 5000. In that case the optimal solution is to produce 600 units of x2 and zero units of x1.
5. I don't know which LP software you are using, but I recommend Solver, an Excel add-in. There you will see that you can choose to maximise or minimize the objective function, or even set it equal to a value.
I believe that you are confused regarding criteria and objectives.
You can have a criterion calling for maximization of your costs and another for maximization for your profits, but in this case the objective function must call for maximization or minimization for something different, for instance ‘Inspection capacity’.
If my explanation does not satisfy you for whatever reason, just write me on RG or privately at nolmunier@yahoo.com. I will be very happy to help if I can.
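Since the primal has only two variables, its optimum, binding constraints and shadow prices can all be checked by enumerating the vertices of the feasible polygon. A sketch (only practical for tiny LPs; it is meant as a verification of the discussion above, not a general solver):

```python
from itertools import combinations

# Constraints of the primal written as a*x1 + b*x2 <= c,
# including the non-negativity constraints.
constraints = [
    (1, 5, 4000),    # molding
    (1, 1, 1200),    # soldering
    (2, 1, 2000),    # assembly
    (2, 5, 5000),    # inspection
    (-1, 0, 0),      # x1 >= 0
    (0, -1, 0),      # x2 >= 0
]

def solve_2var_lp(c1, c2, cons):
    """Enumerate all vertices (pairwise constraint intersections) of a
    two-variable LP and return the feasible one maximising c1*x1 + c2*x2."""
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                      # parallel constraints: no vertex
        x = (r1 * b2 - r2 * b1) / det     # Cramer's rule
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + 1e-9 for a, b, r in cons):
            z = c1 * x + c2 * y
            if best is None or z > best[0]:
                best = (z, x, y)
    return best

z, x1, x2 = solve_2var_lp(300, 500, constraints)
print(x1, x2, z)
```

Plugging the optimal vertex back into the constraint list shows which capacities are binding; the dual values then follow from complementary slackness.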
• asked a question related to Algorithm Design
Question
I want to do soft-binning for a distance-based histogram. For example a point X lies somewhere between the center points of 4 bins, A,B,C,D. These four points A,B,C,D can be treated as the vertices of a square.
I would like to assign each of these vertex points a certain weight (between 0 and 1) based on how close the point X is to that vertex, with two conditions:
* Sum of vertex weights should be 1.
* If X lies on the line between two vertices, only those two vertices should get the share in weightage.
Currently I calculate the weightage as:
WA =(1 - distance of A from X/sum of distances of all vertices from X )/3
Define a coordinate system so that a unit square has vertices:
P_1 = (0,0), P_2 = (1,0), P_3 = (1,1), and P_4 = (0,1).
For i = 1,2,3,4, let W_i be the weight assigned at P_i.
Let the point X have coordinates (x,y), so 0 <=  x,y <= 1.
Assume that the W_i are quadratic forms in x and y.  That is, let
W_i(x,y) = a_i*x^2 + b_i*x*y + c_i*y^2 + d_i*x + e_i*y + f_i, for i=1,2,3,4.
We determine the coefficients a_i, b_i, c_i, d_i, e_i, and f_i  such that
(1) W_1(x,0) = 1-x,  W_1(0,y) = 1-y,
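Completing this construction (the remaining boundary conditions for W_2, W_3, W_4 are analogous) leads to the standard bilinear interpolation weights. A sketch verifying the two required conditions:

```python
def bilinear_weights(x, y):
    """Soft-binning weights for a point (x, y) in the unit square with
    bin centres P1=(0,0), P2=(1,0), P3=(1,1), P4=(0,1). These are the
    standard bilinear interpolation weights: they satisfy the quadratic
    ansatz above, sum to 1, and a point on an edge of the square gets
    nonzero weight only from that edge's two vertices."""
    w1 = (1 - x) * (1 - y)
    w2 = x * (1 - y)
    w3 = x * y
    w4 = (1 - x) * y
    return w1, w2, w3, w4

print(sum(bilinear_weights(0.25, 0.5)))   # 1.0 (holds for any x, y)
print(bilinear_weights(0.3, 0.0))         # (0.7, 0.3, 0.0, 0.0): only P1, P2
```

Note W_1(x, 0) = 1-x and W_1(0, y) = 1-y, matching condition (1) above.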