# Algorithm Design - Science topic

Explore the latest questions and answers in Algorithm Design, and find Algorithm Design experts.

Questions related to Algorithm Design

Hello everyone,

Could you recommend courses, papers, books or websites about algorithms that support missing values?

Thank you for your attention and valuable support.

Regards,

Cecilia-Irene Loeza-Mejía

Hi everyone.

I am looking for help with optimization algorithm design for steel and concrete buildings.

Please help me

Thanks.

Dear friends,

Would you please tell me where I can find a dynamic list (updated constantly) of new meta-heuristic algorithms?

Thanks in advance for your guidance,

Reza

Fragmentation trees are a critical step towards elucidation of compounds from mass spectra, enabling high-confidence *de novo* identification of molecular formulas of unknown compounds (doi:10.1186/s13321-016-0116-8). Unfortunately, those algorithms suffer from long computation times, making analysis of large datasets intractable. Recently, however, Fertin et al. (doi:10.1016/j.tcs.2020.11.021) highlighted additional properties of fragmentation graphs which could reduce computation times. Since their work is purely theoretical and lacks an implementation, I'm looking to partner up with someone to investigate and implement faster fragmentation tree algorithms. Could end up being a nice paper. Anyone interested?

Dear Researchers,

Do researchers/universities value students/researchers having published sequences to the OEIS?

Is it possible to use Artificial Intelligence (AI) in Biological and Medical Sciences to search databases for potential candidate drugs/genes to solve global problems without first performing animal studies?

Good work by Juan: "Electromagnetic aspects of ESPAR and digitally controllable scatterers with a look at low-complexity algorithm design".

Hello everyone,

Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?

Thank you for your attention and valuable support.

Regards,

Cecilia-Irene Loeza-Mejía

I have in mind that logic is mainly about thinking, abstract thinking, particularly reasoning. Reasoning is a process structured in steps, where one conclusion is usually based on a previous one, and at the same time it can be the base, the foundation, of further conclusions. Despite the mostly intuitive character of the algorithm as a concept (even without taking Turing and Markov theories/machines into account), an algorithm has a step-by-step structure, and the steps are connected, one might even say logically connected (when the algorithm is correct). The difference is, of course, the formal character of the logical proof.

What kind of software, or what kind of method, could be used to manage a huge number of papers, so that you can quickly find any paper you have read?

Recently, I have seen that in many papers reviewers ask authors to provide computational complexities for the proposed algorithms. I was wondering what the formal way to do that would be, especially for short papers where pages are limited. Please share your expertise regarding presenting the computational complexity of algorithms in short papers.

Thanks in advance.

If I create a website for customers to sell services, which ranking algorithms can I use to rank the gigs on the first page? In other words, just as Google uses HITS and PageRank to rank webpages, which ranking algorithms can I employ for a services-based website?

Any assistance, or references to scientific papers, that can help me?

I need datasets for the Container Stacking Problem, for experimentation with different algorithms.

What if I wanted to match 2 individuals based on their Likert scores in a survey?

Example: Imagine a 3 question dating app where each respondent chooses one from the following choices in response to 3 different statements about themselves:

Strongly Disagree - Disagree - Neutral - Agree - Strongly Agree

1) I like long walks on the beach.

2) I always know where I want to eat.

3) I will be 100% faithful.

Assuming both subjects answer truthfully and that the 3 questions have equal weights, what is their % match for each question and overall? How would I calculate it for the following answers?

Example Answers:

Lucy's answers:

1) Strongly Agree

2) Strongly Disagree

3) Agree

Ricky's answers:

1) Agree

2) Strongly Agree

3) Strongly Disagree

What if I want to change the weight of each question?

Thanks!

Terry
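One simple way to score such a match (a sketch under assumed conventions, not a standard formula: answers encoded 0-4 and per-question similarity taken as one minus the normalized absolute difference) can be worked through in Python:

```python
# Sketch of a simple Likert-matching score (an assumed convention, not a standard):
# encode answers on a 0-4 scale and score each question as 1 - |a - b| / 4,
# then take a weighted average across questions.

SCALE = {"Strongly Disagree": 0, "Disagree": 1, "Neutral": 2,
         "Agree": 3, "Strongly Agree": 4}

def question_match(a, b):
    """Per-question match in [0, 1]: 1 = identical answers, 0 = opposite ends."""
    return 1 - abs(SCALE[a] - SCALE[b]) / 4

def overall_match(answers_a, answers_b, weights=None):
    """Weighted average of per-question matches (weights default to equal)."""
    n = len(answers_a)
    if weights is None:
        weights = [1 / n] * n
    total_w = sum(weights)
    return sum(w * question_match(a, b)
               for w, a, b in zip(weights, answers_a, answers_b)) / total_w

lucy = ["Strongly Agree", "Strongly Disagree", "Agree"]
ricky = ["Agree", "Strongly Agree", "Strongly Disagree"]

per_question = [question_match(a, b) for a, b in zip(lucy, ricky)]
print(per_question)               # [0.75, 0.0, 0.25]
print(overall_match(lucy, ricky)) # ~0.333 with equal weights
```

With equal weights, Lucy and Ricky match 75% on question 1, 0% on question 2, and 25% on question 3, for an overall match of about 33%; passing a `weights` list reweights the average, e.g. `weights=[1, 0, 0]` gives 75%.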

Why does Particle Swarm Optimization work better for this classification problem?

Can anyone give me any strong reasons behind it?

Thanks in advance.

Consider a sales record (invoice) like this:

| Gender | Age | Street | Item 1 | Count 1 | Item 2 | Count 2 | ... | Item N | Count N | Total Price (Label) |
|---|---|---|---|---|---|---|---|---|---|---|
| Male | 22 | S1 | Milk | 2 | Bread | 5 | ... | - | - | $10 |
| Female | 10 | S2 | Coffee | 1 | - | - | ... | - | - | $1 |

....

We want to predict the total price of a record based on the buyer's demographic information (gender, age, job, etc.) and also the items bought and their counts. It should be mentioned that we assume we don't know each item's price, and prices change over time (so we will also have a date field in our dataset).

Now the main question is how we can use this dataset, which contains transactional data (items) whose combination order is not important. For example, if somebody buys item1 and item2, that is equivalent to somebody else buying item2 and item1. So the values in our item columns should be insensitive to ordering.

This dataset contains both multivariate and transactional data. My question is: how can we predict the label more accurately?
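The order-invariance concern can be handled by a "bag of items" encoding: each record's item/count pairs are collapsed into a mapping from item to count, so column order disappears before any model sees the data. A minimal sketch (the vocabulary and helper names are illustrative assumptions, not from the original dataset):

```python
# "Bag of items" encoding: (Milk, 2), (Bread, 5) and (Bread, 5), (Milk, 2)
# yield identical feature vectors, making the representation order-invariant.

def encode_items(item_count_pairs, vocabulary):
    """Turn [(item, count), ...] into a fixed-length count vector."""
    bag = {}
    for item, count in item_count_pairs:
        bag[item] = bag.get(item, 0) + count
    return [bag.get(item, 0) for item in vocabulary]

vocab = ["Milk", "Bread", "Coffee"]
a = encode_items([("Milk", 2), ("Bread", 5)], vocab)
b = encode_items([("Bread", 5), ("Milk", 2)], vocab)
print(a, b)  # [2, 5, 0] [2, 5, 0] -- identical regardless of order
```

The resulting count vector can then be concatenated with the demographic and date features and fed to any standard regressor.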

Hello all, is there any MATLAB code for Adaptive Data Rate for LoRaWAN in terms of secure communication?

I was exploring federated learning algorithms and reading this paper (https://arxiv.org/pdf/1602.05629.pdf). In this paper, they average the weights received from clients, as in the attached file. In the marked part, they use the total number of client samples and each individual client's sample count. As far as I have learned, federated learning was introduced to keep data on the client side to maintain privacy. Then how does the server know this information? I am confused about this concept.

Any clarification?

Thanks in advance.
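For what it's worth, in the FedAvg scheme the question refers to, the server does not receive the clients' samples; it only needs each client's sample count n_k (a single integer of metadata) to form the weighted average w = sum_k (n_k / n) w_k. A minimal sketch of that averaging step (flat weight vectors assumed for simplicity):

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model weights, weighted by sample counts.

    client_weights: list of flat weight vectors (one per client)
    client_sizes:   list of n_k, the number of local samples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n_k in zip(client_weights, client_sizes):
        for i in range(dim):
            avg[i] += (n_k / total) * w[i]
    return avg

# Two clients: one with 10 samples, one with 30 -- the larger client
# pulls the average toward its own weights.
print(fedavg([[1.0, 0.0], [0.0, 1.0]], [10, 30]))  # [0.25, 0.75]
```

Sharing n_k does leak a little information (dataset sizes), but not the samples themselves; whether that is acceptable depends on the threat model.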

How can we calculate the number of computations and parameters of a customized deep learning algorithm designed with MATLAB?

There is an idea to design a new algorithm for improving the results of software operations in the fields of communications, computers, biomedicine, machine learning, renewable energy, signal and image processing, and others.

So what are the most important ways to test the performance of smart optimization algorithms in general?

I have written a review paper. The work does not contain any experimental results, but it contains a very elaborate review of antenna design techniques - starting from the analytical models of the 1980s, through the simulation-based designs of the 2000s, and finally computer-aided algorithmic design, the recent trend. The paper has been rejected by several journals. The reviewers suggest that I should include some experimental results implementing some of the reviewed works and compare them. This suggestion is not feasible because I am already exceeding the page limit of most journals. Further, exact implementation of most of the reviewed techniques is not possible because the papers generally don't provide a complete picture of how the work was done. There are always some missing pieces in papers.

Can anyone suggest to me any journal or upcoming book where I can publish it?

If I have 2 nodes, each with (x, y) coordinates, can I calculate the distance between the nodes using an algorithm, for example Dijkstra's or A*?
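Worth noting: Dijkstra's algorithm and A* compute shortest paths along the edges of a graph; if only the straight-line distance between two coordinate pairs is needed, the Euclidean formula is sufficient and no search algorithm is involved. A sketch:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

print(euclidean((0, 0), (3, 4)))  # 5.0
```

Dijkstra or A* become relevant only when movement is restricted to a network (e.g. roads), where the answer is a path length rather than a straight line; A* commonly uses this Euclidean distance as its heuristic.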

Hello!

Namely, I will place a vehicle at node A and then use Dijkstra's algorithm to find the shortest route. Say A is going to B; I would like to make a timer that shows me how long it takes to go from A to B. How can I implement a timer in Java?
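One common pattern is to read a monotonic clock immediately before and after the shortest-path call (sketched here in Python; the Java analogue records `System.nanoTime()` around the call in the same way). The tiny graph is an illustrative assumption:

```python
import heapq
import time

def dijkstra(graph, source):
    """Textbook Dijkstra over an adjacency dict {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}

start = time.perf_counter()      # monotonic clock, suitable for timing
dist = dijkstra(graph, "A")
elapsed = time.perf_counter() - start

print(dist["B"])                 # 3.0 -- the A -> C -> B route beats the direct edge
print(f"took {elapsed:.6f} s")
```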

When creating & optimizing mathematical models with multivariate sensor data (i.e. 'X' matrices) to predict properties of interest (i.e. dependent variable or 'Y'), many strategies are recursively employed to reach "suitably relevant" model performance which include ::

>> preprocessing (e.g. scaling, derivatives...)

>> variable selection (e.g. penalties, optimization, distance metrics) with respect to RMSE or objective criteria

>> calibrant sampling (e.g. confidence intervals, clustering, latent space projection, optimization..)

Typically & contextually, for calibrant sampling, a **top-down** approach is utilized, i.e., from a set of 'N' calibrants, subsets of calibrants may be added or removed depending on the "requirement" or model performance. The assumption here is that a large number of datapoints or calibrants are available to choose from (collected *a priori*).

Philosophically & technically, how does the **bottom-up** pathfinding approach for calibrant sampling, or "searching for ideal calibrants" in a design space, manifest itself? This is particularly relevant in chemical & biological domains, where experimental sampling is constrained.

E.g., given a smaller set of calibrants, how does one robustly approach the addition of new calibrants *in silico* to the calibrant-space to make more "suitable" models? (Simulated datapoints can then be collected experimentally for addition to the calibrant-space post-modelling, for the next iteration of modelling.)

:: Flow example ::

**N calibrants** -> build & compare models -> model iteration 1 -> **addition of new calibrants (N+1)** -> build & compare models -> model iteration 2 -> and so on -> acceptable performance ~ acceptable experimental datapoints collectable -> acceptable model performance

We have an application which converts a netlist into a schematic drawing. It works fine, except the result is extremely ugly.

We would like to know if anyone can write a place-and-route algorithm that produces an aesthetically more pleasing picture, with a view to technical cooperation on the project.

(Our software is written in 'C')

Say, for argument's sake, I have two or more images with different degrees of blurring.

Is there an algorithm that can (faithfully) reconstruct the underlying image based on the blurred images?

Best regards and thanks

Some companies sell our private information (consumption habits, interests, health, etc.). There must be a way to protect ourselves against this. An algorithm could simulate data in order to confuse those companies, though this may have unwanted impacts.

I am working on a regression task where I am trying to predict future values of a stock/resource. At the moment, my model uses a large set of lags as input features, and I want to use feature selection (FS) to select only the important lags needed by the model. However, most FS algorithms seem to be based on classification models, so I am wondering whether I can create a 'proxy' classifier which uses the same input data as my regression model but whose outputs are discretized versions of the regression outputs (i.e. in the simplest case, 0 = increase in stock, 1 = decrease). Would the selected features from this proxy model serve as 'good' features for the regression model, or should I only use FS algorithms designed for regression? I would be most grateful for any suggestions, particularly if they are referenced from previous research papers on the topic.
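As a side note, discretizing the target is not the only option: simple filter-style selectors work directly on a continuous target. A toy sketch (synthetic data, assumed purely for illustration) ranking lag features by absolute Pearson correlation with the continuous target:

```python
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rank_features(features, target):
    """Rank feature columns by |Pearson correlation| with a continuous target."""
    scores = [(abs(pearson(col, target)), i) for i, col in enumerate(features)]
    return [i for _, i in sorted(scores, reverse=True)]

rng = random.Random(0)
informative = [rng.gauss(0, 1) for _ in range(200)]   # a "useful lag"
noise = [rng.gauss(0, 1) for _ in range(200)]         # an "irrelevant lag"
target = [2 * v + rng.gauss(0, 0.1) for v in informative]

print(rank_features([informative, noise], target))  # informative lag ranked first
```

Library selectors designed for regression (e.g. mutual-information-based filters) follow the same pattern without any proxy classifier, avoiding the information loss introduced by discretization.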

How can one evaluate a new approach to steganography (hiding a message in a cover image), other than with MSE and PSNR?

Dear friend,

These days, I'm trying to finish my Ph.D. in electrical engineering (control engineering). I'm specialized in extended Kalman filter applications, fault detection, neural networks, digital transportation, digital twins, and machine learning. It is necessary to say that my thesis is about **industry 4.0 in pipeline performance management (software)**. I'm enthusiastic about joining a team for my postdoc, and I am striving to study edge-of-science topics.

Would you please help me to join a team for my postdoc, to study abroad, or to find a way to join this kind of program?

What are the characteristics of small objects, and how can one design an algorithm based on those characteristics? To my knowledge, feature fusion and context learning are usually used to improve object detection. However, it is hard to explain why they improve the detection results. Are there some algorithms designed specifically for small object detection?

It's no longer a surprise to realize a wide gap between advances in academic research and practicality in industry. This discussion is about exploring this gap for a particular domain, which is **time-series forecasting**.

The topic has seen a great many research advances in recent years, since researchers have identified the promise offered by **Deep Learning** (DL) architectures for this domain. Thus, as evident in recent research gatherings, researchers are racing to perfect DL architectures for taking over time-series forecast problems. Nevertheless, the average industry practitioner remains reliant on traditional statistical methods, for understandable reasons. Probably the biggest reason of all is the ease of interpretation (i.e. interpretability) offered by traditional methods, but many other reasons are valid as well, such as ease of training, deployment, robustness, etc. The question is: if we were to reinvent a machine learning solution solely for industrial applicability, considering current and future industry needs, then what attributes should this solution possess? *Interpretability*, *manipulability*, *robustness*, *self-maintainability*, *inferability*, *online-updatability*, something else?

I am doing a study on temperature compensation for fruit dry matter prediction using NIRS spectra. As I don't know much about MATLAB and mostly perform my multivariate regression using Unscrambler software, I am looking for a simplified version of the external parameter orthogonalization algorithm.

What is the easiest software that can be used to design a medical algorithm flowchart?

Can you recommend any free software adaptable to be used with Macbook?

Hello,

I am interested in using Landsat 5-8 images to map snow and ice cover. I am trying to construct a time series showing how late into the year snow and ice cover lasts. I noticed that for Landsat ARD tiles obtained from USGS Earth Explorer there is a Pixel Quality Assessment band that accompanies surface reflectance products and that this PQA raster includes bit designations for pixels where snow or ice are present (bits 80 and 144 for Landsat 4/5). After reading more I have gathered that this PQA product is generated using the Fmask algorithm which was developed primarily for generating cloud masks. However, I decided to employ these products to see how they perform when generating fractional snow cover rasters.

I noticed that for some images very late into the year (May and June) the Fmask algorithm did classify many pixels as snow or ice, although after generating RGB composites and using the thermal infrared band to look at temperature, I determined that there was no snow or ice cover present in the image although it did look like some clouds were present. After reading more of the literature I found out that the Fmask algorithm has a tendency to sometimes classify cloud pixels as snow or ice, but I could not find an explanation as to why this happens. Is there a particular cloud type that the algorithm classifies as snow or ice, or is it unpredictable? Is there a better algorithm that is designed specifically for generating maps of fractional snow cover?

Thanks for your help,

Best,

Ryan Lennon

By changing the resistive load value used for stand-alone operation, can one get better realization of Id, Iq tracking by the designed controller?

The constraints include both linear and nonlinear constraints. The essential issue lies in how to deal with the nonlinear constraints.

It would be better if the algorithm could transform these nonlinear constraints into equivalent linear ones.

Hello

I am trying to design a nature-inspired (heuristic) algorithm for robotic path planning. But I wonder if there are references about designing heuristic algorithms, and about the general steps or general form for designing these sorts of algorithms.

Thanks a lot in advance,

We are facing a true tsunami of “novel” metaheuristics. Are all of them useful? I refer you to Sorensen, Intl. Trans. in Op. Res. (2013) 1-16, DOI: 10.1111/itor.12001.

We have some research works related to algorithm design and analysis. Most computer science journals focus on current trends such as machine learning, AI, robotics, blockchain technology, etc. Please suggest some journals that publish articles related to core algorithmic research.

I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X, one that is ideally also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in your answer, that would be great.
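Exactly deciding separability of two finite sets reduces to a linear-programming feasibility problem (find w, b with w·x + b >= 1 on X and <= -1 on the complement). If an LP solver is unavailable, the classic perceptron is an easy-to-implement alternative: it provably converges whenever the sets are separable, so a generous iteration cap turns it into a practical (though not worst-case-bounded) test. A sketch, assuming n is small enough to enumerate the complement:

```python
from itertools import product

def perceptron_separates(positives, negatives, epochs=2000):
    """Return (w, b) separating the two sets, or None if no separator was found.

    Convergence is guaranteed for separable data; the epoch cap is a
    practical cutoff, so None means "probably not separable".
    """
    n = len(positives[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        updated = False
        for x, label in [(p, 1) for p in positives] + [(q, -1) for q in negatives]:
            if label * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label
                updated = True
        if not updated:          # full pass with no mistakes: separator found
            return w, b
    return None

cube = list(product([0, 1], repeat=2))
X = [p for p in cube if p[0] == 1]            # separable: split on the first bit
print(perceptron_separates(X, [p for p in cube if p not in X]))

xor = [(0, 0), (1, 1)]                        # XOR-style split: not separable
print(perceptron_separates(xor, [p for p in cube if p not in xor]))  # None
```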

According to classical (Shannon) information theory, H(X) >= H(f(X)) for the entropy H of a random variable X under a deterministic function f. Is it possible that an observer who doesn't know the function (one whose output is statistically random) can take the output of the function and consider it random (entropy)? Additionally, if one were to use entropy as a measure between two states, what would that 'measure' be between the statistically random output and the original pool of randomness?

Is there a polynomial (reasonably efficient) reduction which makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?

Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties which are easy to prove for the binary case, and a reduction as asked above could help generalize those properties to an arbitrary alphabet.
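For reference, the alphabet-agnostic DP mentioned above can be sketched as:

```python
def lcs_length(a, b):
    """Standard O(len(a) * len(b)) LCS DP; works for any alphabet, binary or not."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCAB")
```

Any would-be reduction to bit-strings would need to preserve the value computed by this recurrence, up to an easily invertible transformation.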

Is there a standard formula for the evaluation of fitness in the Cuckoo Search algorithm? Or can any formula be used to evaluate fitness?

Hope you can help me.

Hello!

Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that in order to serve the human mind better, algorithms might need to work more irrationally.

My name is Michael and I'm an interaction designer from Switzerland. I am currently working on my Bachelor's thesis, which deals with serendipity and algorithms: how can algorithms work less rationally and help us come across more serendipitous encounters?

As an experiment, I created a small website, which searches for Wikipedia entries that are associated with a certain term. The results are only slightly related and should offer serendipitous encounters.

Feel free to try it and comment your thoughts on it! I'm happy for any feedback.

Thank you

Michael

Hi, I have a little previous experience with genetic algorithms.

Currently I am trying to use a GA for scheduling: I have some events and rooms, and rooms must be scheduled for these events; each event has different time requirements, and there are some constraints on the availability of rooms.

But I want to know whether there are any alternatives to GA, since GA is a somewhat random and slow process. Are there any other techniques which can replace GA?

Thanks in advance.

Dear scientists,

Hi. I am working on some **dynamic network flow** problems with **flow-dependent transit times** in **system-optimal** flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of **realistic benchmark problems**. Could you please guide me to such benchmark problems?

Thank you very much in advance.

I am developing my algorithm with the Weka tools and need to compare it with basic collaborative filtering. Is there a Nearest Neighbor (also known as Collaborative Filtering) implementation in Weka? Thanks.

genetic algorithm

design optimization

wind turbine

I would like to know if there is a tool that can be used to verify the effectiveness of algorithms designed for solving games.

Any recommendations would be great.

What are the current topics of research interest in the field of Graph Theory?

There is a need to automate several industrial tasks which may require a number of humans and robots to perform it. Some can be done only using robots. Say there is a task X. My output looks like: Task X can be done if around 4 robots are assigned to it or 1 human and 1 robot are assigned to it. My input will describe the task based on which an algorithm will compute the desired output.

So basically, could you share some research work where the resource requirements for industrial tasks are modeled mathematically or even empirically? Or could you point to existing algorithms, in industrial engineering or elsewhere, where researchers have tackled the problem of identifying how many resources need to be allocated to a task to finish it successfully?

The brute-force algorithm takes O(n^2) time; is there a faster exact algorithm?

Can you direct me to new research on this subject, or on the approximate farthest point (AFP) problem?
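If the points are planar, an exact speed-up is known: the farthest pair lies on the convex hull, and rotating calipers over the hull brings the total to O(n log n). For reference, the O(n^2) brute-force baseline:

```python
from itertools import combinations

def farthest_pair(points):
    """O(n^2) exact farthest pair (diameter); compares squared distances to avoid sqrt."""
    best, best_d2 = None, -1.0
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
        if d2 > best_d2:
            best_d2, best = d2, ((x1, y1), (x2, y2))
    return best, best_d2 ** 0.5

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
pair, d = farthest_pair(pts)
print(pair, d)  # a unit-square diagonal, distance sqrt(2) ~ 1.414
```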

Dear experts,

Hi. I appreciate any information (ideas, models, algorithms, references, etc.) you can provide to handle the following special problem or the more general problem mentioned in the title.

Consider a directed network G including a source s, a terminal t, and two paths (from s to t) with a common link e^c. Each link has a capacity c_e and a transit time t_e. This transit time depends on the amount of flow f_e (inflow, load, or outflow) traversing e; that is, t_e = g_e(f_e), where the function g_e determines the relation between t_e and f_e. Moreover, g_e is a positive, non-decreasing function. Hence, the greater the amount of flow on a link, the longer the transit time for this flow (and thus the lower the speed of the flow). Notice that, since links may have different capacities, they may have dissimilar functions g_e.

The question is that:

How could we send D units of flow from s to t through these paths in the quickest time?

Notice: A few works [D. K. Merchant, et al.; M. Carey; J. E. Aronson; H. S. Mahmassani, et al.; W. B. Powell, et al.; B. Ran, et al.; E. Köhler, et al.] are relevant to dynamic networks with flow-dependent transit times. Among them, the works by E. Köhler, et al. are the most appealing (at least for me), as they introduce models and algorithms based on network flow theory. Although they have presented good models and algorithms ((2+epsilon)-approximation algorithms) for the associated problems, I am looking for better results.

I have read a paper titled "An enhanced honey bee mating optimization algorithm for the design of side sway steel frames". In this paper, an algorithm named "enhanced honey bee mating optimization" is presented. In this algorithm, a mutation operator is used to generate broods, but the mutation is done with two parents (queen and brood). Is mutation done with two parents? An image of this algorithm has been uploaded here: https://pasteboard.co/HojP5TR.jpg

Thanks

Given a tree or a graph, are there automatic techniques or models that can assign weights to the nodes, other than neural networks?

When the time deviation between the annotation of the expert and what is labeled by a peak detection algorithm exceeds the tolerance limits (see the picture), should we consider it as a False Positive or as a False Negative?

I am searching for an implementation of an algorithm that constructs three edge-independent trees from a 3-edge-connected graph. Any response will be appreciated. Thanks in advance.

I am working towards improving the CBRP algorithm. I am planning to test my algorithm using a simulator, but first I need to simulate CBRP.

So I need help simulating CBRP using ns2 or ns3.

Map matching related paper

Map matching algorithm

Hi everybody,

I conduct research in computer science, and my research interests include:

**Distributed Systems**

**Optimal Energy Management & Strategy**

**Electricity Market & Smart Grid**

**Algorithm Design for Integrated Multi-Energy Systems**

I'm pursuing a Ph.D. career concerning my research interests and skills. Would it be alright if you could recommend academic job vacancies to me?

p.s. you can find attached my CV.

Thank you in advance.

Best,

Let's say I have a linear hydrocarbon molecule, like decane. For my simulation I need to fill a 3D volume with decane molecules, oriented randomly. Of course, they should not intersect.

Could someone point me to the algorithm or some way to solve this?
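A common, simple baseline is random sequential insertion: propose a random placement and accept it only if it does not overlap anything already placed. Sketched below with each molecule abstracted as a single exclusion sphere (a strong simplification of a decane chain; the box size, radius, and counts are illustrative assumptions):

```python
import random

def random_packing(n, box, min_dist, max_tries=100000, seed=0):
    """Place up to n points in a cubic box so no two are closer than min_dist.

    Random sequential insertion: propose uniformly, reject on overlap.
    """
    rng = random.Random(seed)
    placed = []
    tries = 0
    while len(placed) < n and tries < max_tries:
        tries += 1
        p = (rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= min_dist ** 2
               for q in placed):
            placed.append(p)
    return placed

centers = random_packing(20, box=10.0, min_dist=1.5)
print(len(centers))  # 20 centers, all pairwise >= 1.5 apart
```

Tools such as PACKMOL implement this idea with full molecular geometry and random orientations; at high packing densities rejection sampling stalls, and one typically switches to relaxation or MD equilibration.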

Given a set of *m* (>0) trucks and a set of *k* (>=0) parcels. Each parcel has a fixed payment for the trucks (it may be the same for all, or differ between them). The problem is to pick up the maximum number of parcels such that the profit of each truck is maximized. There may be 0 to *k* parcels in the service region of a particular truck. Likewise, a parcel can be located in the service region of 0 to *m* trucks. There are certain constraints, as follows:

1. Each truck can pick up exactly one parcel.

2. A parcel can be loaded onto a truck if and only if it is located within the service region of that truck.

The possible cases are as follows:

Case 1. *m* > *k*

Case 2. *m* = *k*

Case 3. *m* < *k*

As far as I know, to prove a given problem H NP-hard, we need to give a polynomial-time reduction from an NP-hard problem L to H. Therefore, I am in search of a similar NP-hard problem.

Kindly suggest some NP-hard problem which is similar to the stated problem. Thank you in advance.
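One observation, offered tentatively: if the objective is read as maximizing the total collected payment, the problem looks like maximum-weight bipartite matching (trucks vs. parcels, with an edge only where constraint 2 holds), which is solvable in polynomial time, e.g. by the Hungarian algorithm, so NP-hardness would seem to require some additional coupling in the objective. A brute-force sketch on a tiny hypothetical instance, treating constraint 1 as "at most one parcel per truck":

```python
def best_assignment(payment, reachable):
    """Exhaustive search: each truck takes at most one reachable parcel,
    each parcel goes to at most one truck; maximize total payment.

    payment:   payment[p] is parcel p's payment
    reachable: reachable[t] is the set of parcel indices in truck t's region
    """
    m = len(reachable)

    def solve(t, used):
        if t == m:
            return 0, []
        # option 1: truck t stays idle
        best_val, sub = solve(t + 1, used)
        best_plan = [None] + sub
        # option 2: truck t takes some reachable, still-unassigned parcel
        for p in reachable[t]:
            if p not in used:
                val, plan = solve(t + 1, used | {p})
                if payment[p] + val > best_val:
                    best_val, best_plan = payment[p] + val, [p] + plan
        return best_val, best_plan

    return solve(0, frozenset())

# 2 trucks, 3 parcels: truck 0 reaches parcels {0, 1}, truck 1 reaches {1, 2}.
val, plan = best_assignment([10, 20, 30], [{0, 1}, {1, 2}])
print(val, plan)  # 50 [1, 2] -- truck 0 takes parcel 1, truck 1 takes parcel 2
```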

Suppose we want to find a collision for the hash function H. Suppose for every x and t, there exists a polynomial-time algorithm that can find y such that H(x) and H(y) differ in t bits. Obviously, if the algorithm can solve the problem for t = 0, then the hash function H is not safe. Is this algorithm a threat to the hash function H in the case that it can only solve the problem for t > 0?

Let c = E(m,k) be the bit representation of the encrypted value of the message m under the key k. Suppose, for each t, there exists an x satisfying the following relation:

W(E(m,x) XOR E(m,k)) = t.

In this relation, W is the weight function that counts the number of 1s. Suppose there is an algorithm that can find such an x for each t > 1. For what values of t can this algorithm be considered a threat to the block cipher E?

I have seen random distributions being produced using pre-programmed algorithms, so my question is: how can it be random if it is pre-decided? Random is anything that happens naturally and cannot be predicted; winning a lottery can be quite a random pick. But if such draws can be coded, can't we use that to win a lottery? Just a thought!
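The usual resolution is that programmed generators are pseudorandom: a deterministic function of a seed, fully predictable once the seed is known. Practical unpredictability comes from drawing the seed (or the numbers themselves) from an operating-system entropy source. A sketch contrasting the two in Python:

```python
import random
import secrets

# Same seed -> identical "random" sequence: deterministic, hence reproducible.
a = random.Random(42)
b = random.Random(42)
seq1 = [a.randint(1, 49) for _ in range(6)]
seq2 = [b.randint(1, 49) for _ in range(6)]
print(seq1 == seq2)  # True

# For lottery-grade draws one uses an OS entropy source instead, e.g.
# Python's secrets module, whose output is not reproducible from code alone.
print(secrets.randbelow(49) + 1)  # unpredictable draw in 1..49
```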

*The application of the bee algorithm in engineering:*

*Neural Network Learning for Patterns*

*Scheduling work for manufacturing machines*

*Information categorization*

*Optimized design of mechanical components*

*Multiple optimization*

*Fuzzy logic controllers for athlete robots*


I need to find a methodology for marking essay-type answers against a given marking scheme.

Hi, does anyone know if there exists an algorithm capable of changing a written line while maintaining, say, the number of syllables, accents, etc., but changing the words into non-words, thus removing the meaning of the line?

Thank you

The structure of this problem is similar (not equal) to that of other problems that admit simple solutions. Maybe the colleagues of this community could help me identify a solution to this problem.

Problem Statement:

A television manufacturer has decided to produce and sell two different types of TV sets, small and big. They assure that the small will give a profit of $300 per unit and the big a profit of $500 per unit. They have one production plant with four departments: molding, soldering, assembly and inspection. Each TV set is processed in sequence through these four departments. Each department has a limited capacity given by a maximum number of working hours per year. We assume that they can sell all the TV sets they are able to produce and the market is not a restriction.

Objective function:

• Max Z = 300x_1 + 500x_2 ………. (1)

Subject to the following constraints:

• x_1 + 5x_2 <= 4000 ……………… (2) [Molding capacity]

• x_1 + x_2 <= 1200 ……………….. (3) [Soldering capacity]

• 2x_1 + x_2 <= 2000 ……………… (4) [Assembly capacity]

• 2x_1 + 5x_2 <= 5000 …………….. (5) [Inspection capacity]

• x_1, x_2 >= 0 …………………….. (6) [Non-negativity constraints]

The dual is given by:

Min Z = 4000u_1 + 1200u_2 + 2000u_3 + 5000u_4

s.t. u_1 + u_2 + 2u_3 + 2u_4 >= 300 [Small TV sets]

5u_1 + u_2 + u_3 + 5u_4 >= 500 [Big TV sets]

u_1, u_2, u_3, u_4 >= 0

My question is: What is the physical interpretation of the dual variables u? Is it **cost/hr** or **profit/hr** for a specific operation?

If I consider cost, then the objective function is OK, but for the constraints, cost cannot be greater than profit. Again, if I consider profit, then the constraints are OK, but the objective-function profit cannot be minimized. I have gone through many books, but I am still confused. If anybody kindly helps me, I shall be highly obliged.
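A numeric check may help here (a pure-Python sketch that enumerates the feasible region's vertices rather than using a general LP solver). It reproduces the optimum Z = 500,000 at (x_1, x_2) = (500, 700), where molding and soldering are the binding constraints, and recovers u = (50, 250, 0, 0). The standard reading, which this supports, is that u_i is a shadow price: the marginal profit per additional hour of capacity i (a profit rate in $/hr, not a cost the firm pays). The dual constraints then say the capacity hours consumed by one TV set must be valued at no less than that set's unit profit, which resolves the cost-vs-profit tension.

```python
from itertools import combinations

# Primal: max 300*x1 + 500*x2 subject to the four capacity constraints.
A = [(1, 5), (1, 1), (2, 1), (2, 5)]   # constraint coefficients
b = [4000, 1200, 2000, 5000]           # capacity hours
c = (300, 500)                         # unit profits

def feasible(x):
    return all(v >= -1e-9 for v in x) and all(
        ai[0] * x[0] + ai[1] * x[1] <= bi + 1e-9 for ai, bi in zip(A, b))

# Enumerate candidate vertices: pairwise intersections of constraint/axis lines.
lines = [(ai[0], ai[1], bi) for ai, bi in zip(A, b)] + [(1, 0, 0), (0, 1, 0)]
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) > 1e-12:
        x = ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
        if feasible(x):
            vertices.append(x)

best = max(vertices, key=lambda x: c[0] * x[0] + c[1] * x[1])
z = c[0] * best[0] + c[1] * best[1]
print(best, z)  # (500.0, 700.0), Z = 500000: molding & soldering bind

# Duals via complementary slackness on the two binding constraints:
# u1 + u2 = 300 and 5*u1 + u2 = 500  ->  u1 = 50, u2 = 250, u3 = u4 = 0.
u1 = (500 - 300) / 4
u2 = 300 - u1
print(u1, u2)  # 50.0 250.0; dual objective 4000*50 + 1200*250 = 500000 = Z
```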

I want to do soft-binning for a distance-based histogram. For example a point X lies somewhere between the center points of 4 bins, A,B,C,D. These four points A,B,C,D can be treated as the vertices of a square.

I would like to assign each of these vertex points a certain weight (between 0 and 1) based on how close the point X is to that vertex, with two conditions:

* Sum of vertex weights should be 1.

* If X lies on the line between two vertices, only those two vertices should get the share in weightage.

Currently I calculate the weight as:

W_A = (1 - distance of A from X / sum of distances of all vertices from X) / 3
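One scheme that meets both conditions automatically is bilinear interpolation weighting. Put the square in local coordinates A = (0,0), B = (1,0), C = (0,1), D = (1,1) with X = (s, t) inside; then w_A = (1-s)(1-t), w_B = s(1-t), w_C = (1-s)t, w_D = st. The weights sum to 1 for any X, and whenever X lies on an edge (say t = 0, the A-B edge) the two far vertices get exactly zero weight. A sketch (the mapping to local coordinates is assumed; for axis-aligned histogram bins it is just X's fractional position between bin centers):

```python
def bilinear_weights(s, t):
    """Soft-binning weights for X=(s,t) in the unit square A(0,0) B(1,0) C(0,1) D(1,1)."""
    return {"A": (1 - s) * (1 - t), "B": s * (1 - t),
            "C": (1 - s) * t,       "D": s * t}

w = bilinear_weights(0.25, 0.5)
print(w)                 # closer to the A/C side, so A and C weigh more
print(sum(w.values()))   # 1.0 for any (s, t)

edge = bilinear_weights(0.3, 0.0)   # X on the A-B edge
print(edge["C"], edge["D"])         # 0.0 0.0 -- only A and B share the weight
```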