Science topic
Heuristics - Science topic
Heuristic refers to experience-based techniques for problem solving, learning, and discovery. Where an exhaustive search is impractical, heuristic methods are used to speed up the process of finding a satisfactory solution.
Questions related to Heuristics
In many machine learning projects, especially with messy real-world datasets, shortfalls in a model's performance are usually traced back either to the model's architecture or to the quality and structure of the input data. It is critical to note, though, that in trying to assess the two factors, the sources of error may not be easy to delineate. Improvement can come from architectural changes, hyperparameter tuning, and optimization heuristics just as much as from better data preprocessing, relabeling, or reconsidering the features used for representation.
How do you go about this decision? At what point do you stop refining the model and pivot to concentrating on the dataset? Are there empirically defined learning curves, analytical tools, or other indicators that tell you whether you have reached a "data ceiling" rather than a "model ceiling"? I'd like to hear about frameworks, intuitions, or concrete examples across the domains of vision, language, and sensor data that you have found helpful.
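One practical indicator is the classic learning-curve diagnostic: train on increasing fractions of the data and watch how training and validation scores evolve. If the validation curve is still rising at the right edge, more or better data may help; if the two curves have converged below the target score, the model is likely the bottleneck. A minimal sketch with scikit-learn, using a synthetic dataset and model purely as placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a real dataset; swap in your own X, y.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=3)

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n={n:5d}  train={tr:.3f}  val={va:.3f}")

# Heuristic reading of the curve:
# - val still rising at the right edge  -> more/better data may help
# - train ~= val, both below target     -> model capacity is the bottleneck
# - large train-val gap                 -> variance; more data or regularization
```

This is only a sketch of the diagnostic, not a decision rule; in practice people run it per data slice (e.g. per class or sensor) to find where the data ceiling actually bites.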
Dear Bartłomiej Kizielewicz · Jarosław Wątróbski · Wojciech Sałabun
I read your paper
Multi-criteria decision support system for the evaluation of UAV intelligent agricultural sensors
My comments:
1- Abstract: You say “The results confirm the framework’s effectiveness demonstrating its robustness and stability in decision-making. Sensitivity analysis and comparative studies further highlight its reliability, particularly in addressing rank reversal issues commonly found in existing MCDA methods such as TOPSIS and AHP.”
I am afraid that you are mistaken, for SA is not able to prove the effectiveness of a framework or to highlight a method or procedure. It is designed to find out whether the solution found, that is, the selected best alternative, is strong and stable.
SA is not related to RR which, by the way, as per my research, may happen in all MCDM methods, because it does not depend on the method but on the different topologies that are generated when spatial dimensions or alternatives are added or deleted.
2- Very good and precise information in the abstract, clarifying a subject that is unknown to many of us, or at least to me.
3- On page 7 you mention MEREC for criteria evaluation. It is indeed independent of subjectivity, but in my opinion it is a biased method, because in each run it is solving a different problem, since it progressively eliminates a criterion; that is, if there are, say, 9 criteria, in each run it considers only 8, each on a different matrix.
In the next step you compare rankings using different MCDM methods, and what is the gain in doing that?
None, for me, since a high correlation between the rankings of two methods only denotes that both move in the same direction; it does not mean that they are close to reality.
Selecting weights to evaluate alternatives is incorrect, because what really has the capacity to evaluate alternatives is the discrimination of values within a criterion, not the values between criteria. In other words, what is relevant is the content, not the envelope.
4- Page 11: "The criterion weights determine their relevance, which is crucial in evaluating alternatives".
This assertion is not supported by any mathematical theorem, axiom, or common sense; it is simply intuitive.
My justification of my assertion is as follows:
In MCDM the DM is working with linear equations, represented by straight lines in a plane, that in different manners, according to the method, define a space of solutions, where one of them is preferred, as in TOPSIS, where the best solution is the one closest to the ideal point.
When the DM multiplies the original values in the initial matrix by a weight, these values increase or decrease proportionally; that is, there is no relative change within each criterion, since the line displaces parallel to itself. However, what changes is the position in the plane of a criterion relative to the others, due to the various weights, each line displacing differently and varying the original distances between them. This may produce a topological change in the common space of solutions, and now the alternative closest to the ideal point in TOPSIS may have changed, which produces a different ranking. It can be seen that weights only modify the original distances between criteria. This is geometry, not evaluation. You may or may not agree with my explanation, but it is rational and mathematical, not intuitive.
Regarding the formation of solution spaces, I am reproducing what AI says about it:
“In Multi-Criteria Decision-Making (MCDM) methods, solution spaces are defined based on the criteria and preferences involved in the decision-making process. Here are some common approaches:
1. Weighted Sum Model (WSM): The solution space is defined by assigning weights to each criterion and calculating the weighted sum for each alternative. The alternative with the highest score is chosen.
2. Analytic Hierarchy Process (AHP): The solution space is structured hierarchically, where criteria and sub-criteria are compared pairwise to determine their relative importance.
NM- That is, the alternative with the highest score is chosen
3. TOPSIS (Technique for Order Preference by Similarity to Ideal Solution): The solution space is defined by identifying the ideal and anti-ideal solutions. Alternatives are ranked based on their distance from these solutions.
4. PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation): The solution space is defined by preference functions that compare alternatives based on criteria.
NM- The best alternative is selected according to compliance with the preference functions
5. CoCoSo (Combined Compromise Solution): This method uses a compromise solution space where alternatives are ranked based on aggregated measures”
NM – That is, it uses an aggregating multiplication rule
NM – Linear Programming (LP)
The best alternative is the one that best satisfies all criteria, and it is defined by the vertex of the polygon that is tangent to the main objective line
As you can see, all methods generate solution spaces, and from there the best solution is extracted by different means, not always following linearity, except in LP, which is adapted through different procedures. This could explain why the same problem, solved by different methods, gives different rankings.
5- Page 11 “For the STD, Entropy, and CRITIC approaches, one can see a large difference between the most significant and less significant weights. In contrast, for the MEREC”
This is obvious, because STD, Entropy, and CRITIC refer to the solution of the same problem, while MEREC solves different problems by eliminating criteria; consequently, there is a different matrix each time.
Table 3 on page 12 shows that Entropy and CRITIC, both scientific methods, display high similarity, because both work with discrimination. In my opinion, CRITIC is more complete, because it determines the STD, or discrimination, and, in addition, the correlation between criteria; that is, it takes into consideration redundant information in two criteria.
6- “The similarity coefficient of the WS rankings was used to test the similarity of the obtained rankings”
And what is this information good for?
7- Page 17 “The phenomenon of rank reversal occurs when the relative ordering of alternatives changes unexpectedly after the addition or removal of an alternative, raising concerns about the stability and reliability of the decision-making process”
You are right about unexpected changes, and this is due to the random nature of RR; that is, it may or may not occur, and it is unpredictable, because it depends on the characteristics of the vector added and its intersections with the vectors of the precedent matrix.
“To address this limitation, we propose a novel approach-COCOCOMET-which is resistant to rank reversal.”
I doubt this. You can perhaps test what you say by successively increasing the number of alternatives, and you will probably find that, say, 2, 3, and 4 alternatives indeed give an invariant ranking, but that it may change when you add alternative 5. Why? Because each time you add an alternative to a matrix, you increase the amount of information; the information of the old alternatives is already included in the result once the new alternative is added.
As an example, think of a square: you get information about two dimensions (2D); add a new dimension, and you will get a cube, or 3D, which already contains the information that the square gave you. RR is produced by the different topologies that appear with each new addition, and consequently no MCDM method can escape from it. I did this exercise expanding from 2D to 10D, and it happens as described.
8- Page 17. “Generating a ranking of alternatives”
Since when do criteria define alternatives? How can you select criteria if you do not know which alternatives they are going to evaluate? Of course, this approach is equivalent to 'putting the cart before the horse'.
9- Page 20 “Future research should focus on exploring optimization strategies or heuristic approaches to ensure the framework remains efficient for large-scale decision-making problems”
This is interesting. How do you evaluate efficiency? For me, efficiency can be computed by determining to what percentage a criterion is achieved. Remember that a criterion is an objective; therefore, you need to establish a target, a goal to achieve. This can be easily and mathematically done using Linear Programming, which works with targets, but I do not imagine how you can do that with the more than 200 existing MCDM methods. None of these methods, except PROMETHEE, LP, and SIMUS, consider resources. Remember that criteria forcefully rely on resources, such as money, manpower, water, contamination allowances, etc., and that the purpose of MCDM is to select an alternative, subject to a set of criteria, that optimizes the use of resources. Consequently, MCDM can also be defined as selecting the alternatives that make the best use of available resources.
10- Page 20 “The findings indicate that the proposed framework effectively identifies optimal UAV sensors, providing a structured and adaptable approach for agricultural applications”
How do you know that they are optimal? Let me remind you that in MCDM optimality is a myth, since you cannot ask at the same time for the maximum benefit and the minimum cost. You must be looking for a compromise solution, a balance.
11- Page 21 “Furthermore, the study highlights the importance of incorporating multiple evaluation techniques to achieve more reliable and consistent results. Sensitivity analysis and comparative evaluations demonstrate that the proposed model maintains its effectiveness across various weighting scenarios, reinforcing its practical applicability”
You can say that if you wish, but on what grounds? Who says that multiple evaluation techniques achieve more reliable results? Where is the demonstration of this? In my opinion, these are only assumptions without any mathematical support.
Again, what is effectiveness? You never defined it.
These are my comments. I hope they can be useful.
Nolberto Munier
Heuristically, I suppose people consider films more successful the greater the return on investment (ROI). Stimuli: "Successful films are often based on classics due to relatability and interest" (Ohnemus 2023). "My film career was and is perhaps successful despite my lack of technical skills. Arts have both applications and fundamentals" (Ohnemus 2023). Source:
Preprint Education for an Automated World
Life is predictable enough to form survival heuristics, yet too unpredictable for specific absolutes (general ones exist). Source (my most recent and most cumulative work):
Hi!
I'm working on a phylogenetic inference (molecular) with 205 taxa and 5350 characters (7 different genes).
I have already built a phylogeny from a supermatrix. There were some polytomies. The problem is that some taxa lack sequences. Thus, I'd like to build a supertree to compare and see whether the polytomies appear again or not.
To this end, I inferred trees for each gene in ML with IQ-TREE 2. Then I used Clann to build an MRP (Matrix Representation with Parsimony) matrix from the 7 source trees. Next, I used PAUP* to start a heuristic search (in parsimony) with these command lines in my nexus file (as Clann suggested):
begin paup;
set increase=auto notifybeep=no errorbeep=no;
hs nreps=10 swap=tbr addseq=random;
showtrees;
savetrees FILE=MRP.tree Format=nexus treeWts=yes Append=no replace=yes;
quit;
end;
However, the search has been running for hours (since 8:00 pm yesterday) and it doesn't stop. More than 10 billion rearrangements have been tried and 1,721,900 trees are already saved, whereas it's only the first replicate. The analysis says that the best tree is tree no. 3088, but the heuristic search continues.
Given the number of taxa and characters, is it normal that it takes so much time?
Is there an error in my command lines?
It is the first time I try to build a supertree.
Can you help me?
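Part of the answer is simply the size of the search space: the number of distinct unrooted binary trees for n taxa is (2n-5)!! (a double factorial), which is astronomically large for 205 taxa, so TBR swapping can legitimately churn for a very long time. Also, saving 1.7 million equally parsimonious trees is itself a major time sink; PAUP*'s hs command has options such as nchuck/chuckscore to cap the trees retained per replicate (check the PAUP* command reference for exact syntax). A quick sketch of the tree-count formula:

```python
def num_unrooted_trees(n):
    """Number of distinct unrooted binary trees for n taxa: (2n-5)!!"""
    count = 1
    for k in range(3, 2 * n - 4, 2):  # product 3 * 5 * ... * (2n-5)
        count *= k
    return count

print(num_unrooted_trees(4))               # 3
print(num_unrooted_trees(10))              # 2027025
print(len(str(num_unrooted_trees(205))))   # number of digits for 205 taxa
```

With hundreds of digits in the count for 205 taxa, a heuristic search can only ever sample this space, which is why bounding the number of saved trees per replicate is common practice.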
Compared to policy-based deep RL algorithms, does this category of algorithms have a lower exploration efficiency?
I found this information in the following article:
"Optimal energy management strategies for energy internet via deep reinforcement learning approach", Hua et al., 2019, but it didn't cite a source. Is it common knowledge in this field?
Thank you !
The unwritten rule is "don't look suspicious. If people do look suspicious, then they either get destroyed or subvert enough to survive."
Sources:
Could you tell me the difference between heuristic and metaheuristic algorithms?
Some references say that the first term refers to classical optimization algorithms and the second to modern optimization algorithms.
So I want to know whether there is a wide difference between them.
How useful is the heuristic that if both sides of a debate are unfalsifiable then they may be a false dichotomy? My answer: this heuristic is very useful, because it is probably the case for practical reasons. Examples include, but may not be limited to: (evolutionism or creationism), (free will or determinism), (rationalism or empiricism).
How to tune low pass filter parameters through heuristic optimization techniques?
Hi,
What parameters should one test for when it comes to sound? Are there any heuristics available?
Thanks
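As a concrete starting point: treat the filter parameter (for instance, the smoothing coefficient of a one-pole low-pass) as a decision variable, and let any heuristic search minimize reconstruction error against a reference signal. A toy sketch using plain random search; the signals and the objective here are illustrative assumptions, not a recommendation for real audio work:

```python
import math
import random

def lowpass(x, alpha):
    """One-pole IIR low-pass: y[n] = alpha*x[n] + (1-alpha)*y[n-1]."""
    y, prev = [], 0.0
    for v in x:
        prev = alpha * v + (1 - alpha) * prev
        y.append(prev)
    return y

# Reference: clean slow sine; input: the same sine plus high-frequency noise.
n = 500
clean = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
noisy = [c + 0.3 * math.sin(2 * math.pi * 80 * t / n) for t, c in enumerate(clean)]

def mse(alpha):
    y = lowpass(noisy, alpha)
    return sum((a - b) ** 2 for a, b in zip(y, clean)) / n

# Heuristic tuning: pure random search over alpha in (0, 1).
random.seed(0)
best_alpha, best_err = 1.0, mse(1.0)  # alpha = 1 means no filtering at all
for _ in range(200):
    a = random.uniform(0.01, 1.0)
    e = mse(a)
    if e < best_err:
        best_alpha, best_err = a, e

print(best_alpha, best_err)  # the tuned alpha should beat no filtering
```

The same objective/decision-variable framing carries over directly to PSO, simulated annealing, or any other metaheuristic; only the search loop changes.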
I am sharing with you a list of articles that I normally use when supervising my master (and bachelor but mainly master) students. I hope you will find them useful, and I welcome your feedback (i) in order to improve them, (ii) to have new ideas and (iii) anything else that you would like to share.
I tried to keep these articles fairly general, but my perspective comes from supervising students in Computer Science/Engineering, Data Science, Information Management, and Software Engineering. Therefore, apologies if they sound alien to your discipline; if that is the case, please let me know. Over the years they have grown in quantity, and I have categorized them into 2 groups:
- How to do a better thesis: articles that clarify various aspects of the development of a (master) thesis. Proposal development, ideas, related work, methodology, writing etc...
- How to become a better programmer: articles that help a person familiar with scripting (basic Python, for example) understand the basics of object-oriented programming and how professional tools (like SDKs) work. Again, the specific focus is improving the quality of the code that a master student needs to write. If you are following a hard-core programming master, these articles will probably be fairly simple.
What follows are the ones from the "better thesis" section:
Attitude mindset and lifestyle
- Take a moment to reflect on the right approach for the challenge ahead. https://francescolelli.info/thesis/the-right-attitude-for-your-thesis-preparing-yourself-for-the-challenge/
- Mens sana in corpore sano: take care of yourself; in particular, do not neglect your physical health. https://francescolelli.info/thesis/simple-rules-for-taking-care-of-yourself-during-before-and-after-your-thesis/
How to do a good thesis: before you start.
- Start by considering these tips for improving the quality of your proposal. They will help you understand how to think scientifically, even if you do not have to write a research proposal. https://francescolelli.info/thesis/how-to-write-a-thesis-proposal-or-a-research-proposal-a-few-tips/
- Check whether you are aware of all the players around your thesis and what their interests are. https://francescolelli.info/thesis/the-players-around-your-thesis-who-is-going-to-help-you/
- Understand how adopting good scientific practices can improve your grade. https://francescolelli.info/thesis/adopting-good-scientific-practices-increases-your-visibility-and-the-grade-of-your-thesis/
How to do a good thesis: the openings moves.
- Ask yourself what you want to do when you "grow up". This article will help you understand how to get the most from your thesis for your future goals. https://francescolelli.info/thesis/take-the-most-from-your-thesis/
- (Optional) Get a grasp of what kind of mentor I am. It will help you understand what I write in these posts and/or whether we are compatible, in case you are considering pursuing your thesis with me. https://francescolelli.info/thesis/mentor-for-your-thesis/
- Set up the proper communication tools with your supervisor, so that you will have better quality time with him/her. https://francescolelli.info/thesis/setting-up-the-proper-communication-tools-with-your-thesis-supervisor/
A Note on Writing:
- Writing, as a scientific endeavor, has its rules and best practices. https://francescolelli.info/tutorial/on-scientific-writing-classic-postmodern-and-self-conscious-style/
- <work in progress>
How to do a good thesis: literature research and related work
- Look at these heuristics for judging whether a paper is worth reading or you should move on to the next one. https://francescolelli.info/thesis/6-heuristics-for-assessing-the-quality-of-a-publication/
- Understand how to select good venues (conferences or journals) where you can search for good publications. https://francescolelli.info/thesis/how-scientific-venues-work-an-heuristic/
- Learn how to read a scientific paper faster and more effectively. https://francescolelli.info/thesis/read-scientific-papers-quickly-and-effectively/
- Master the right features in MS-Word for handling the related work and managing the growing complexity of the task. https://francescolelli.info/thesis/how-to-use-references-in-word-a-few-tips-and-suggestions-for-your-thesis/
- Get more insights about related work, literature review and survey papers. https://francescolelli.info/tutorial/related-work-literature-review-survey-paper-a-collection-of-resources/
How to do a good thesis: the experimental and scientific part
- If you feel stuck: get an idea on “how to warm up your research engine” and do your first step. https://francescolelli.info/thesis/warming-up-the-research-engine/
- Get some inspiration from the work of other scientists and learn how to properly categorize the literature review. https://francescolelli.info/thesis/how-to-use-the-literature-review-for-your-research
- Familiarize yourself with sources that can provide data for your (master or bachelor) thesis. https://francescolelli.info/thesis/where-to-get-data-a-collection-of-resources-for-your-thesis/
- If you plan to write some programming code there are several free resources that can help you. https://francescolelli.info/programming/free-resources-that-will-warm-up-your-programming-environment/
- If you plan to write some programming code, get familiar with these best practices. https://francescolelli.info/how-to-be-a-better-programmer-the-mini-guide/
- If you plan to use a survey for scientific research you may want to consider these tips and suggestions. https://francescolelli.info/thesis/get-the-basics-on-doing-a-survey-for-scientific-research-purposes/
- Do this simple feasibility check if you plan to use an interview approach in your case study research https://francescolelli.info/thesis/should-you-use-a-case-study-for-your-thesis-in-information-management/
- <work in progress>
How to do a good thesis: the last mile
- Have you produced the first final draft of the thesis? Here you can find a simple set of rules and a checklist that can help you. https://francescolelli.info/thesis/simple-writing-rules-that-can-improve-the-quality-of-your-thesis/
- Are you close to finishing the thesis? Put your current draft to a (stress) test. https://francescolelli.info/thesis/the-navigation-test-put-your-thesis-to-a-stress-test/
The End Game
- Deal with the submission of your thesis and its defense in the proper way https://francescolelli.info/thesis/commencing-the-end-game-last-minute-issues-and-recommendations/
- Understand what Open Access is and how you can make the most of it. https://francescolelli.info/thesis/should-you-release-your-thesis-open-access/
- Consider the benefits (and the extra work) of publishing your thesis. Is it worth it? https://francescolelli.info/thesis/should-i-publish-my-thesis-the-good-the-bad-the-ugly/
- Now that your thesis has been submitted, it is time to prepare a killer presentation for the defense! https://francescolelli.info/thesis/the-art-and-the-skill-of-speaking-and-making-a-presentation
The End of the Journey
- Publish your thesis through the University Library. It will take less than one hour and will ensure some extra visibility for your work. https://francescolelli.info/thesis/publish-your-thesis-in-your-university-library/
- Learn what the future of your thesis could be. https://francescolelli.info/thesis/what-will-happen-to-your-thesis-after-your-graduation/
Thanks for taking the time to read such a long discussion! Based on your experience, is there anything missing or that requires some improvement? Drop me a line; I will be happy to hear from you.
Francesco
QM is the ultimate realist's utilization of the powerful differential equations, because the integer options and necessities of their solutions correspond to nature's quanta.
The same can be said for GR, whose differential manifolds, an advanced concept or branch of mathematics, have a realistic implementation in nature-compatible motional geodesics.
One century later, no new such feats have been possible, making one wonder whether the limit of heuristic mathematical supplementation in powerful ways toward realist results in physics has been reached.
I am interested in any articles that have heuristics in the title.
I am the author of Programmable Heuristics:
I have been wondering whether there may be methods of applying my heuristics to commercial applications like games, or to improving the web.
Does anyone have experience with something mildly related, even if it's just using HTML L-frames or embedded Excel sheets?
I would like to see heuristics applied in the real world, but I have seen few examples of that.
My sense is that it could be mathematical enough to simply organize text or determine outputs fairly easily, as it is designed to be simply organized.
Thanks for your help!
I'm reading a paper and I couldn't understand how exactly to read this plot. In the paper they say that it shows the belief distributions that result from using weighted A* with a weight of 2 and LSS-LRTA* for the sampling. The generated beliefs are very similar, with only minor differences for large heuristic values, where fewer samples have been observed.
Can I also know the name of this kind of plot?
thank you

I am interested in the use of Extreme Value Theory (EVT) to estimate the global optima of optimization problems (using heuristic and metaheuristic algorithms); however, it is a bit difficult to find such studies, since the use of EVT is not usually their main objective. Could you help me by sharing articles where this procedure is used? Thank you in advance.
I wonder if there is some advice that more senior researchers here can share on how to identify interesting topics that are likely to appeal to reviewers and editors, particularly in a hermeneutic social science approach.
Your inputs will be highly appreciated. Thank you
Having come to realize the limitations that metaheuristics have by dint of the NFL theorem, I came across this interesting field of hyper-heuristics (heuristics searching for heuristics) and read a couple of papers on the topic. I was wondering whether any of you can give me a list of recommended books for further learning. Online video courses will also be greatly helpful. Thanks in advance.
I have already gone deep into GP initialization methods, and I found that there are some traditional methods that ensure diversity in the population at the initialization phase of the GP process, like RHH, Grow, Full, etc. My question is whether there are other methods that serve the same purpose, or some hybridization with other heuristics?
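For reference, the Grow/Full/ramped-half-and-half (RHH) mechanics fit in a few lines, and many diversity-oriented alternatives in the literature are variations on this theme (e.g. seeding with structurally distinct trees, or semantics-aware initializers). A minimal illustrative sketch with a made-up function/terminal set:

```python
import random

FUNCS = ["+", "-", "*"]   # internal nodes (all arity 2 here, for simplicity)
TERMS = ["x", "y", "1"]   # leaves

def full(depth):
    """Full method: every branch reaches exactly the given depth."""
    if depth == 0:
        return random.choice(TERMS)
    return [random.choice(FUNCS), full(depth - 1), full(depth - 1)]

def grow(depth):
    """Grow method: branches may stop early, giving irregular shapes."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(FUNCS), grow(depth - 1), grow(depth - 1)]

def tree_depth(t):
    return 0 if isinstance(t, str) else 1 + max(tree_depth(c) for c in t[1:])

def ramped_half_and_half(pop_size, max_depth=5):
    """RHH: alternate Full and Grow, ramping the depth limit over 2..max_depth."""
    depths = list(range(2, max_depth + 1))
    pop = []
    for i in range(pop_size):
        d = depths[i % len(depths)]
        method = full if i % 2 == 0 else grow
        pop.append(method(d))
    return pop

random.seed(1)
population = ramped_half_and_half(20)
print(sorted({tree_depth(t) for t in population}))  # a mix of depths up to 5
```

The mix of shapes and depths is exactly what RHH buys over Grow or Full alone; hybrid initializers typically replace one of the two generators while keeping the ramping idea.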
As we know, heuristic algorithms are an effective way to search for substitution boxes (S-boxes) with high nonlinearity. Many nonlinearity calculations of S-boxes are needed in this process, which makes the speed of the nonlinearity calculation quite important. So, what is the approximate minimum time to calculate the nonlinearity of an 8x8 S-box (on an Intel Core i7 CPU, for example)? And what are the key points in programming it?
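On the programming side, the key point is the fast Walsh-Hadamard transform (FWHT): for each of the 255 nonzero output masks, transform the +/-1 truth table of the component Boolean function in O(2^n * n) operations instead of O(4^n), then take NL = 2^(n-1) - max|W|/2 and minimize over masks. For an 8x8 S-box that is roughly half a million integer operations in total, so an optimized C implementation on a modern i7 runs well under a millisecond; exact timings depend heavily on implementation, so treat any figure as rough. A Python sketch of the algorithm (correct, but much slower than C):

```python
def sbox_nonlinearity(sbox):
    """Nonlinearity of an 8x8 S-box via the fast Walsh-Hadamard transform."""
    n = 8
    size = 1 << n  # 256
    parity = [bin(v).count("1") & 1 for v in range(size)]  # precompute byte parity
    min_nl = size // 2
    for mask in range(1, size):  # all nonzero linear combinations of output bits
        # +/-1 truth table of the component function x -> <mask, S(x)>
        w = [1 - 2 * parity[mask & sbox[x]] for x in range(size)]
        # In-place fast Walsh-Hadamard transform, O(size * n)
        h = 1
        while h < size:
            for i in range(0, size, 2 * h):
                for j in range(i, i + h):
                    a, b = w[j], w[j + h]
                    w[j], w[j + h] = a + b, a - b
            h *= 2
        nl = size // 2 - max(abs(v) for v in w) // 2
        min_nl = min(min_nl, nl)
    return min_nl

# Sanity check: any linear or affine S-box has nonlinearity 0.
print(sbox_nonlinearity(list(range(256))))  # 0
```

Other practical speed-ups in C are bit-slicing the truth tables, precomputed parity tables, and early exit once a Walsh coefficient already exceeds the current best bound.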
I have a multi-objective optimization with the following properties:
Objective function: three minimization objective functions; two non-linear functions and one linear function
Decision variables: two real variables (bounded)
Constraints: three linear constraints (two bounding constraints and one relationship constraint)
Problem type: non-convex
Solution required: Global optimum
I have used two heuristic algorithms to solve the problem: NSGA-II and NSGA-III.
I have performed NSGA-II and NSGA-III for the following instances (population size, number of generations, maximum number of functional evaluations (i.e. pop size x no. of gen)): (100, 10, 1000), (100, 50, 5000), (100, 100, 10000), (500, 10, 5000), (500, 50, 25000), and (500, 100, 50000).
My observations:
Hypervolume increases with an increase in the number of functional evaluations. However, for a given population size, as the number of generations increases, the hypervolume reduces, which I think should rather increase. Why am I getting such a result?
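One common culprit for this pattern is the reference point (or objective normalization) being recomputed per run, which makes hypervolume values incomparable across instances; with a fixed reference point, a front that improves or gains nondominated points can only increase the indicator. It can help to verify the indicator's behavior by hand on tiny 2-objective fronts. A minimal sketch (minimization, hand-picked reference point):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. a reference point.

    Assumes every point dominates the reference point; dominated points
    are filtered out first.
    """
    nd = [p for p in front
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in front)]
    nd.sort()  # ascending f1 implies descending f2 on a nondominated front
    total = 0.0
    for i, (f1, f2) in enumerate(nd):
        next_f1 = nd[i + 1][0] if i + 1 < len(nd) else ref[0]
        total += (next_f1 - f1) * (ref[1] - f2)  # strip between f1 and next_f1
    return total

ref = (3.0, 3.0)
print(hypervolume_2d([(1, 2), (2, 1)], ref))              # 3.0
print(hypervolume_2d([(1, 2), (2, 1), (0.5, 2.5)], ref))  # 3.25: extra point helps
```

If your own hypervolume values shrink as generations grow under a truly fixed reference point, the front itself is likely degrading (e.g. loss of diversity or premature convergence), which points at the algorithm configuration rather than the indicator.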
Sometimes I have found an inconsistency gives a helpful clue of how to improve a theoretical investigation. Early on I viewed mistakes as hurdles. I still think they are hurdles but have many times found them to be helpful. My view is that it encourages persistence to know that mistakes are part of the process of figuring things out. Are there articles about the role of making mistakes in theoretical physics?
Everyone knows that optimization problems can be solved by mathematical programming techniques, whether linear, non-linear, mixed, etc., and also by heuristic techniques. Now, which are better: mathematical programming techniques or metaheuristic techniques?
Recently I came across 2 articles bearing uncanny similarities.
After some investigation, I suspect that this article:
might have been plagiarized from this one:
My question is: Should the below findings suggest any suspicion that one of these articles might have been plagiarized?
The percentage of similarity is not very high, around 150-200 words out of 14,000. But there are long phrases (sometimes as long as 16 consecutive words) appearing in both articles without any quotation marks. There is neither acknowledgement nor citation of each other.
Below are some of the similarities that I found:
Nahon (2008) Abstract: Gatekeeping theories have been a popular heuristic for describing information control for years, but none have attained a full theoretical status in the context of networks.
Mitchell et al. (1997) Abstract: Stakeholder theory has been a popular heuristic for describing the management environment for years, but it has not attained full theoretical status.
Nahon (2008), p. 1501: First, each attribute is a variable, not a steady state, and can change for any particular relationship among gatekeepers or during gatekeeper–gated relationships. p. 1501
Mitchell et al. (1997) p. 868: First, each attribute is a variable, not a steady state, and can change for any particular entity or stakeholder-manager relationship.
Nahon (2008), p. 1493: Salience refers to the degree to which gatekeepers give priority to competing gated claims.
Mitchell et al. (1997), p. 854: stakeholder salience - the degree to which managers give priority to competing stakeholder claims
Nahon (2008), p. 1493: However, as popular as the term has become and as richly descriptive as it is, there is little agreement among the different fields on its meaning and a lack of full theoretical status.
Mitchell et al. (1997), p. 853: Yet, as popular as the term has become and as richly descriptive as it is, there is no agreement on what Freeman (1994) calls "The Principle of Who or What Really Counts."
Nahon (2008), p. 1506: While static maps of gatekeepers are heuristically useful if the intent is to raise consciousness about “who or what really counts” or to specify a stakeholder configuration at a particular context and time, one should remember that this is a simplification of reality.
Mitchell et al. (1997), p. 879: Static maps of a firm's stakeholder environment are heuristically useful if the intent is to raise consciousness about "Who or What Really Counts" to managers or to specify the stakeholder configuration at a particular time point.
I tried to contact one author, and they replied that the other article had been "an inspiration" for them and admitted that they recycled the overall structure of the other article. Plausibly, they denied any allegation of plagiarism.
Being inexperienced in detecting plagiarism, I am uncertain whether this is a serious violation or academic misconduct.
So, again, I would like to ask:
1. Should my findings suggest any suspicion that one of these articles might have been plagiarized?
2. If the answer is "Yes", what should I do?
Any kind advice would be much appreciated.
If I am mistaken, I would like to send my apologies to the authors of both articles and to those who help enlighten my mind.
Attached to this discussion is an Excel file detailing the similarities that I found.
I want to ask a question regarding the approach to solving combinatorial optimization problems (COPs). Based on my reading, some researchers propose an exact approach to solve a COP rather than a heuristic approach. As is known, the exact approach may not be suitable for solving real-world COPs on a large scale, due to the computational time needed to provide the solution, while the heuristic approach can provide a near-optimal solution in reasonable computational time. My question is: why does the exact approach still become the choice for some researchers, rather than directly using the heuristic approach? Thank you.
Why do we use metaheuristic optimization algorithms to solve multi-level image segmentation, when machine learning and deep learning can also perform it?
I'm looking for a heuristic algorithm to solve the facility location problem.
I am looking to develop an overview/survey of specific experimental techniques and papers in which exploration is defined, measured, and analyzed as part of heuristic search (preferably for continuous domains).
Suggestions and references very much appreciated.
Hello
I would like to implement a network that consists of 16 nodes (see the figure below). After implementing it, I want to combine the network with a heuristic, namely the nearest neighbour heuristic. I already have the costs between the nodes, and the vehicle in the middle should travel the shortest route.
How can I proceed? Can anyone help me implement such a network and combine the heuristic with it, using MATLAB or Java?
I would like to implement a network that consists of a few nodes (see the figure below). After implementing it, I want to combine the network with a heuristic, namely the nearest neighbour heuristic. I have the costs between the nodes, and the vehicle in the middle should travel the shortest route.
How can I code it? I need code to implement such a network and combine the heuristic with it, using MATLAB.
I found some code that approximately matches my problem (see figure), but it computes the nearest neighbour directly, whereas I want to divide the task myself and only then have it compute the nearest neighbour.
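The nearest neighbour heuristic in the questions above is only a few lines once the costs are stored in a matrix; the Python sketch below uses a made-up 4-node cost matrix (not the 16-node network from the question), but the same logic ports directly to MATLAB or Java.

```python
# Nearest neighbour heuristic on a symmetric cost matrix.
# The 4-node costs below are illustrative; substitute your own.

def nearest_neighbour(cost, start=0):
    """Build a tour by always moving to the cheapest unvisited node."""
    n = len(cost)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda j: cost[current][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # return to the starting node
    length = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

cost = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
tour, length = nearest_neighbour(cost, start=0)
print(tour, length)  # [0, 1, 3, 2, 0] 18
```

Note that nearest neighbour is greedy: it gives a quick tour, not necessarily the shortest one, so it is usually followed by an improvement step such as 2-opt.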
Multi-Objective Particle Swarm Optimization (MOPSO) is known as a heuristic optimization technique that does not guarantee global optimality. Why is the algorithm nevertheless convenient for most of the targeted applications? Are there other potential solution approaches? Is it conceivable to use standard optimization solvers such as CPLEX?
I need a heuristic for the assignment problem, to be used to allocate tasks to two or more vehicles so that they can work on the same network. The heuristic should be easy to implement (so, for example, not a GA).
NOTE: the allocation of tasks can be, for example, vehicle 1 picking up goods from node A to node B while vehicle 2 picks up from C to D.
Some people are not impressed by the development of intuitive, near-optimal closed-form solutions to some business problems, because the exact optimal solutions can be obtained using a spreadsheet solver. The objective functions do not lead to exact closed-form optimal solutions, but the approximate closed-form solutions are very intuitive from a business perspective. My argument is that Little's Law is used to estimate average WIP levels when you know the average throughput rate and the average cycle time, and it is applied in many different contexts. Of course, you can model all of the complexities of the shop floor and make this calculation more accurate. But aren't we better off if we can come up with simple, intuitive equations that fit many business scenarios? Solving to the exact optimum is in fact not reliable either, because the parameters are not precise in the first place.
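Little's Law itself (L = λW: average WIP equals throughput rate times average cycle time) is a good example of such a simple, intuitive equation; the numbers below are made up purely to illustrate it.

```python
# Little's Law: average WIP = throughput rate * average cycle time.
# The figures are illustrative, not from any real shop floor.
throughput_per_hour = 12.0   # jobs completed per hour
cycle_time_hours = 2.5       # average time a job spends in the system
avg_wip = throughput_per_hour * cycle_time_hours
print(avg_wip)  # 30.0 jobs in process, on average
```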
I am attempting to design a membrane separation unit to separate a gas feed of approximately 94.1 mol% hydrogen, but I am having trouble finding performance equations, sizing parameters, and heuristics for doing so. Can anybody recommend any textbooks or reports to help with this? If it helps, the stream also contains carbon monoxide, carbon dioxide, nitrogen, and methane.
Hello, I am hoping to use Heuristic Inquiry to explore the lived (living) experiences of educators and online networks, and I would love to connect with other researchers who have used this methodology and might be able to share some hints and tips about things you have learnt along the way. A lot of the research I have been reading stresses that it is really difficult and not for everyone, so I am hoping to find people who would recommend it, and the transformative journey that they have been part of.
This question relates to my recently posted question: What are the best proofs (derivations) of Stefan’s Law?
Stefan’s Law is E is proportional to T^4.
The standard derivation includes use of the concepts of entropy and temperature, and use of calculus.
Suppose we consider counting numbers and, in geometry, triangles, as level 1 concepts, simple and in a sense fundamental. Entropy and temperature are concepts built up from simpler ideas which historically took time to develop. Clausius’s derivation of entropy is itself complex.
The derivation of entropy in Clausius’s text, The Mechanical Theory of Heat (1867) is in the Fourth Memoir which begins at page 111 and concludes at page 135.
Why does the power relationship E proportional to T^4 need to use the concept of entropy, let alone other level 3 concepts, which takes Clausius 24 pages to develop in his aforementioned text book?
Does this reasoning validly suggest that the standard derivation of Stefan’s Law, as in Planck’s text The Theory of Heat Radiation (Masius translation) is not a minimally complex derivation?
In principle, is the standard derivation too complicated?
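For reference, once the entropy-based thermodynamic identity is granted, the core of the standard derivation is short; the sketch below (a standard textbook argument for the photon gas, not a quotation from Planck or Clausius) shows where the T^4 comes from.

```latex
% Photon gas: energy density u depends on T only, and radiation
% pressure is p = u/3. The thermodynamic identity (itself derived
% via entropy) gives
\left(\frac{\partial U}{\partial V}\right)_T
  = T\left(\frac{\partial p}{\partial T}\right)_V - p .
% With U = uV and p = u/3 this becomes
u = \frac{T}{3}\,\frac{du}{dT} - \frac{u}{3}
\quad\Longrightarrow\quad
\frac{du}{dT} = \frac{4u}{T}
\quad\Longrightarrow\quad
u = a\,T^{4}.
```

So the fourth power drops out of a single separable differential equation; the conceptual weight of the derivation sits entirely in justifying the identity and the relation p = u/3, which is where entropy enters.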
I am looking for advice concerning a (supposedly) well-known practical issue: article overload. While doing my PhD I was convinced that everything that went through publication was worth reading and understanding. My opinion has evolved since then, for very practical reasons: lack of time to read the literature and the absolute necessity to pre-screen something before deciding whether it is worth reading.
For scientific papers, pre-screening can be tricky. Since the format is very standardized, as is the wording (nothing sounds more like a paper than a paper), I often end up reading half a dozen pages of a paper, annotating parts, spending time... before deciding I shouldn't spend time on it.
Do you have some "tricks" to share in order to reduce that waste of time? These "tricks" might be completely non-scientific, of course; I would still enjoy them.
Heuristics reduce the computation time of creating clusters from a set of data points, but selecting the right heuristic algorithm, with fine-tuning, is a challenging task. I want to know which metaheuristic algorithms are suitable for good performance in cluster building.
Hi, I am working on a research paper in which I want to compare the performance of several (meta)heuristics (including a GA) in solving a certain problem. I have run each algorithm several times and found that my GA is not able to find the good solution that the other (meta)heuristics find in a short time. It converges to a solution which I know is not the best, because the other algorithms converge to a much better one. I have increased the mutation rate to 0.2 in order to avoid getting trapped in a local optimum, and my crossover rate is 0.9.
I want to have an acceptable comparison/evaluation of the performance of these algorithms, so
my question is: is there a problem with my GA, or can I simply report the GA solution and explain that it performs poorly?
I have some optimal solutions in a discrete space and I want to apply a heuristic search using those solutions as attractors. I started using distances as cost functions, but I don't know whether this is a good approach.
I am programming a scheduling system using simulated annealing and I want to know whether this heuristic is suitable.
In recent years, many new heuristic algorithms have been proposed in the community. However, it seems that they all follow a similar concept and have similar benefits and drawbacks. Also, for large-scale problems with higher computational cost (real-world problems), it would be inefficient to use an evolutionary algorithm. These algorithms produce different designs in single runs, so they look unreliable. Besides, heuristics have no mathematical background.
I think that the hybridization of mathematical algorithms and heuristics will help to handle real-world problems. Hybrids may be effective in cases in which the analytical gradient is unavailable and finite differences are the only way to obtain gradients (the gradient information may contain noise due to simulation error). So we can benefit from gradient information while still having a global search over the design domain.
There are some hybrid papers in the state of the art. However, some people think that hybridization loses the benefits of both methods. What do you think? Can it be beneficial? Should we improve heuristics with mathematics?
I'm interested in the phenomenological method/paradigm, but have so far not found any papers or projects concerning its utility in interventions. Are heuristics such as Moustakas's simply not applicable in the therapeutic setting, or am I merely too inexperienced to find the right sources?
Instead of manual tuning of an algorithm's parameters, it is recommended to utilize automatic algorithm configuration software, mostly because it has been shown to increase an algorithm's performance manyfold. However, there are some differences among the proposed configuration software, and besides those listed in (Eiben, Smit, 2011) it is important to gather experiences from researchers. I would like to hear how one decides on the stopping criteria, on parameter values, or on heuristic steps within a stochastic algorithm... there are so many questions.
I just heard of the term "black-box optimization". I am a little confused about what it means! As the name suggests, and as I understand it, you are trying to design an algorithm that optimizes an objective function, but the algorithm does not know (or is not allowed to use) any prior knowledge about the structure of the function.
So what is not allowed in black-box optimization:
Using any information derived from the analytical expression to adjust the algorithm?
(So if I know that a given function is multimodal, and I know its global minimum beforehand, and I'm using a heuristic algorithm, then I'm not allowed to adjust the parameters in a way that I know works for this class of functions. Is this correct?)
If this is true, then what is the point of black-box optimization?
The choice of something to ruin can be an implicit choice as to what should be preserved. A heuristic for preservation can thus lead to a heuristic for ruin. I've had what I think is a very interesting result for what to preserve (common solution components) in the context of genetic crossover operators that use constructive (as opposed to iterative) heuristics. I tried to share it with the Ruin and Recreate community with no success.
I guess my real question is -- How should I Ruin and Recreate this research to make it more relevant to Ruin and Recreate researchers?
Conference Paper The GENIE is out! (Who needs fitness to evolve?)
Any decision-making problem when precisely formulated within the framework of mathematics is posed as an optimization problem. There are so many ways, in fact, I think infinitely many ways one can partition the set of all possible optimization problems into classes of problems.
1. I often hear people label metaheuristic and heuristic algorithms as general algorithms (I understand what they mean), but I am wondering: can we apply these algorithms to arbitrary optimization problems from any class? More precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?
2. Then I thought: if we assume the answer to 1 is yes, then by extending the argument we could also re-formulate any given problem to be attacked by any algorithm we desire (of course, at a cost), and then it is just a useless tautology.
I'm looking for different insights :)
Thanks.
Dear fellow researchers,
I need two to three non-Indian reviewers for the research area of scheduling / optimization / metaheuristics / operations research. All the journals are asking for reviewers of other nationalities; since I don't know anyone, could somebody please volunteer to be my reviewer?
Thanks in advance.
I have programmed several heuristic algorithms in my PhD thesis.
The last algorithm gave me very good results, both in objective-function value and even in runtime, compared to the other algorithms implemented before. Is there a formula to calculate the gain, and how should I interpret it? Thanks in advance.
Hi,
I have designed a metaheuristic algorithm and used the Taguchi Method on a small example. Should I repeat these experiments for each problem, or is that enough? For my small example I can only create 38 neighbour solutions, but for my bigger problem I can create 77, and I think it matters how many neighbour solutions I can make and how many I want to create.
PS: the only difference between the two problems is their size.
What is the difference between heuristic and metaheuristic algorithms? How can we tell whether an algorithm is heuristic or metaheuristic? Thank you in advance.
Is there really a significant difference between the performance of the different metaheuristics other than "ϵ"?!!! I mean, at the moment we have many different metaheuristics and the set keeps expanding. Every once in a while you hear about a new metaheuristic that outperforms the other methods, on a specific problem instance, by ϵ. Most of these algorithms share the same idea: randomness with memory, or selection, or call it what you like, to learn from previous steps. You see at MIC, CEC, and SIGEVO many repetitions of new metaheuristics. Does it make sense to stay stuck here? Now the same repeats with hyper-heuristics and.....
I am preparing a comparison between a couple of metaheuristics, but I would like to hear some points of view on how to measure an algorithm's efficiency. I have thought of using some standard test functions and comparing the convergence time and the value of the evaluated objective function. However, any comments are welcome and appreciated.
How differently do they perform in reaching the global optimum?
As you may know, there are different mathematical tools and techniques which we can combine or hybridize with heuristic techniques to address their entrapment in local minima and their convergence issues. I know two such techniques, namely chaos theory and the Lévy distribution, as I have used them to increase the convergence speed of the Gravitational Search Algorithm (GSA). So my question is: can you name and briefly explain other mathematical techniques which we can combine with optimization algorithms in order to make them fit for solving complex real-world problems?
Thank you.
Please, I need recommendations on texts or literature that can improve my knowledge and skills on the tuning of control systems, ranging from sliding mode to LQR/LQG and others. I always have problems at this stage, after the rigor of modeling.
Most control design problems involve tuning heuristically. In my opinion, this is randomness without strategy. Even PID control with the popular Ziegler-Nichols method still involves randomness!
There should be a way to know the range of tuning.
I am trying to understand whether the PERMA theory is a good theory. Can the theory be generalized? Can the theory produce solutions to real life problems?
Hi,
I've recently read that the use of random keys in an RKGA (encoding phase) is useful for problems that require permutations of the integers, and for which traditional one- or two-point crossover presents feasibility problems.
For example, consider a 5-node TSP instance. Traditional GA encodings of TSP solutions consist of a stream of integers representing the order in which nodes are to be visited by the tour. But one-point crossover, for example, may result in children with some nodes visited more than once and others not visited at all.
My question is: if we don't have a feasibility problem and all our solutions are feasible, is it still correct to apply an RKGA?
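To make the feasibility argument concrete, the sketch below decodes random keys for a 5-node TSP (an illustrative instance, not from any paper): sorting the keys yields a permutation, so any crossover of two key vectors still decodes to a valid tour.

```python
# Random-key decoding for a 5-node TSP (made-up key values).
# A solution is a vector of keys in [0, 1); sorting node indices by
# key yields the visiting order, so crossover can never duplicate or
# drop a node.

def decode(keys):
    """Return the permutation obtained by sorting nodes by their key."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

parent_a = [0.46, 0.91, 0.33, 0.75, 0.51]
parent_b = [0.84, 0.32, 0.64, 0.20, 0.09]

# One-point crossover at position 2: the child is still just a vector
# of keys, so infeasibility cannot arise.
child = parent_a[:2] + parent_b[2:]
print(decode(parent_a))  # [2, 0, 4, 3, 1]
print(decode(child))     # [4, 3, 0, 2, 1]
```

If the encoding is such that crossover already preserves feasibility, the random-key layer adds an extra decoding step without buying anything, which is the crux of the question above.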
According to French (2001), decision models can be used in descriptive, normative, or prescriptive analysis. While there is a lot of research on normative models (neoclassical) and descriptive ones (mostly behavioral economics), when searching the various databases I can see that the prescriptive literature is really thin. I am therefore asking the community: is there any peer-reviewed prescriptive-model article for real estate investment to recommend?
Hello scientists,
I'm looking for a detailed comparison between routing machines (as I call them).
Something like a state-of-the-art review, survey, or tabular comparison between different alternatives for offline point-to-point routing frameworks (like Graphhopper or OpenStreetMapRoutingMachine).
Could you point me to some documents where I can research the following information:
- which map data does the framework work on (not necessarily OpenStreetMap data)
- can the framework consider traffic data provided by me
- is it possible to calculate the fastest route by time
- does the framework provide the functionality to calculate a route with many stops
- if yes, how many
- which routing heuristic is used
- does the routing heuristic consider given time windows for stops
- how long does it take on average to route several scenarios
- what information do the framework's routing functions provide as output (step-by-step instructions, polyline, ...)
- do I have to pay for the framework
- if yes, how much
Thank you very much,
Richard
I'm working on a helmet impact test in which, when I run the front impact, a warning appears about warpage angle and violation of a heuristic criterion.
Dear all,
I have developed a mathematical model (convex mixed-integer nonlinear programming) in which there is only one nonlinear constraint (which is not quadratic). What is the best method to tackle this problem? Thanks.
Hi,
I just want to make sure that I understand the mechanics of NSGA-II (the non-dominated sorting genetic algorithm) for a multiobjective optimization problem, since I am not satisfied with the resources I have. I would be grateful if anyone could recommend a good paper or source to read more about NSGA-II.
Here is what I got so far:
1- Start with a random population, call it P_0, of N individuals.
2- Generate an offspring population Q_0 of size N from P_0 (using binary tournament selection, crossover, and mutation).
3- Let R_0 = P_0 U Q_0.
4- While (itr <= maxitr) do
5- Identify the non-dominated fronts in R_0: (F_1, ..., F_j).
6- Create P_1 (of size N) as follows:
for i = 1:j
if |P_1| + |F_i| <= N
set P_1 = P_1 U F_i
else
add the least crowded N - |P_1| solutions from F_i to P_1
end
end
7- Set P_0 = P_1 (the surviving population becomes the new parents).
8- Generate an offspring population Q_0 from P_0 and set R_0 = Q_0 U P_0.
9- itr = itr + 1
end (do)
My question (assuming the previous algorithm is correct): how do I generate Q_0 from P_0 in step 8?
Do I choose any 2 solutions from P_0 at random and mate them, or is it better to select the parents according to some condition, e.g. that those with the highest rank should mate?
Also, if you can point me to some well-written papers on NSGA-II, I would be grateful.
Thanks
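On the question of generating offspring: in the NSGA-II of Deb et al. (2002), parents are picked by binary tournament using the crowded-comparison operator (lower front rank wins; ties are broken by larger crowding distance), not uniformly at random. A minimal sketch, where each individual is a hypothetical (solution, rank, crowding_distance) tuple with rank and distance assumed already computed by the non-dominated sort:

```python
import random

# Binary tournament with the crowded-comparison operator, as used in
# NSGA-II to select parents for crossover and mutation.

def crowded_better(a, b):
    """Prefer lower rank; break rank ties by larger crowding distance."""
    if a[1] != b[1]:
        return a if a[1] < b[1] else b
    return a if a[2] >= b[2] else b

def binary_tournament(population):
    """Draw two distinct individuals at random; keep the better one."""
    a, b = random.sample(population, 2)
    return crowded_better(a, b)

population = [
    ("x1", 0, 1.2),            # rank 0, crowding distance 1.2
    ("x2", 0, 0.4),
    ("x3", 1, float("inf")),   # boundary solution of front 1
]
parent = binary_tournament(population)
print(parent)
```

Running N such tournaments (then crossover and mutation) yields the offspring population Q of size N.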
Hello, everyone. I am a student of electrical engineering and my research field is related to the optimization of a power system.
I know that the algorithm we should choose depends on our problem, but there are lots of heuristic and metaheuristic algorithms available to choose from. It also takes some time to understand a specific algorithm, and only afterwards may we come to know that the chosen algorithm was not the best for our problem. So, given my problem, how can I choose the best algorithm?
Is there any simple solution available that can save my time as well?
Thank you for your precious time.
job shop scheduling problem using dynamic programming
I am a theoretical physicist and I sometimes use Mathematica to algebraically manipulate large equations. I use it heuristically, though, and I know a lot of researchers use Mathematica for symbolic computation.
What are the best ways to learn it?
Are there any books or online courses to understand it?
What are good practices?
Do you think it is necessary to have software that contains metaheuristic algorithms like GA, SA, ...
in a package that can solve different modified problems?
Software for developing heuristics in a time-scheduled network.
Recently, I have started learning basic mineralogy and I find it quite difficult to master. It would help me a lot if you could share some tips on this field of study, so that I can easily identify and accurately describe the minerals being observed under plane-polarised and cross-polarised light.
What is the best next step if you suspect that your evolutionary algorithm recombines and mutates in an inferior way for your nonconvex nonlinear optimization problem?
Write your own heuristic?
How do you implement it?
In psycholinguistic norming studies, 15-20 raters per word per scale are somewhat of a rule of thumb. However, I cannot find the psychometric explanation or justification for this. Does anyone have a reference addressing this question?
How can I ensure that my GA code is good enough?
I guess I should test it on various test functions that differ in structure; that's one obvious thing. But what about the number of chromosomes one starts with?
Is it problem-dependent? The same question applies to the number of runs.
I would be very grateful if you could also send me some helpful papers on this issue.
Thanks!
Dear Experts,
Greetings!
Looking for your kind opinion and Ref regarding the following CB-SEM issues
Being new users of AMOS, a few of us are facing problems regarding a few specific questions...
Q1: In the model, if a latent variable has 2-3 sub-constructs and those sub-constructs have 3, 4, and 2 items respectively, is that correct? (As mentioned in many articles, the rule of thumb is to have a minimum of 3 items.)
Q2: If a latent variable has 2 sub-constructs and, during the respecified SEM, one sub-construct's factor loading comes out at .50, should we keep that construct? If we need to take it out, should we drop the variable?
Q3: During the respecified SEM model, if GFI, TLI, CFI, and RMSEA meet the thresholds but the p-value shows .000, can we call that model a good fit, or must we make sure p is .05 or above?
Q4: For a continuous moderator (e.g. Work-Life Balance or Locus of Control) with a latent construct, do we need two models, named constrained and unconstrained (Byrne, 2004)? (If this is the only process in AMOS, could we have the steps?)
Though it's quite a long text, this should make analysis easy for new AMOS users, including myself 😊
Any expert opinion and ref highly appreciated.
Thanks,
To optimize a production system by planning ~1000 timesteps ahead, I am trying to solve an optimization problem with around 20,000 dimensions, containing binary and continuous variables and several complex constraints.
I know the provided information is little, but can someone give a hint which approach would be suitable for such big problems? Would you recommend some metaheuristic or a commercial solver?
Within my Grounded Theory study I want to analyse attitudes and opinions of a group of people towards a specific topic. Therefore, I would like to adapt the coding paradigm of Strauss & Corbin to this research goal. I read in literature that this is not only possible but also recommendable but I am just not sure how to do it. Has anyone experience in doing so?
Various metaheuristic techniques exist: PSO, ACO, GA, etc. I want to know which one is best to apply in the area of routing optimization in VANETs.
Ignoring the interpretative significance of the feature selection mechanism, what makes feature selection worth performing compared to feature extraction algorithms, which focus on deriving axes where features are independent and more discriminative? Ultimately we are trying to achieve the same goals: features that are independent and maximally correlated with the output variable, dimension reduction, etc.
So my question is: is there any heuristic or approach by which we can decide when to go with feature selection versus feature extraction based on our application, or do we just try both and find out which works better?
If anyone can point out some research papers focused on this, that would help me out.
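The practical contrast between the two options can be made concrete with a toy sketch (random data, purely illustrative): selection keeps original, interpretable columns, while extraction builds new axes that mix all columns.

```python
import numpy as np

# Toy contrast: feature *selection* keeps a subset of the original
# columns; feature *extraction* (here, the top principal component
# via SVD) builds new axes that mix every column.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = 2.0 * X[:, 1] + rng.normal(scale=0.1, size=50)  # y driven by column 1

# Selection: rank original features by |correlation with y|.
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
best = int(np.argmax(scores))   # stays interpretable: a column index

# Extraction: first principal component of the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                # a new axis, no longer a single feature

print(best)        # the column y was built from
print(pc1.shape)
```

A rough heuristic follows from the sketch itself: if downstream interpretability or cheap measurement of few variables matters, selection keeps that; if only predictive compactness matters, extraction can pack more variance per dimension.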
We are conducting research that includes robot learning and machine learning using ANFIS (Adaptive Neuro-Fuzzy Inference System).
I have a question about recent machine learning studies.
Compared with five years ago, for training machine learning algorithms there are fewer citations of studies that use heuristic methods such as Particle Swarm Optimization (PSO) or Genetic Algorithms (GA).
Previous studies have shown that these methods are effective, but why are they not used much these days?
I am looking for references showing that GA solutions do not necessarily converge to the optimal solution, to defend the use of an integer program, i.e. an exact solution. I want to criticize heuristic and metaheuristic algorithms, especially GAs.
I think a book might be a good reference, but I am not sure which one to use!
To be more specific: if I have ship speeds of 3 m/s, 6 m/s, 9 m/s and a maximum of 13 m/s, how can I relate these speed values to the propeller rpm so that the ship can be maneuvered at those speeds?
For HPLC, which mobile phases are best to start with for method development? What is a good approach to trying different buffers/mobile phases for method development?
I've looked through a few papers, but as I'm new to using HPLC, I wanted to know if there is a good 'rule of thumb' for which types of buffer solutions to try first in method development, and why.
I calculated Lin's (1989) concordance correlation coefficient to get the test-retest reliability (rtt).
I had 78 subjects and the second assessment occurred after 6 weeks.
Are there any references for rules of thumb on what is considered a good/acceptable value? I have read values between .7 and .9 for Pearson correlations, but did not find values for the concordance correlation coefficient (which is usually a bit lower). Furthermore, rules of thumb are oftentimes stated without references.
I appreciate any suggestions.
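For anyone comparing their own values, Lin's coefficient can be computed directly from its definition, ρ_c = 2s_xy / (s_x² + s_y² + (μ_x − μ_y)²); the data below are made up for illustration.

```python
import numpy as np

# Lin's (1989) concordance correlation coefficient between two
# assessments x and y, computed from its definition.

def concordance_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()          # population (biased) variances
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

x = [2, 4, 6, 8, 10]          # first assessment (illustrative)
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # second assessment, 6 weeks later
print(round(concordance_ccc(x, y), 3))  # 0.999
```

Unlike Pearson's r, the denominator penalises both location and scale shifts between the two assessments, which is why the CCC is usually somewhat lower than r on the same data.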
I'm looking for a rule of thumb to work out the potential equivalent to Cu/CuSO4 if I'm using a calomel reference electrode.
I have recently begun analyzing electrophysiology data and I want to make sure that I am recording true events above the noise of the recording.
Is there a way to determine a good threshold setting? Some events are clearly true responses, but some are smaller and may still be events. How does one determine this?
Is there a 'rule of thumb'?
The Genetic Fallacy is an informal fallacy of reasoning, viz. one of the so-called fallacies of irrelevance, in which an argument or claim is judged by someone's or something's history, origin, or source; that is, an idea or argument is either accepted or rejected because of its source rather than, allegedly, its merit.
Are there any circumstances under which an argument based on an idea's or a concept's origin might have merit? Please explain and/or give an example.
Hi,
I have no experience in DNA fragment analysis, but for part of a project I need to send samples to be analysed by capillary / fragment analysis.
How much is usually needed to differentiate two samples? I just need a rule of thumb: is it 0.5 µL (as I read in parts of the manual of a fragment analyser), or 2 µL, or 10 µL?
Best,
I have proposed a new set of heuristics for evaluating the usability of mobile application interfaces that are designed in a language other than English.
Currently, I am testing this set through a Heuristic Evaluation (HE) procedure; there is also another HE using Nielsen's heuristics. Both heuristic sets will be used to evaluate a sample of mobile apps, and the usability problems obtained will be used to determine which set is better at evaluating mobile apps with non-English interfaces.
How can I say that my heuristics scored better than Nielsen's in terms of language differences? Nielsen's heuristics are broad and can detect many problems, but these may not be related to language, so I need to ignore those that are not. Is there a method that can be used as a baseline to differentiate the found problems based on the language context?
Thanks
I am working on a seascape genetics study using 30 microsatellites. I know Fst can be skewed by highly polymorphic loci. I have run an analysis with interesting results, but I want to make sure they are not an artifact of the metric. Is there a rule of thumb about which other genetic distance metrics are better for microsatellites? I recently read a paper about AFD, but I do not think there is software available to calculate it.
Hello
I am trying to design a nature-inspired (heuristic) algorithm for robotic path planning. But I wonder whether there are references on designing heuristic algorithms, and on the general steps or general form for designing this sort of algorithm.
Thanks a lot in advance,
Valid inequalities for a Branch-and-Cut algorithm for the VRP or the Location Routing Problem: I am looking for a reference on applying the comb inequality to the location routing problem. It must detect when a client node is on two routes linked to two different facilities.
In the mixed-variable heuristic optimization domain, what is done when a categorical variable determines the existence of continuous or ordered discrete variables in each possible solution?
To illustrate, imagine an optimization problem to determine the best tool to cut paper.
In this problem, a variable tool can have the values "knife" or "scissors".
- If its value is "scissors", there's the continuous-valued blade_size variable.
- If it's "knife", there is the same blade_size continuous variable and also a num_of_teeth discrete variable
How can I deal with this problem using some metaheuristic designed to handle categorical, continuous, and ordered discrete variables?
My first thought was to set the problem to the maximum possible dimensionality and, after choosing the value of the categorical variable, select (with if statements) which other variables are going to be optimized and used to evaluate the solution.
This will probably work, but it seems naive to me. Do other, more sophisticated methods to deal with this kind of problem exist? If so, what are they?
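The "full-length genome plus masking" idea described in the question can be sketched as follows. Variable names (tool, blade_size, num_of_teeth) follow the example above; the objective itself is a made-up placeholder, and which variables a knife or scissors carries mirrors the question's setup rather than any real model.

```python
# Every candidate carries all variables; the categorical choice decides
# which ones the objective actually reads, and inactive variables are
# simply ignored during evaluation.

def evaluate(solution):
    tool = solution["tool"]  # categorical: "knife" or "scissors"
    if tool == "scissors":
        # only blade_size is active; num_of_teeth is ignored
        return solution["blade_size"] * 1.5
    else:  # "knife": blade_size plus a tooth-count term
        return solution["blade_size"] + 0.1 * solution["num_of_teeth"]

candidate = {"tool": "knife", "blade_size": 4.0, "num_of_teeth": 12}
print(evaluate(candidate))  # 4.0 + 0.1 * 12 = 5.2
```

One known drawback of this scheme is that inactive variables still drift under mutation and crossover, which wastes search effort; that is exactly the inefficiency that more structured approaches to conditional search spaces try to avoid.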
As can be read in the webpage of the metaheuristics network (http://www.metaheuristics.org), a metaheuristic is a set of concepts that can be used to define heuristic methods that can be applied to a wide set of different problems. In other words, a metaheuristic can be seen as a general algorithmic framework which can be applied to different optimization problems with relatively few modifications to make them adapted to a specific problem. Examples of metaheuristics include evolutionary algorithms, simulated annealing, tabu search, iterated local search, and ant colony optimization. Metaheuristics have been widely used to solve different combinatorial (and numerical) optimization problems, with the goal of obtaining a very good solution (but perhaps not the optimum) to NP-complete problems in which exact search methods are intractable even for small problem sizes.