Science topic

# Algorithms - Science topic

Explore the latest questions and answers in Algorithms, and find Algorithms experts.
Questions related to Algorithms
• asked a question related to Algorithms
Question
Hi,
Does anyone know a good way to mathematically define/identify the onset of a plateau for a curve y = f(x) in a 2D plane?
A bit more background: I have a set of curves from which I'd like to extract the x values where the "plateau" starts, by applying a consistent definition of plateau onset.
Thanks,
Yifan
Hi!
I would propose to look for points where the derivative is close to zero.
You can identify the values of x where the absolute value of the derivative is below a small threshold ε (e.g. 1e-5), so that you capture derivatives close to zero.
The plateau onset can then be taken as the smallest x at which the derivative falls below ε and stays below it for the rest of the curve; the largest such x marks the end of the flat range.
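As a hedged sketch of that idea (the function name and threshold are mine, not from the thread), the derivative-threshold test can be coded like this:

```python
import math

def plateau_onset(xs, ys, eps=1e-5):
    """Smallest x where the finite-difference derivative |dy/dx| drops
    below eps and stays below it for the rest of the curve; None if the
    curve never plateaus."""
    derivs = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
              for i in range(len(xs) - 1)]
    for i in range(len(derivs)):
        if all(abs(d) < eps for d in derivs[i:]):
            return xs[i]
    return None

# Example: y = 1 - exp(-x) flattens out as x grows
xs = [0.1 * i for i in range(200)]
ys = [1 - math.exp(-x) for x in xs]
onset = plateau_onset(xs, ys, eps=1e-4)
```

For noisy measured curves it is usually worth smoothing y (e.g. a moving average or a Savitzky-Golay filter) before differentiating, otherwise the derivative rarely stays below ε.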
• asked a question related to Algorithms
Question
If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also develop search engines within their range of Internet information services are working on technological solutions for implementing ChatGPT-type artificial intelligence into those search engines. Currently there are discussions about the social and ethical implications of such a combination of these technologies being offered in open access on the Internet. The considerations concern the possible level of risk of manipulation of the information message in the new media, potential disinformation resulting from a specific algorithm model, disinformation affecting the overall social consciousness of globalised societies of citizens, the possibility of a planned shaping of public opinion, and so on.
This raises a further issue: the legitimacy of creating a control institution to carry out ongoing monitoring of the objectivity, independence, ethics, etc. of the algorithms used in implementing ChatGPT-type artificial intelligence in Internet search engines, including those search engines that top the rankings of Internet users' tools for increasingly precise and efficient searches for specific information on the Internet. If such a system of institutional control on the part of the state is not established, or if a control system involving the companies developing these technological solutions does not function effectively or does not keep up with the technological progress taking place, there may be serious negative consequences in the form of an increase in the scale of disinformation in the new Internet media. How important this may become in the future is evident from what is currently happening around the social media portal TikTok.
On the one hand, TikTok has been the fastest-growing new social medium in recent months, with more than 1 billion users worldwide. On the other hand, an increasing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops, smartphones, etc. used for professional purposes by employees of public institutions and/or commercial entities. It cannot be ruled out that new types of social media will emerge in the future in which the above-mentioned solutions implementing ChatGPT-type artificial intelligence into online search engines will find application: search engines operated by Internet users on the basis of intuitive feedback and automated profiling of the search engine to a specific user, or on the basis of multi-option, multi-criteria search controlled by the Internet user for precisely specified information and/or data. New opportunities may arise when the artificial intelligence implemented in a search engine is applied to multi-criteria searches for specific content, publications, persons, companies, institutions, etc. on social media sites and/or on web-based publication-indexing sites and knowledge bases.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
If tools such as ChatGPT, after the necessary updating and adaptation to current Internet technologies, are combined with search engines developed by Internet technology companies, search results may be shaped by complex algorithms: by generative artificial intelligence trained to use and improve models for advanced intelligent search of precisely defined topics, and by intelligent search systems based on artificial neural networks and deep learning. If such solutions are created, there is a risk of deliberate shaping of the algorithms of these advanced Internet search systems, and thus a risk that Internet search engine technology companies interfere with and influence search results, thereby shaping the general social awareness of citizens on specific topics.
What is your opinion on this issue?
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
• asked a question related to Algorithms
Question
How to create a system of digital, universal tagging of various kinds of works, texts, photos, publications, graphics, videos, etc. made by artificial intelligence and not by humans?
How to create a system of digital, universal labelling of different types of works, texts, photos, publications, graphics, videos, innovations, patents, etc. performed by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification, etc. should be different for products of artificial intelligence?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to create a system for the digital, universal marking of different types of works, texts, photos, publications, graphics, videos, innovations, patents, etc. made by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification, etc. should be different for products of artificial intelligence?
What is your opinion on this subject?
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Some technology companies offering various Internet services have already announced the creation of a system for the digital marking of creations, works, studies, etc. created by artificial intelligence. These companies have probably already noticed that it is possible to create standards for such digital marking, and that this can become another factor of competition and market advantage.
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
• asked a question related to Algorithms
Question
Blockchain is a distributed database of immutable records called blocks, which are secured using cryptography. Each block contains the previous block's hash, transaction details, a nonce, and a target hash value. Financial institutions were the first to take notice of it, as it was, in simple words, a new payment system.
A block is the place in a blockchain where data are stored. In the case of cryptocurrency blockchains, the data stored in a block are transactions. These blocks are chained together by adding the previous block's hash to the next block's header. This keeps the order of the blocks intact and makes the data in the blocks immutable.
A block is like a record of the transaction. Each time a block is verified, it gets recorded in chronological order in the main Blockchain. Once the data is recorded, it cannot be modified.
Blocks are data structures within the blockchain database, where transaction data in a cryptocurrency blockchain are permanently recorded. A block records some or all of the most recent transactions not yet validated by the network. Once the data are validated, the block is closed. Then, a new block is created for new transactions to be entered into and validated.
A block is thus a permanent store of records that, once written, cannot be altered or removed...
Blocks and blockchains are not used solely by cryptocurrencies. They also have many other uses...
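The hash-linking described above can be sketched in a few lines of Python (a toy illustration, not a real blockchain; the field names are hypothetical):

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Create a block whose header stores the previous block's hash,
    chaining it to its predecessor. The block's own hash is the SHA-256
    digest of its header."""
    header = {"prev_hash": prev_hash, "transactions": transactions}
    block_hash = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "hash": block_hash}

# Build a two-block chain: tampering with the genesis block's data would
# change its hash and break the link stored in block2's header.
genesis = make_block(["alice->bob:5"], prev_hash="0" * 64)
block2 = make_block(["bob->carol:2"], prev_hash=genesis["hash"])
```

This is why the records are effectively immutable: changing any transaction changes that block's hash, which invalidates every later block's `prev_hash` link.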
• asked a question related to Algorithms
Question
Hello everyone, I want to perform a division operation in Verilog HDL. Please suggest an algorithm for division in which the number of clock cycles taken is independent of the inputs; that is, dividing any number a by any number b should take the same number of clock cycles for every pair of a and b.
Here is a useful link that I found, with the block diagrams and Verilog codes
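Restoring (or non-restoring) division fits this requirement: it performs exactly one shift/subtract step per quotient bit, so the cycle count depends only on the operand width, never on the operand values. A hedged Python model of the per-cycle behaviour (in the Verilog version each loop iteration would be one clock cycle):

```python
def restoring_divide(a, b, nbits=16):
    """Restoring division: exactly `nbits` shift/subtract iterations
    regardless of the values of a and b, so a hardware implementation
    has a fixed latency of `nbits` cycles."""
    assert 0 <= a < (1 << nbits) and b > 0
    remainder, quotient = 0, 0
    for i in range(nbits - 1, -1, -1):          # one iteration per bit
        remainder = (remainder << 1) | ((a >> i) & 1)  # shift in next dividend bit
        if remainder >= b:                      # trial subtraction succeeds
            remainder -= b
            quotient |= 1 << i                  # set this quotient bit
    return quotient, remainder
```

Note the loop bounds never depend on a or b, which is exactly the data-independent timing you are asking for.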
• asked a question related to Algorithms
Question
🔴Human-GOD coevolution & the Religion-type of the future (technology; genetics; medicine; robotics; informatics; AI Algorithm & Quantum PC;..)🔴
According to the (possible) technological and informatics evolution of Homo sapiens from 2000 to 2100,
which Religion-type shows the best resistance-resilience??
What will be the main Religion-type in the future??
--One GOD.
--Multiple and diverse GOD.
--Absence of Transcendence.
Moreover: GOD will be an evolutionary step of Human??
In this link (chapter pag.40) there is a quasi-fantasy scenario ...
Other papers of this series
🟥
Is this key🔴 area {Posterior Cortical “Hot Zone”} the brain zone where the proteins of #self are stored?? A new paper🔻 shows some intriguing results. See VanGELO Assoluto for an extreme view...
🔻🔻Surge of neurophysiological coupling and connectivity of gamma oscillations in the dying human brain. PNAS, May 1, 2023, vol.120. PMID:37126719 -- DOI:10.1073/pnas.2216268120🔻🔻
Significance.--- Is it possible for the human brain to be activated by the dying process? We addressed this issue by analyzing the electroencephalograms (EEG) of four dying patients before and after the clinical withdrawal of their ventilatory support and found that the resultant global hypoxia markedly stimulated gamma activities in two of the patients. The surge of gamma connectivity was both local, within the temporo–parieto–occipital (TPO) junctions, and global between the TPO zones and the contralateral prefrontal areas. While the mechanisms and physiological significance of these findings remain to be fully explored, these data demonstrate that the dying brain can still be active. They also suggest the need to reevaluate the role of the brain during cardiac arrest.
Abstract.--- The brain is assumed to be hypoactive during cardiac arrest. However, animal models of cardiac and respiratory arrest demonstrate a surge of gamma oscillations and functional connectivity. To investigate whether these preclinical findings translate to humans, we analyzed electroencephalogram and electrocardiogram signals in four comatose dying patients before and after the withdrawal of ventilatory support. Two of the four patients exhibited a rapid and marked surge of gamma power, surge of cross-frequency coupling of gamma waves with slower oscillations, and increased interhemispheric functional and directed connectivity in gamma bands. High-frequency oscillations paralleled the activation of beta/gamma cross-frequency coupling within the somatosensory cortices. Importantly, both patients displayed surges of functional and directed connectivity at multiple frequency bands within the posterior cortical “hot zone,” a region postulated to be critical for conscious processing. This gamma activity was stimulated by global hypoxia and surged further as cardiac conditions deteriorated in the dying patients. These data demonstrate that the surge of gamma power and connectivity observed in animal models of cardiac arrest can be observed in select patients during the process of dying.
• asked a question related to Algorithms
Question
How should a system of institutional control over the development of advanced artificial intelligence models and algorithms be built so that this development does not get out of hand and lead to negative consequences that are currently difficult to predict?
Should the development of artificial intelligence be subject to control? And if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built?
Why are the creators of leading technology companies developing ICT, Internet technologies, Industry 4.0, including those developing artificial intelligence technologies, etc. now calling for the development of this technology to be periodically, deliberately slowed down, so that the development of artificial intelligence technology is fully under control and does not get out of hand?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Why do the creators of leading technology companies developing ICT, Internet technologies, Industry 4.0, including those developing artificial intelligence technologies, etc., now call for the development of this technology to be periodically, deliberately slowed down, so that the development of artificial intelligence technology takes place fully under control and does not get out of hand?
Should the development of artificial intelligence be controlled? And if so, who should exercise this control? How should an institutional control system for the development of artificial intelligence applications be built?
How should a system of institutional control of the development of advanced artificial intelligence models and algorithms be built, so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee?
What do you think?
What is your opinion on the subject?
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The open letter[1] is wishful thinking at best. Let us sample some of the statements provided in the letter:
• "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
The notion of manageable risk differs from individual to individual, let alone reaching consensus across a population on what counts as a "manageable" risk.
• "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
This is the most naive statement of the letter. What can be accomplished in 6 months? We have not even reached consensus on a basic philosophical approach to ethics in the entire history of philosophy, much less on how to approach AI from an ethical standpoint. Stopping AI research for 6 months will not have a significant impact (there are even journals dedicated to AI ethics that have been tackling the issue for several years, and yet the letter pretends to solve it in 6 months!).
• "implement a set of shared safety protocols ...These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."
The concept of "beyond reasonable doubt" will eventually reduce to what is considered reasonable practice in the field, and this leads to weak safety protocols.
• "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
This statement could not be more ambiguous in its definition of terms, and the letter's references do not help.
The concept of stopping AI research should not be confused with regulating the commercial use of the technology; the letter aims at the wrong issue. Research in ML cannot be stopped, since anyone with a reasonable amount of computing power (which can be bought on Amazon anyway) can continue to work on the subject. But regulating business is easier, since the major players will have the greatest impact in slowing down the misuse of the technology. Also, by regulating business, more targeted restrictions on its use can be applied, rather than vague lists of research topics.
References
• asked a question related to Algorithms
Question
LMS algorithm for adaptive linear prediction (say, for example, 200).
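A minimal sketch of LMS one-step-ahead linear prediction in Python (the filter order and step size below are illustrative choices, not prescriptions):

```python
def lms_predict(signal, order=4, mu=0.05):
    """One-step-ahead adaptive linear prediction with the LMS algorithm.
    Predicts signal[n] from the previous `order` samples; weights are
    updated with the standard rule w <- w + mu * e * x.
    Returns the list of predictions and the final weight vector."""
    w = [0.0] * order
    preds = []
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]                    # newest sample first
        y_hat = sum(wi * xi for wi, xi in zip(w, x))     # prediction
        e = signal[n] - y_hat                            # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS weight update
        preds.append(y_hat)
    return preds, w
```

For stability, mu must be small relative to the inverse of the tap-input power; on a unit-amplitude sinusoid with 4 taps, mu = 0.05 converges comfortably and the prediction error shrinks as the weights adapt.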
• asked a question related to Algorithms
Question
Dear colleagues,
I was wondering whether there are any methods or software that I can use to analyse the muscle cross-sectional area in my H&E histology images?
I tried ImageJ thresholding, but unfortunately it does not work efficiently for me. Thus, I was wondering whether there are any currently established methods.
Thank you very much in advance.
I also want to calculate muscle fibre CSA in H&E-stained sections using ImageJ. Thresholding cannot be used; it can be done manually, but that is time-consuming. I would like to know how many fibres should be randomly selected in a sample field and how, how many sample fields are needed, and what the magnification should be.
• asked a question related to Algorithms
Question
I have seen the implementation of L-BFGS-B by authors in Fortran and ports in several languages. I am trying to implement the algorithm on my own.
I am having difficulty grasping a few steps. Is there a worked out example using L-BFGS or L-BFGS-B ? Something similar to (attached link) explaining the output of each step in an iteration for a simple problem.
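The core of L-BFGS is the two-loop recursion that applies the implicit inverse-Hessian approximation to the gradient. A hedged, minimal Python sketch (list-based, no line search or box constraints, so plain L-BFGS rather than L-BFGS-B):

```python
def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: returns -H_k * grad, where H_k is the
    implicit inverse-Hessian approximation built from the stored pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (most recent last)."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    q = list(grad)
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):   # first loop: newest to oldest
        rho = 1.0 / dot(y, s)
        alpha = rho * dot(s, q)
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
        alphas.append((alpha, rho, s, y))
    if s_list:                                         # initial scaling H_0 = gamma * I
        s, y = s_list[-1], y_list[-1]
        gamma = dot(s, y) / dot(y, y)
        q = [gamma * qi for qi in q]
    for alpha, rho, s, y in reversed(alphas):          # second loop: oldest to newest
        beta = rho * dot(y, q)
        q = [qi + (alpha - beta) * si for qi, si in zip(q, s)]
    return [-qi for qi in q]
```

A full iteration then reads: compute the direction with this recursion, run a Wolfe line search along it, take the step, append the new (s, y) pair, and drop the oldest pair once the memory limit m is exceeded. With empty memory the recursion reduces to steepest descent, which is a handy sanity check when debugging your own implementation.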
• asked a question related to Algorithms
Question
I am analysing data collected by a questionnaire survey, which consists of socio-demographic questions as well as Likert-scale questions on satisfaction with public transport. I am developing a predictive model to predict public perceptions of using public transport based on socio-demographic characteristics and satisfaction levels.
I could not find any related reference to cite. Therefore, I want to make sure that my study is going in the right direction.
Shahboz Sharifovich Qodirov Thanks for your suggestions.
• asked a question related to Algorithms
Question
Kindly suggest which routing algorithm is better to implement for finding the optimal route in wireless adhoc networks?
Performance criteria :end to end delay , packet delivery ratio, throughput
There is no specific answer to your question. To choose the best routing algorithm in an ad hoc network you must specify the type of application, the size of the network, and the mobility model.
The best-known routing protocols are:
1- AODV and DSR as reactive protocols.
2- OLSR and DSDV as proactive protocols.
3- ZRP and TORA as hybrid protocols.
I recommend reading and citing the related paper.
• asked a question related to Algorithms
Question
I want to understand the C5.0 algorithm for data classification. Does anyone have the steps for it, or the original paper in which this algorithm was presented?
Dear Nouran,
by Jansson 2016.
• asked a question related to Algorithms
Question
Hello scientific community
Have you noticed the following?
[I note that when a new algorithm is proposed, most researchers rush to improve it and apply it to the same and other problems. So I ask: why keep the original algorithm if it suffers from weaknesses, and why propose a new algorithm if an existing one already solves the same problems? If the new algorithm solves a previously unsolved problem, it is welcome; otherwise, why?]
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs), rather than the existing ones?
I think we need to organise the existing metaheuristic algorithms, recording the pros and cons of each one and the problems each one has solved.
Repeated algorithms should disappear, as should overly complex ones.
Derivative algorithms should also disappear.
We need to benchmark the MHs, much like a benchmark test suite.
Also, we need to determine the unsolved problems; if you would like to propose a novel algorithm, try to solve an unsolved problem, otherwise please stop.
Thanks; I await a reputable discussion.
The last few decades have seen the introduction of a large number of "novel" metaheuristics inspired by different natural and social phenomena. While metaphors have been useful inspirations, I believe this development has taken the field a step backwards, rather than forwards. When the metaphors are stripped away, are these algorithms different in their behaviour? Instead of more new methods, we need more critical evaluation of established methods to reveal their underlying mechanics.
• asked a question related to Algorithms
Question
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computers, biomedical, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as for some outlier that no one has thought of.
• asked a question related to Algorithms
Question
I am trying to find the best case time complexity for matrix multiplication.
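For the standard triple-loop (schoolbook) algorithm the operation count is fixed: every one of the n·p output entries needs m multiply-adds, so best, average, and worst case are all Θ(n·m·p), i.e. Θ(n³) for square matrices; the input values make no difference. Asymptotically faster algorithms (e.g. Strassen's, about O(n^2.81)) improve the exponent, and their cost is likewise input-independent. A small illustration:

```python
def matmul(A, B):
    """Schoolbook matrix multiplication: Θ(n*m*p) operations for an
    n×m by m×p product, on every input (best case == worst case)."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):          # i-k-j loop order is cache-friendlier
            a = A[i][k]
            for j in range(p):
                C[i][j] += a * B[k][j]
    return C
```

The loop bounds depend only on the matrix dimensions, which is why asking for a "best case" of this algorithm over inputs of a fixed size gives the same Θ(n³) answer as the worst case.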
• asked a question related to Algorithms
Question
It's going to be a huge shift for marketers. Tracking identity is tricky at the best of times, with online/offline and multiple channels of engagement; but when the current methods of targeting, measurement, and attribution get disrupted, it is going to be extremely difficult to get identity right to deliver exceptional customer experiences while also getting compliance right.
We have put together our framework, and initial results show promising measurement techniques, including advanced neo-classical fusion models (borrowed from the financial industry and from biochemical stochastic and deterministic frameworks), with Bayesian and state-space models applied to run the optimisations. Initial results are looking very good, and we are happy to share our wider thinking through this work with everyone.
Please suggest how would you be handling this environmental change and suggest methods to measure digital landscape going forward.
#datascience #analytics #machinelearning #artificialintelligence #reinforcementlearning #cookieless #measurementsolutions #digital #digitaltransfromation #algorithms #econometrics #MMM #AI #mediastrategy #marketinganalytics #retargeting #audiencetargeting #cmo
Here are a few ideas about how marketers can do this:
• Encourage site login by better authenticated experiences or other consumer-oriented rewards to increase the number of persistent IDs.
• Create a holistic customer view by combining customer and other owned first-party data (e.g., web data) and establishing a persistent cross-channel customer ID.
• Allow customer segmentation, targeting, and measurement across all organizations and platforms. Measurement and audience control can be supported by integrating martech and ad tech pipes wherever possible.
• asked a question related to Algorithms
Question
Apart from Ant Colony Optimization, Can anyone suggest any other Swarm based method for Edge Detection of Imagery?
• asked a question related to Algorithms
Question
Hi ALL,
I want to use a filter to extract only ground points from an airborne LiDAR point cloud. The point clouds are for urban areas only. Which filter, algorithm, or software is considered best for this purpose? Thanks
You may use the freely available ALDPAT software, which implements several ground filtering methods. You may also want to use the CloudCompare software, which incorporates the CSF algorithm, one of the most efficient ground filtering procedures proposed so far.
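For intuition about what such filters do, here is a deliberately crude grid-minimum filter in Python (a toy sketch with made-up parameter names; CSF and progressive TIN densification handle slopes and large buildings far better and should be preferred in practice):

```python
def grid_minimum_ground_filter(points, cell=2.0, dz=0.3):
    """Crude ground filter: bin (x, y, z) points into a horizontal grid,
    take the lowest elevation per cell, and keep only points within dz
    of that minimum. Roofs and vegetation, which sit well above the
    local minimum, are discarded."""
    mins = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in mins or z < mins[key]:
            mins[key] = z
    return [(x, y, z) for x, y, z in points
            if z <= mins[(int(x // cell), int(y // cell))] + dz]
```

The cell size and dz tolerance trade off terrain detail against over-filtering; real algorithms adapt these locally, which is why they cope with sloped urban terrain where a fixed grid minimum fails.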
• asked a question related to Algorithms
Question
This is related to homomorphic encryption. These three algorithms are used in additive and multiplicative homomorphism: RSA and ElGamal are multiplicative, and Paillier is additive. Now I want to know the time complexity of these algorithms.
I want the encryption and decryption time complexity of the Paillier cryptosystem.
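All three schemes are dominated by modular exponentiation: for a k-bit modulus, one exponentiation takes O(k) modular multiplications of k-bit numbers, i.e. roughly O(k³) bit operations with schoolbook multiplication (less with faster multiplication). Paillier works modulo n², so its operands are twice as wide, but the asymptotic picture is the same: encryption and decryption are each a constant number of modular exponentiations. A toy textbook Paillier in Python to make the operations concrete (g = n + 1 simplification; tiny primes, not secure; assumes gcd(lcm(p-1, q-1), n) = 1):

```python
import math
import random

def paillier_keygen(p, q):
    """Toy Paillier key generation from two small primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return n, (lam, mu)

def paillier_encrypt(m, n):
    """c = (n+1)^m * r^n mod n^2 — two modular exponentiations mod n^2."""
    nsq = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, nsq) * pow(r, n, nsq)) % nsq

def paillier_decrypt(c, n, lam, mu):
    """m = L(c^lam mod n^2) * mu mod n, with L(u) = (u - 1) // n —
    one modular exponentiation mod n^2."""
    nsq = n * n
    return (((pow(c, lam, nsq) - 1) // n) * mu) % n
```

The additive homomorphism is visible directly: multiplying two ciphertexts modulo n² decrypts to the sum of the plaintexts modulo n.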
• asked a question related to Algorithms
Question
Is there really a significant difference between the performance of the different meta-heuristics other than "ϵ"?!!! I mean, at the moment we have many different meta-heuristics and the set keeps expanding. Every once in a while you hear about a new meta-heuristic that outperforms the other methods, on a specific problem instance, by ϵ. Most of these algorithms share the same idea: randomness with memory, or selection, or whatever you call it, to learn from previous steps. You see in MIC, CEC, and SIGEVO many repetitions of new meta-heuristics. Does it make sense to be stuck here? Now the same repeats with hyper-heuristics and so on.
Apart from the foregoing discussion, all metaheuristic optimization approaches are alike on average in terms of their performance. The extensive research in this field shows that an algorithm may be the topmost choice for some classes of problems, but at the same time an inferior choice for other classes. On the other hand, since most real-world optimization problems have different needs and requirements that vary from industry to industry, there is no universal algorithm that can be applied to every circumstance; it therefore becomes a challenge to pick the right algorithm that sufficiently suits these essentials.
A discussion of this issue is in section two of the following reference:
• asked a question related to Algorithms
Question
Since the early 90s, metaheuristic algorithms have been continually improved in order to solve a wider class of optimization problems. To do so, different techniques, such as hybridized algorithms, have been introduced in the literature. I would appreciate it if someone could help me find some of the most important techniques used in these algorithms.
- Hybridization
- Orthogonal learning
- Algorithms with dynamic population
The following current-state-of-the-art paper has the answer for this question:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Or simply refer to the same paper at the following address:
• asked a question related to Algorithms
Question
May I ask what is the current state-of-the-art for incorporating FEM with Machine Learning algorithm? My main questions, to be specific, are:
1. Is it a sensible thing to do? Is there an actual need for this?
2. What would be the main challenges?
3. What have people tried in the past?
Very good questions!
Here is my response based on my experience in developing state-of-the-art FE schemes for solid mechanics, fluid mechanics and multiphysics.
1.) Is it a sensible thing to do? Is there an actual need for this?
No. I don't think so.
You are one of the few who have asked this question. ML for FEM, or for that matter PDEs altogether, is just part of the ongoing craze about ML/AI.
2.) What would be the main challenges?
ML is nothing but curve/surface fitting; nothing new mathematically. The difficulties are even worse than what we face with the least-squares finite element method, because of poorly conditioned matrices. No one talks about this because many don't even know such issues exist. Just GIGO (garbage in, garbage out).
The main challenge is getting access to a huge amount of computing power, especially GPUs. Generating the data is not an issue since it is done by running a lot of direct numerical simulations.
3.) What have people tried in the past?
Some tried and are still trying, but mostly nonsense if you know FEM. Gets published because it is the "trend".
No one considers the cost incurred in training the models in the discussion.
One more or less generates thousands of data sets to train a model that will subsequently be used for tens of simulations, wasting about 90% of the resources.
I have not seen anyone demonstrating the applicability of such ML models for changes in geometries (addition/removal of holes, fillets etc.), topologies (solid/solid contact and fracture), mesh (coarsening/refining, different element shapes), and constitutive models. Most probably due to obvious reasons.
• asked a question related to Algorithms
Question
There are PIDs, but usually only the proportional part of the PID algorithm is used.
There are also mapping systems, as used in diesel engines.
But making a multi-layer PID is difficult.
Map-based systems (used, for example, in turbines or diesel engines) need a lot of testing and usually work with new machines in controlled conditions.
It would be better to use an algorithm that adapts, slowly increasing or decreasing the control signal in order to obtain maximum performance.
The algorithm should also warn of deviations outside expected values, providing an efficient diagnosis of the system.
I need this kind of algorithm to control my simulations (to reduce the number of simulations), but also to control my Miranda and Fusion Reactors.
Perhaps some suitable algorithms are: neural networks, multilayer perceptrons (MLP) and radial basis function (RBF) networks, and also support vector regression (SVR).
I made an algorithm that theoretically reaches the final solution in the minimum time:
1. Set delta = (max - min)/2 for every parameter.
2. Vary one parameter from the centre value to +1/2 delta and to -1/2 delta, see which result is best, then re-centre on that value.
3. Do the same with all the parameters.
4. Divide delta by 2.
5. Go to 2 until delta reaches the minimum.
The problem is that varying one parameter that much on a REAL machine could break or stop it.
Perhaps a better solution is to start from the centre with delta = minimum delta, multiply it by 2 each time on the way up, then begin dividing by 2 again when a second condition (to be defined) is met.
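The steps above can be sketched as a coordinate search with a halving step (a hedged Python version; the function and bound names are mine):

```python
def coordinate_search(f, bounds, min_delta=1e-3):
    """Coordinate search as in the numbered steps above: start at the
    centre of each [lo, hi] range, probe each parameter at +/- delta/2,
    re-centre on the best of the three candidates, then halve delta
    until it falls below min_delta. `f` maps a parameter list to a cost
    to be minimised."""
    center = [(lo + hi) / 2 for lo, hi in bounds]
    delta = [(hi - lo) / 2 for lo, hi in bounds]
    while max(delta) > min_delta:
        for i in range(len(center)):           # step 2-3: probe each parameter
            candidates = [center[:], center[:], center[:]]
            candidates[1][i] += delta[i] / 2
            candidates[2][i] -= delta[i] / 2
            center = min(candidates, key=f)    # keep the best of the three
        delta = [d / 2 for d in delta]         # step 4: halve the step
    return center
```

On a smooth unimodal cost this homes in quickly, but as noted above the early ±delta/2 probes are large; on real hardware you would start from a safe operating point with a small delta and grow it cautiously instead.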
• asked a question related to Algorithms
Question
I would be grateful for suggestions to solve the following problem.
The task is to fit a mechanistically-motivated nonlinear mathematical model (4-6 parameters, depending on version of assumptions used in the model) to a relatively small and noisy data set (35 observations, some likely to be outliers) with a continuous numerical response variable. The model formula contains integrals that cannot be solved analytically, only numerically. My questions are:
1. What optimization algorithms (probably with stochasticity) would be useful in this case to estimate the parameters?
2. Are there reasonable options for the function to be optimized except sum of squared errors?
Dear;
You can solve the integrals numerically by hand or with software.
Regards
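As a hedged sketch of one workable pipeline (the model formula, parameter box, and sample sizes below are invented for illustration): evaluate the integral numerically inside the model, use an outlier-robust loss such as the sum of absolute errors instead of squared errors, and drive the fit with a stochastic search. In practice scipy.optimize.differential_evolution, or least_squares with loss='soft_l1', would replace the naive random search shown here.

```python
import math
import random

def model(x, a, b, steps=100):
    """Hypothetical model y = a * integral_0^x exp(-b t^2) dt, with the
    integral evaluated by the trapezoidal rule (no closed form)."""
    h = x / steps
    fs = [math.exp(-b * (i * h) ** 2) for i in range(steps + 1)]
    return a * h * (fs[0] / 2 + sum(fs[1:-1]) + fs[-1] / 2)

def fit_random_search(data, n_iter=2000, seed=0):
    """Minimise the sum of absolute errors (more outlier-robust than
    squared errors) by pure random search over a box of (a, b) values."""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(n_iter):
        a, b = rng.uniform(0.1, 5.0), rng.uniform(0.01, 5.0)
        loss = sum(abs(y - model(x, a, b)) for x, y in data)
        if loss < best_loss:
            best, best_loss = (a, b), loss
    return best, best_loss
```

With only 35 noisy observations and 4-6 parameters, multi-start or population-based optimisers guard against local minima, and a robust loss (absolute error, Huber) limits the influence of the suspected outliers; comparing fits under both losses is a cheap diagnostic.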
• asked a question related to Algorithms
Question
In a PSO algorithm, when solving a problem in which the sequence of elements of the representative solution changes, all elements change; and because the elements depend on each other, the algorithm does not converge.
How can we solve this problem and make the algorithm converge?
-----
The representative solution consists of two parts.
The first part contains a permutation of integers, encoded as continuous values
between 0 and 1,
which is decoded back to integers in the fitness function.
The second part contains continuous numbers between 0 and 1.
The meaning of the second part depends on the values of the first part.
----
There is no problem with duplicate entries in the permutation, because a fixer procedure is applied.
Dear;
With the increasing demand for solving larger-dimensional problems, efficient algorithms are necessary, and efforts have been put into increasing their efficiency. This paper presents a new approach to particle swarm optimization with cooperative coevolution.
Article :
A New Particle Swarm Optimizer with Cooperative Coevolution for Large Scale Optimization
Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2014.
Regards
• asked a question related to Algorithms
Question
For example, I have a South Carolina map comprising 5833 grid points, as shown in the picture below. How do I interpolate to get data for unsampled points that are not among the 5833 points but lie within the South Carolina region (red in the picture)? Which interpolation technique is best for a South Carolina region of 5833 grid points?
Dear Vishnu,
in which format is the data, which you would like to interpolate, available: NetCDF, ASCII-Text, Excel, ... ?
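Once the data are loaded, one simple option for scattered grid points is inverse-distance weighting (IDW); kriging or spline-based methods (e.g. scipy.interpolate.griddata with 'linear' or 'cubic') are often preferable for climate-style fields. A minimal IDW sketch with made-up sample points:

```python
import math

def idw(points, values, query, power=2, eps=1e-12):
    """Inverse-distance-weighted estimate at `query` from sampled (x, y) points."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 < eps:                        # query coincides with a sample point
            return v
        w = 1.0 / d2 ** (power / 2)         # weight ~ 1 / distance^power
        num += w * v
        den += w
    return num / den

# Toy usage: four samples of the plane z = x + y.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [0.0, 1.0, 1.0, 2.0]
z = idw(pts, vals, (0.5, 0.5))   # symmetric point: all weights equal, so z = mean
```

IDW is exact at the sample points and smooth elsewhere, but it cannot extrapolate trends; for 5833 regularly gridded points, bilinear or spline interpolation on the grid is likely both faster and more accurate.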
• asked a question related to Algorithms
Question
In an online website, some users may create multiple fake users to promote (like/comment) on their own comments/posts. For example, in Instagram to make their comment be seen at the top of the list of comments.
This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)
What are some general algorithms in unsupervised learning to detect these users/behaviors?
In my experience, I would suggest a supervised approach for this problem, using artificial neural networks. You can find different architectures inspired by human behaviour, for example feed-forward NNs, CNNs, RNNs, etc.
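For the unsupervised route the question actually asks about, one crude but illustrative signal is near-duplicate behaviour between accounts. The sketch below flags account pairs whose sets of liked posts have high Jaccard similarity; the data and the threshold are invented for illustration, and real systems combine many such signals with clustering or graph analysis:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two sets (|intersection| / |union|)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suspicious_pairs(likes_by_user, threshold=0.8):
    """Flag account pairs whose sets of liked posts are nearly identical."""
    flagged = []
    for u, v in combinations(sorted(likes_by_user), 2):
        if jaccard(likes_by_user[u], likes_by_user[v]) >= threshold:
            flagged.append((u, v))
    return flagged

# Hypothetical like data: sock1 and sock2 like almost exactly the same posts.
likes = {
    "alice": {1, 2, 3, 9},
    "bob":   {4, 5, 6},
    "sock1": {7, 8, 10, 11},
    "sock2": {7, 8, 10, 11, 12},
}
pairs = suspicious_pairs(likes)
```

The same idea generalizes: build per-account behaviour vectors (liked posts, comment timing, IP features) and apply density-based clustering or anomaly detectors such as isolation forests to find groups of accounts that behave implausibly alike.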
• asked a question related to Algorithms
Question
What are the trusted books and resources I can learn from?
I highly recommend Introduction to algorithms. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms. MIT press.
• asked a question related to Algorithms
Question
In an AEC configuration as shown in the Fig.1, let us consider u(n), the far-end speech signal and x(n), the near-end speech signal. The desired signal is d(n)=v(n)+x(n), where v(n) is the echo signal generated from the echo path impulse response. The purpose of an adaptive filter W is to find an echo estimate, y(n) which is then subtracted from the desired signal, d(n) to obtain e(n).
When I am implementing the APA adaptive algorithm for echo cancellation, I am observing leakage phenomenon which is explained as follows:
y(n) will contain a component proportional to x(n), that will be subtracted from the total desired signal. This phenomenon is in fact a leakage of the x(n) in y(n), through the error signal e(n); the result consists of an undesired attenuation of the near-end signal.
Because of this near-end leakage phenomenon, there is near-end signal suppression in the post-processing output.
I am handling the double-talk using robust algorithms without the need for DTD.
Could you please suggest how to avoid near-end signal leakage in the adaptive filter output y(n)?
To avoid this problem there are two methods: use a DTD, or modify your algorithm's updates by incorporating the statistics of the near-end signal to control the output, as many authors have done (e.g. variable step size).
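As a concrete starting point, here is a minimal normalized-LMS (NLMS) echo canceller on synthetic data; the echo path, signals, and step size are all invented for illustration (the question's APA setup differs only in using a block of past input vectors). A variable-step-size or robust variant would modify the weight-update line in the same place:

```python
import random

def nlms(u, d, L=8, mu=0.5, eps=1e-6):
    """Normalized LMS echo canceller: adapt w so that y approximates the echo in d."""
    w = [0.0] * L
    x = [0.0] * L                                   # recent far-end samples, x[0] newest
    errors = []
    for n in range(len(u)):
        x = [u[n]] + x[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x))    # echo estimate y(n)
        e = d[n] - y                                # error = near end + residual echo
        norm = eps + sum(xi * xi for xi in x)
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, x)]  # NLMS update
        errors.append(e)
    return w, errors

# Hypothetical echo path h and white far-end signal u; no near-end talk, no noise.
rng = random.Random(0)
h = [0.5, -0.3, 0.2]
u = [rng.uniform(-1, 1) for _ in range(2000)]
d = [sum(h[k] * u[n - k] for k in range(len(h)) if n - k >= 0) for n in range(len(u))]
w, errors = nlms(u, d)
```

During double talk, freezing or shrinking mu (based on an estimate of the near-end power in e) is exactly the mechanism that limits the leakage of x(n) into y(n).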
• asked a question related to Algorithms
Question
Can anyone suggest a data compression algorithm to compress and regenerate data from sensors (e.g. an accelerometer, which records time and acceleration) used to obtain structural vibration response? I have tried using PCA, but I am unable to regenerate my data. Kindly suggest a suitable method to go with PCA, or some other algorithm.
Saranya Bharathi I am wondering, which algorithm did you use?
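For reference, PCA regeneration is an explicit inverse transform: project onto the top-k components, then map the scores back and re-add the mean. A sketch on synthetic low-rank "sensor" data (all values invented):

```python
import numpy as np

def pca_compress(X, k):
    """Project X (samples x features) onto its top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    vals, V = np.linalg.eigh(Xc.T @ Xc / (len(X) - 1))   # covariance eigendecomposition
    V = V[:, np.argsort(vals)[::-1][:k]]                 # top-k components as columns
    return Xc @ V, V, mean                               # scores = compressed data

def pca_reconstruct(scores, V, mean):
    """Inverse transform: map scores back to the original space, re-add the mean."""
    return scores @ V.T + mean

# Synthetic nearly rank-2 data: 8 channels driven by two vibration modes plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
modes = np.stack([np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 3 * t)])
X = (rng.normal(size=(8, 2)) @ modes).T + rng.normal(scale=0.01, size=(500, 8))
scores, V, mean = pca_compress(X, k=2)
X_hat = pca_reconstruct(scores, V, mean)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)      # relative reconstruction error
```

The usual pitfall is forgetting to store the mean and the component matrix V alongside the scores; without them the data cannot be regenerated.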
• asked a question related to Algorithms
Question
What would be a suitable model for solving the regression problem? Does any hybrid algorithm or new model/framework exist to solve this problem with deep learning? How promising is deep learning for regression tasks?
It is very similar to the use of deep learning for the classification problem. Just you use different layers at the end of the network. e.g. in CNN instead of a softmax layer and cross-entropy loss, you can use a regression layer and MSE loss, etc.
It will be as useful as deep classification networks. But it depends on your data and problem. RNNs (especially LSTMs) are useful for time-series and sequential data such as speech, music, and other audio signals, EEG and ECG signals, stock market data, weather forecasting data, etc.
If you are using MATLAB, here are two examples (CNN and LSTM) for the implementation:
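Outside MATLAB, the same idea (linear output layer plus MSE loss instead of softmax plus cross-entropy) can be sketched with a tiny one-hidden-layer network in plain NumPy; the task, layer sizes, and learning rate here are arbitrary toy choices, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(3x) from 256 samples.
X = rng.uniform(-1, 1, size=(256, 1))
y = np.sin(3 * X)

# One hidden tanh layer, a linear output layer and MSE loss -- the regression
# analogue of a classifier ending in softmax + cross-entropy.
H, lr = 16, 0.1
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2                       # linear output: no squashing
    grad = 2 * (pred - y) / len(X)           # d(MSE)/d(pred)
    gW2, gb2 = h.T @ grad, grad.sum(0)
    gh = grad @ W2.T * (1 - h ** 2)          # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The structural point carries over to any framework: keep the body of the network unchanged and only swap the output layer and loss.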
• asked a question related to Algorithms
Question
I need to implement the epsilon-constraint method to solve multi-objective optimization problems, but I don't know how to choose each epsilon interval, nor when to terminate the algorithm, that is to say, the stopping criterion.
Hi my dear friend
Good day!
I think the below book will help you a lot to provide relevant codes.
Messac, A. (2015). Optimization in practice with MATLAB®: for engineering students and professionals. Cambridge University Press.
best regards.
Saeed Rezaeian-Marjani
• asked a question related to Algorithms
Question
It seems that the quadprog function of MATLAB, the (conventional) interior-point algorithm, is not fully exploiting the sparsity and structure of the sparse QP formulation based on my results.
In Model Predictive Control, the computational complexity should scale linearly with the prediction horizon N. However, results show that the complexity scales quadratically with the prediction horizon N.
What can be possible explanations?
I gave the wrong information: quadprog does not exploit the sparsity and structure of the sparse QP formulation at all, so the computational complexity scales cubically with N.
The barrier algorithm of Gurobi was also not fully exploiting the sparsity and structure of the sparse QP formulation, so there, too, the computational complexity scales cubically with N. Do you have any documentation about this? I could not find anything relevant on the Gurobi website.
Could you maybe also react to my last topic: why do the interior-point algorithms of Gurobi and quadprog give the same optimized values x and the same objective function value, while the first-order solver GPAD gives the same optimized values x but an objective function value that is a factor of 10 larger?
Regards
• asked a question related to Algorithms
Question
When doing machine learning, do we normally use several algorithms for comparison? For example, if the RMSE of SVM is 0.1, how do I come up with the conclusion that this model performed well? Just based on saying RMSE value is low, so the result is good? But if there is no comparison, how do I say it is low?
Or shall I include other algorithms e.g. random forest etc to do a comparison of the value? I intended to use only SVM regression, but now I am a bit stuck at the interpretation of the results.. Thank you in advance!
Sorry to differ: even if one's contribution is a unique method, it is still better to compare different regression models to complete the picture (recommending the best regression model to accompany the method cannot be done without comparison).
• asked a question related to Algorithms
Question
Dear all,
I have a watershed and wish to randomly split the watershed into different 'artificial' farms, and the farm area should follow an exponential distribution as found empirically from other studies. The 'artificial' farm could be rectangular or any other shape.
Is there any way to do this in GIS or other software? Any method achieved through shapefile or raster can be accepted.
Thank you!
Pan
An interesting sampling problem. I guess the bottom line is the brute force MCS with Delaunay triangulation. Adding some regularization to make it an optimization problem may work better. Sorry for not being very helpful.
• asked a question related to Algorithms
Question
Hello, everyone. I am a student of electrical engineering and my research field is related to the optimization of a power system.
I know that the algorithm we should choose depends on our problem, but there are lots of heuristic and metaheuristic algorithms to choose from. It also takes time to understand a specific algorithm, and only afterwards may we find that the chosen algorithm was not the best for the problem. So, given my problem, how can I choose the best algorithm?
Is there any simple solution available that can save my time as well?
Thank you for your precious time.
As most people have indicated, the best solution depends on the 'surface' you are optimising and the number of dimensions.
If you have a large number of dimensions and a smooth surface, then traditional methods that use derivatives (or approximations to derivatives) work well, such as the quasi-Newton method. If there are a small number of dimensions and the surface is fairly sensible but noisy, then the Nelder-Mead simplex works well. For higher dimensions with noise, but still fairly sensible (hill-like) surfaces, simulated annealing works. Surfaces that are discontinuous and misleading are best addressed with the more modern heuristic techniques such as evolutionary algorithms. If you are trying to find a Pareto surface, then use a multi-objective genetic algorithm.
So the key questions are: how many dimensions; is the surface reasonably smooth (reliable derivatives); and do you want a Pareto surface, or can you run multiple single-criterion optimisations? Another question is whether you need the true optimum or just a very good result. There are often good algorithms for approximating the best result, for example using a simplified objective function that can be evaluated much faster to get a good rough solution, which may then be the starting point for a high-fidelity solution.
Sorry if this indicates it is complex; it really does depend on the solution space. Do not forget traditional mathematical methods used in Operational Research as well. Good luck!
• asked a question related to Algorithms
Question
I want to execute the Apriori algorithm of association rule mining in MATLAB.
I work on the same topic, and the implementation I use works with small datasets, but with large databases like mushroom or BMS1 it takes a long time.
Who has a solution for that? @Shafagat Mahmudova
• asked a question related to Algorithms
Question
I'm trying to implement a fall detection algorithm written in C on a Zybo board.
I don't know how to start, even though I have already done the tutorials related to the Zynq-7000.
Thank you for any help.
V. Carletti, A. Greco, A. Saggese, and M. Vento, "A smartphone-based system for detecting falls using anomaly detection," in ICIAP 2017 Proceedings, 2017. [Online] Available: https://link.springer.com/chapter/10.1007/978-3-319-68548-9_45
V. Carletti, A. Greco, V. Vigilante, and M. Vento, "A wearable embedded system for detecting accidents while running," in VISAPP 2018 - Proceedings of the International Conference on Computer Vision Theory and Applications, 2018. [Online] Available: https://www.scitepress.org/Papers/2018/66128/66128.pdf
• asked a question related to Algorithms
Question
I am currently working on a binary classification of EEG recordings and I came across with CSP.
As far as I understand, CSP allows you to choose the best features by maximizing the variance between two classes, which is perfect for what I'm doing. Here follow the details and questions:
- I have N trials per subject, from which half belongs to class A and the other half to class B.
- Let's say I want to apply CSP to this subject trials. From what I understood, I should apply CSP to all my trials (please correct me if I'm wrong here). Do I arbitrarily choose which trial from class A to compare with one from class B? Is the order by which I do it, indifferent?
- After CSP I should get the projection matrix (commonly written as W), from which I can obtain the transformed signal and compute the variances (part of which will be my features). Why is the computed variance transformed with a log function in most papers?
Thank you very much
The projection matrix W is essentially the eigendecomposition of the covariance of the data matrix X. Alternatively, this can also be done using singular value decomposition (SVD), which is a more efficient way of handling high-dimensional data, as it avoids computing the large matrix X^T X. Apply SVD to the entire dataset and then plot the cumulative sum of the singular values against the number of singular values. This can help in selecting the proper number of SVs, accounting for, say, 95% of the variance in the data. A log scale shows relative changes, while a linear scale shows absolute changes.
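On the first question specifically: CSP is normally computed from the class-average covariance matrices, so no pairing of individual class-A and class-B trials is needed and their order does not matter. A minimal sketch (whitening of the composite covariance followed by diagonalization; the toy data, channel count, and filter selection are invented for illustration):

```python
import numpy as np

def csp(trials_a, trials_b, n_pairs=1):
    """CSP filters from class-average covariances.
    trials_*: lists of (n_channels, n_samples) arrays."""
    def avg_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)
    P = np.diag(d ** -0.5) @ U.T                  # whiten the composite covariance
    dd, B = np.linalg.eigh(P @ Ca @ P.T)          # diagonalize whitened Ca
    W = (B.T @ P)[np.argsort(dd)[::-1]]           # rows = filters, sorted for class A
    sel = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return W[sel]                                 # keep the extreme (discriminative) pairs

def log_var_features(W, trial):
    v = (W @ trial).var(axis=1)
    return np.log(v / v.sum())                    # log makes the features ~Gaussian

# Toy data: class A has extra variance on channel 0, class B on channel 1.
rng = np.random.default_rng(0)
ta = [np.diag([3.0, 1.0, 1.0]) @ rng.normal(size=(3, 200)) for _ in range(30)]
tb = [np.diag([1.0, 3.0, 1.0]) @ rng.normal(size=(3, 200)) for _ in range(30)]
W = csp(ta, tb)
fa = log_var_features(W, ta[0])
fb = log_var_features(W, tb[0])
```

The log in the last step is the common answer to the third question: variances are positively skewed, and taking the logarithm makes the feature distribution closer to Gaussian, which suits linear classifiers such as LDA.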
• asked a question related to Algorithms
Question
For lightweight encryption.
Respected Sir,
• asked a question related to Algorithms
Question
I am working on optimizing well placement in the condensate reservoir model using an algorithm. Any kind of code example will be appreciated.
• asked a question related to Algorithms
Question
Hello,
I am working with a convex hull in n-dimensions (n>3) and I am having problems generating points on the convex hull surface. Ideally, I would like the points to be uniformly distributed or almost uniformly distributed. I am mostly looking for something simple to understand and implement.
(I am using scipy.spatial.ConvexHull python library)
Any help would be greatly appreciated :)
edit: Thank you very much for the answers already given.:) I have reformulated the question hoping to remove any confusion.
Thanks,
Noemie
Dear Noemie,
I don't think that the question can really be answered without posing it in a more precise manner. First and foremost would be your expectations about the meaning of a uniform distribution of points on a complex hull...
However, if you are interested in convex hulls that have been generated from some process (and aren't easily relatable functions) then my thought would be to adapt one of the convex hull finding algorithms and take a series of random walks along the boundaries from some set of known initial points. These could be made to approximate whichever expected distances and variances you decide you are looking for.
An alternate approach would be to "walk" the boundary according to a n space grid again using one of the boundary finding algorithms and randomly select grid points. These could then be perturbed to correspond with your planned distance metric and distribution.
Good luck
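If the hull's simplicial facets are available (e.g. from scipy.spatial.ConvexHull, which the question already uses), one concrete notion of "uniform on the surface" can be realized directly: pick a facet with probability proportional to its (n-1)-dimensional measure, then draw uniform barycentric weights inside it. A sketch, with random toy points and scipy assumed available:

```python
import numpy as np
from scipy.spatial import ConvexHull

def sample_hull_surface(points, n_samples, rng=np.random.default_rng(0)):
    """Uniform samples on the surface of the convex hull of `points` in R^d."""
    hull = ConvexHull(points)
    facets = points[hull.simplices]              # (n_facets, d, d): facet vertex coords
    # (d-1)-measure of each simplicial facet via the Gram determinant; the common
    # factor 1/(d-1)! cancels when normalizing to probabilities.
    edges = facets[:, 1:, :] - facets[:, :1, :]
    areas = np.sqrt(np.maximum(np.linalg.det(edges @ edges.transpose(0, 2, 1)), 0))
    probs = areas / areas.sum()
    idx = rng.choice(len(facets), size=n_samples, p=probs)
    # Uniform barycentric weights inside each chosen simplex: Dirichlet(1, ..., 1).
    w = rng.dirichlet(np.ones(facets.shape[1]), size=n_samples)
    return np.einsum('nk,nkd->nd', w, facets[idx])

# 4-D example: sample 1000 points on the hull surface of 50 random points.
rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 4))
samples = sample_hull_surface(pts, 1000)
```

Every returned point is a convex combination of one facet's vertices, so it lies exactly on the hull boundary, and the area weighting makes the density uniform with respect to surface measure.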
• asked a question related to Algorithms
Question
When I compute the time complexity of ciphertext-policy attribute-based encryption (CP-ABE), I find it to be O(1) by tracing each step in the code, most of which are assignment operations. Is it possible that the time complexity of CP-ABE is O(1), or is there a problem? The code that I used is the following, where ITERS = 1.
That's a hardware issue and nothing else. Best, T.T.
• asked a question related to Algorithms
Question
In the ε-constraint method, one objective will be used as the objective function, and the remaining objectives will be used as constraints using the epsilon value as the bound. In this case:
- Do we need to apply penalty method to handle the constraint?
- How to select the best solution?
- How to get the final Pareto set?
You can perform a web search on "epsilon constraint method multi objective", which gives quite a few hits on Google Scholar, and see what others have done in the recent past. It is quite a popular tool.
You may look at the useful links below:
I hope it can help you!
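A minimal bi-objective sketch of the sweep may make the three bullets concrete: grid epsilon over the observed range of the constrained objective (no penalty method needed if the subproblem solver handles the constraint directly), stop when the bound becomes infeasible or the grid is exhausted, and obtain the final Pareto set by filtering dominated candidates. The toy problem and grid size below are arbitrary:

```python
def epsilon_constraint(solutions, f1, f2, n_eps=9):
    """Bi-objective epsilon-constraint sweep over a finite solution set."""
    v2 = [f2(s) for s in solutions]
    lo, hi = min(v2), max(v2)
    candidates = {}
    for i in range(n_eps + 1):
        eps = lo + (hi - lo) * i / n_eps            # epsilon grid over f2's range
        feasible = [s for s in solutions if f2(s) <= eps]
        if not feasible:                            # infeasible bound: skip/stop
            continue
        best = min(feasible, key=f1)                # minimize f1 s.t. f2 <= eps
        candidates[best] = (f1(best), f2(best))
    # Final Pareto set: drop dominated points among the collected candidates.
    pts = list(candidates.values())
    front = [p for p in pts
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]
    return sorted(front)

# Toy problem: minimize (x^2, (x-4)^2) over integer x in [0, 4].
front = epsilon_constraint(list(range(5)), lambda x: x * x, lambda x: (x - 4) ** 2)
```

Choosing the epsilon interval as (max - min)/n_eps over the constrained objective's range is the usual default; a finer grid gives a denser approximation of the front at proportionally higher cost.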
• asked a question related to Algorithms
Question
Dear colleagues ,
If you consider a "complete dense" multivariate polynomial, does there exist a Horner factorization scheme like the one for a classical polynomial?
(By "complete dense", I mean all the possible monomials up to a given global order, the number of monomials being given by the known formula with the combination C_r^(n+r), if I am correct.)
I hope you find the following article useful.
Best regards
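For what it is worth, one standard recursive scheme treats the polynomial as a univariate polynomial in the first variable whose coefficients are themselves polynomials in the remaining variables, and applies Horner at every level. A sketch (the exponent-tuple encoding is my own choice for the example):

```python
def horner_multi(coeffs, point):
    """Evaluate a multivariate polynomial by recursive Horner.
    coeffs: dict mapping exponent tuples (e1, ..., en) -> coefficient."""
    if not coeffs:
        return 0.0
    n = len(next(iter(coeffs)))
    if n == 0:
        return coeffs[()]                        # constant polynomial
    # Group terms by the exponent of the first variable.
    by_deg = {}
    for exps, c in coeffs.items():
        by_deg.setdefault(exps[0], {})[exps[1:]] = c
    x, result = point[0], 0.0
    for d in range(max(by_deg), -1, -1):         # Horner in the first variable
        result = result * x + horner_multi(by_deg.get(d, {}), point[1:])
    return result

# p(x, y) = 2x^2*y + 3x + y^2 + 1, evaluated at (2, 3): 24 + 6 + 9 + 1 = 40.
p = {(2, 1): 2, (1, 0): 3, (0, 2): 1, (0, 0): 1}
val = horner_multi(p, (2, 3))
```

Note that, unlike the univariate case, this factorization is not unique: the variable ordering changes the operation count, and finding the cheapest ordering is itself a nontrivial optimization problem.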
• asked a question related to Algorithms
Question
Intuitively, maximum likelihood inference on high-frequency data should be slow because of the large data set size. I was wondering if anyone has experience with slow inference; I could then devise optimization algorithms to speed up the inference.
I tried this with Yacine Ait-Sahalia's work on estimating diffusion models, using his code, which (unfortunately!) is pretty fast, even for large data sets. If anyone knows a large, slow, high-frequency financial econometric problem, do let me know.
For large samples exact maximum likelihood can be approached reasonably well by faster estimation methods. But I do not understand why you want slow methods. As far as I know, Ait Sahalia code is good. Why do you say "(Unfortunately!)" ?
• asked a question related to Algorithms
Question
I want to study RPL in mobile WSNs. I am using the NS-2.35 and WSNet simulators, and Cooja.
Can I find some source code for algorithms improving RPL?
RPL is an IPv6-based Routing Protocol for Low-Power and Lossy Networks (LLNs). LLNs are a class of network in which both the routers and their interconnects are constrained devices (i.e. limited in processing power, memory, and energy consumption). RPL routing is based on Destination-Oriented Directed Acyclic Graphs (DODAGs).
Step 1: Open Contiki OS in VMware Workstation and log in (user password: user).
Step 2: Open the terminal on the Contiki desktop and change to the right directory to run the Cooja simulator tool. In the terminal:
Go to the directory: cd /home/user/contiki/tools/cooja ----> press Enter
Give the command: ant run ----> press Enter
After successful execution of the above command, the make file builds automatically and the Contiki Cooja network simulator tool appears. It is a blue terminal window.
• asked a question related to Algorithms
Question
We have some research works related to Algorithm Design and Analysis. Most of the computer science journals focus the current trends such as Machine Learning, AI, Robotics, Block-Chain Technology, etc. Please, suggest me some journals that publish articles related to core algorithmic research.
There are several journal for algorithms. Some of them are:
Algorithmica
The Computer Journal
Journal of Discrete Algorithms
ACM Journal of Experimental Algorithmics
ACM Transactions on Algorithms
SIAM Journal on Computing
ACM Computing Surveys
Algorithms
Close related:
Theoretical Computer Science
Information Systems
Information Sciences
ACM Transactions on Information Systems
Information Retrieval
International Journal on Foundations of Computer Science
Related:
IEEE Transactions on Information Theory
Information and Computation
Information Retrieval
Knowledge and Information Systems
Information Processing Letters
ACM Computing Surveys
Information Processing and Management
best regards,
rapa
• asked a question related to Algorithms
Question
I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X that is ideally also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in the answer, that would be great.
I've found a description with algorithm implemented in R for you. I hope it helps: http://www.joyofdata.de/blog/testing-linear-separability-linear-programming-r-glpk/
Another nice description with implementation in python you can check as well: https://www.tarekatwan.com/index.php/2017/12/methods-for-testing-linear-separability-in-python/
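The LP formulation behind both of those write-ups can also be sketched directly: two finite point sets are strictly linearly separable iff the system w.x + b >= 1 on one class and w.x + b <= -1 on the other is feasible, which a zero-objective LP decides (assuming scipy's linprog is available):

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(pos, neg):
    """LP feasibility test: does a hyperplane w.x + b strictly separate the sets?"""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    n = pos.shape[1]
    # Variables: (w_1, ..., w_n, b).  Constraints in A_ub @ x <= b_ub form:
    #   -(w.x + b) <= -1  for positive points,   w.x + b <= -1  for negative points.
    A = np.vstack([np.hstack([-pos, -np.ones((len(pos), 1))]),
                   np.hstack([neg,  np.ones((len(neg), 1))])])
    b = -np.ones(len(pos) + len(neg))
    res = linprog(np.zeros(n + 1), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * (n + 1))
    return bool(res.success)

# Classic examples in {0, 1}^2: AND is separable, XOR is not.
sep_and = linearly_separable([(1, 1)], [(0, 0), (0, 1), (1, 0)])
sep_xor = linearly_separable([(0, 1), (1, 0)], [(0, 0), (1, 1)])
```

The margin-1 normalization is harmless: any strictly separating hyperplane can be rescaled to satisfy it, so feasibility of the LP is exactly equivalent to separability.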
• asked a question related to Algorithms
Question
What are the links in their definitions? How do you interconnect them? What are their similarities or differences? ...
I would be grateful if you could reply by referring to valid scientific literature sources.
All are approaches that exploit the computational intelligence paradigm. Machine learning refers to data analytics; evolutionary computation deals with optimization problems.
• asked a question related to Algorithms
Question
Is there any polynomial (reasonably efficient) reduction that makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?
Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties that are easy to prove for the binary case, and a reduction as asked above could help generalize those properties to an arbitrary alphabet.
The DP algorithm for LCS finds the solution in O(n*m) time, which is polynomial. The reduction you are looking for is the problem itself: if you solve the problem for an alphabet of arbitrary size, you have solved it for size 2.
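For reference, the alphabet-independent DP mentioned above, shown running unchanged on both strings and sequences over an arbitrary alphabet:

```python
def lcs_length(a, b):
    """Classic O(n*m) dynamic program; only equality of symbols is used,
    so the alphabet is irrelevant."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

The question, of course, asks for something stronger: a structure-preserving encoding of large-alphabet instances as bit-string instances, which the mere alphabet-independence of this DP does not provide.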
• asked a question related to Algorithms
Question
Dijkstra's algorithm performs the best sequentially on a single CPU core. Bellman-Ford implementations and variants running on the GPU outperform this sequential Dijkstra case, as well as parallel Delta-Stepping implementations on multicores, by several orders of magnitude for most graphs. However, there exist graphs (such as road-networks) that perform well only when Dijkstra's algorithm is used. Therefore, which implementation and algorithm should be used for generic cases?
Maleki et al. achieved improvements over delta-stepping:
Saeed Maleki, Donald Nguyen, Andrew Lenharth, María Garzarán, David Padua, and Keshav Pingali. 2016. DSMR: A Parallel Algorithm for Single-Source Shortest Path Problem. In Proc.\ 2016 International Conference on Supercomputing (ICS '16). ACM, New York, NY, USA, Article 32, DOI: https://doi.org/10.1145/2925426.2926287.
At the end of the Abstract they write:
"Our results show that DSMR is faster than the best previous algorithm, parallel [Delta]-Stepping, by up-to 7.38x".
Page 9, col 1, line -3:
"Machines: Two experimental machines were used for the evaluation: a shared-memory machine with 40 cores (4 10-core Intel(R) Xeon^TM E7-4860) and 128GB of memory; the distributed[-]memory machine Mira, a supercomputer at Argonne National Lab. Mira has 49152 nodes and each node has 16 cores (PowerPC A2) with 16GB of memory."
Best wishes,
Frank
• asked a question related to Algorithms
Question
Hi, I have a little prior experience with genetic algorithms.
Currently I am trying to use a GA for scheduling: I have some events and rooms, and rooms must be scheduled for these events; each event has different time requirements, and there are constraints on the availability of rooms.
But I want to know whether there are any alternatives to GA, since GA is a somewhat random and slow process. Are there other techniques that can replace it?
There are tons of algorithms. Here is a list:
DE (differential evolution)
PSO (particle swarm optimization)
ABC (artificial bee colony)
CMA-ES, etc.
• asked a question related to Algorithms
Question
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an interaction designer from Switzerland. I am currently working on my Bachelor's thesis, which deals with serendipity and algorithms: how can algorithms work less rationally and help us come across more serendipitous encounters?
As an experiment, I created a small website, which searches for Wikipedia entries that are associated with a certain term. The results are only slightly related and should offer serendipitous encounters.
Feel free to try it and comment your thoughts on it! I'm happy for any feedback.
Thank you
Michael
nice thinking
• asked a question related to Algorithms
Question
Intel's SGX extensions create isolated application enclaves, which disallow information leakage and unverified access to private data. However, SGX is now known to be broken as some works have leaked data on real hardware. What do such works exploit to break SGX's security invariants?
• asked a question related to Algorithms
Question
Dijkstra's algorithms performs well sequentially. However, applications require even better parallel performance because of real-time constraints. Implementations such as SprayList and Relaxed Queues allow parallelism on priority queue operations in Dijkstra's algorithm, with various performance vs accuracy tradeoffs. Which of these algorithms is the best in terms of raw parallel performance?
• asked a question related to Algorithms
Question
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of realistic benchmark problems. Could you please guide me to access such benchmark problems?
Thank you very much in advance.
Yes, I see. Transaction processing has also a constraint on response time. Optimization then takes more of its canonical form: Your goal "as fast as possible" (this refers to network traversal or RTT) becomes the objective, subject to constraints on benchmark performance which are typically transaction response time and an acceptable range of resource utilization, including link utilization. Actual benchmarks known to me that accomplish such optimization are company-proprietary (I have developed some but under non-disclosure contract). I do not know of very similar standard benchmarks but do have a look at TPC to see how close or how well a TPC standard benchmark would fit your application. I look forward to seeing other respondents who might know actual public-domain sample algorithms.
• asked a question related to Algorithms
Question
I need The Goldstein 2D branch cut algorithm in MATLAB. Does any one have a working version ?  I need to compare it with another algorithm.
Hi,
There are other algorithms to unwrap phase.
• asked a question related to Algorithms
Question
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an interaction designer from Switzerland. I am currently working on my Bachelor's thesis, which deals with serendipity and algorithms: how can algorithms work less rationally and help us come across more serendipitous encounters?
I was wondering whether any of you are familiar with some sort of irrational algorithm. Does this exist? Let me know if you know something in this field, or what you think about it; anything helps!
Thank you
Michael
Dear Mi Sc ,
Simply put, serendipity is when the connection between things leads to something positive. So if serendipity is simply a series of positive connections that can be engineered, then serendipity can be calculated. In fact, I would go as far as saying that serendipity is an algorithm.
Regards,
Shafagat
• asked a question related to Algorithms
Question
Synchronization and memory costs are becoming humongous bottlenecks in today's architectures. However, algorithm complexities assume these operations as constant, which are done in O(1) time. What are your opinions in this regard? Are these good assumptions in today's world? Which algorithm complexity models assume higher costs for synchronization and memory operations?
Just adding to what was previously said: in special-purpose computers (embedded systems) it is possible to have hardware-based operations that are not necessarily O(1) (or of any other particular complexity order, for that matter), unlike their general-purpose counterparts. Such is the case in certain devices designed for cryptology applications.
• asked a question related to Algorithms
Question
Current parallel BFS algorithms are known to have reduced time complexity. However, such cases do not take into account synchronization costs which increase exponentially with the core count. Such synchronization costs stem from communication costs due to data movement between cores, and coherence traffic if using a cache coherent multicore. What is the best parallel BFS algorithm available in this case?
• asked a question related to Algorithms
Question
Graph algorithms such as BFS and SSSP (Bellman-Ford or Dijkstra's algorithm) generally exhibit a lack of locality. A vertex at the start of the graph may want to update an edge that exists in a farther part of the graph. This is a problem in graphs whose memory requirements far exceed those available in the machine's DRAM. How must the graph be streamed into the machine in this case? What are the consequences for a parallel multicore in such cases where access latency and core utilization are of utmost importance?
You could combine clusters of vertices into super-vertices, find a route through them, and delete all vertices of the original graph that are not contained in a visited super-vertex. Then you proceed with a smaller graph and smaller super-vertices.
Regards,
Joachim
• asked a question related to Algorithms
Question
Or is it just an effective name for adaptive, self-learning programmed algorithms?
It is a good question. It seems the "intelligence" here comes from the human who created the artificial element and then lets it control the rest of his life.
• asked a question related to Algorithms
Question
In previous versions of OpenCV, there was an option to extract a specific number of keypoints as desired, like:
kp, desc = cv2.SIFT(150).detectAndCompute(gray_img, None)
But in OpenCV 3.1, SIFT and other "non-free" algorithms were moved to xfeatures2d, so the function gives an error. Kindly tell me how I can set a limit on the number of keypoints extracted using OpenCV 3.1. Thanks!
n_kp = 5
sift = cv2.xfeatures2d.SIFT_create(nfeatures=n_kp)
kp, desc = sift.detectAndCompute(gray_img, None)
• asked a question related to Algorithms
Question
I am looking for a public dataset for e-learning that I can use for testing performance and accuracy of Reccomender Systems algorithms. Anyone with an idea where I can find a public dataset?
Coursera MOOC's
• asked a question related to Algorithms
Question
I have been trying to implement BSA in Python and looks like the algorithm is pretty confusing. Has anyone implemented this algorithm? Any language.
Thanks.
Regards,
Akshay.
Update:
I have implemented the BSA algorithm here -> https://github.com/akshaybabloo/Spikes. If anyone needs it, please feel free to fork it.
Hi, the code is very resourceful. Can you also please upload the signal reconstruction part (spike to analog signal)?
• asked a question related to Algorithms
Question
Some workloads or even inputs perform well on GPUs, while others perform well on multicores. How do we decide which machine to buy for a generic problem base for optimal performance? Cost is NOT taken as a factor here.
Besides cost, there are many factors you have to consider.
First, you are asking which hardware to use for a given algorithm or implementation, which I think is not the right question, because a parallel algorithm is developed with the hardware taken into consideration.
I'm not going to take the same approach if my solution is for a cluster using MPI, a multicore processor using OpenMP, or a manycore processor using CUDA.
So, before deciding the hardware and the algorithm you have to analyze the problem (you may be interested in looking for Foster's methodology). How can it be decomposed (many independent tasks, few coarse grain tasks, etc)? Is it regular (in its memory access pattern, in the operations done on data)? What's the size of the data (can it be fitted in a GPU memory or in the main memory)? Is it memory bound or computation intensive?
After this process, you can take the decision about the hardware and after that you can start developing the program.
Finally, when you have a functional program, you should start with a performance tuning process for maximizing the performance indexes of your interest (speedup, efficiency, power consumption, throughput, etc.).
Best!
• asked a question related to Algorithms
Question