Science topic

Algorithms - Science topic

Explore the latest questions and answers in Algorithms, and find Algorithms experts.
Questions related to Algorithms
  • asked a question related to Algorithms
Question
6 answers
Hi. I am looking for a few articles/research papers on currently popular machine learning algorithms, mostly survey/review papers published between 2016 and 2018.
Thanks in advance.
  • asked a question related to Algorithms
Question
9 answers
I have seen the authors' implementation of L-BFGS-B in Fortran and ports in several languages. I am trying to implement the algorithm on my own.
I am having difficulty grasping a few steps. Is there a worked-out example using L-BFGS or L-BFGS-B, something similar to (attached link), explaining the output of each step in an iteration for a simple problem?
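Before implementing from scratch, it can help to trace the iterations of an existing implementation. A small sketch using SciPy's wrapper of the reference Fortran code (the Rosenbrock function stands in for your problem; the `callback` records each iterate):

```python
# Sketch: tracing L-BFGS-B iterations via SciPy's wrapper around the original
# Fortran code (not a from-scratch implementation of the algorithm).
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

iterates = []

def record(xk):
    # Called once per iteration with the current point.
    iterates.append((xk.copy(), rosen(xk)))

x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B",
               bounds=[(-2, 2), (-2, 2)], callback=record)

for k, (xk, fk) in enumerate(iterates, 1):
    print(f"iter {k:2d}: x = {xk}, f = {fk:.3e}")
print("converged to", res.x)
```

Comparing these per-iteration values against your own implementation on the same starting point is a practical way to find where the two diverge.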
  • asked a question related to Algorithms
Question
9 answers
I am analysing data collected through a questionnaire survey, which consists of socio-demographic questions as well as Likert-scale questions about satisfaction with public transport. I am developing a predictive model of people's intention to use public transport based on their socio-demographic profile and satisfaction level.
I could not find any related reference to cite, so I want to make sure my study is headed in the right direction.
Relevant answer
Answer
Shahboz Sharifovich Qodirov Thanks for your suggestions.
  • asked a question related to Algorithms
Question
13 answers
Kindly suggest which routing algorithm is better to implement for finding the optimal route in wireless ad hoc networks.
Performance criteria: end-to-end delay, packet delivery ratio, throughput
Relevant answer
Answer
There is no single answer to your question. To choose the best routing algorithm in an ad hoc network you must specify the type of application, the size of the network, and the mobility model.
The best-known routing protocols are:
1- AODV and DSR as reactive protocols.
2- OLSR and DSDV as proactive protocols.
3- ZRP and TORA as hybrid protocols.
I recommend reading and citing the related paper.
  • asked a question related to Algorithms
Question
3 answers
I want to understand the C5.0 algorithm for data classification. Does anyone have the steps for it, or the original paper in which this algorithm was presented?
Relevant answer
  • asked a question related to Algorithms
Question
12 answers
Hello scientific community
Have you noticed the following?
[I note that whenever a new algorithm is proposed, most researchers rush to improve it and apply it to the same and other problems. So why keep the original algorithm if it suffers from weaknesses, and why propose a new algorithm if an existing one already solves the same problems? If a new algorithm solves a previously unsolved problem, it is welcome; otherwise, why?]
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) rather than the existing ones?
I think we need to organize the existing metaheuristic algorithms and document the pros and cons of each one, along with the problems each has solved.
Redundant algorithms should disappear, and so should overly complex ones.
Derivative algorithms should disappear as well.
We need to benchmark the MHs, similar to a benchmark test suite.
We also need to identify the unsolved problems; if you would like to propose a novel algorithm, please try to solve an unsolved problem, otherwise please stop.
Thanks, and I look forward to a reputable discussion.
Relevant answer
Answer
The last few decades have seen the introduction of a large number of "novel" metaheuristics inspired by different natural and social phenomena. While metaphors have been useful inspirations, I believe this development has taken the field a step backwards, rather than forwards. When the metaphors are stripped away, are these algorithms different in their behaviour? Instead of more new methods, we need more critical evaluation of established methods to reveal their underlying mechanics.
  • asked a question related to Algorithms
Question
5 answers
There is an idea to design a new algorithm to improve the results of software operations in the fields of communications, computers, biomedicine, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
Relevant answer
Answer
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as on some outlier that no one has thought of.
  • asked a question related to Algorithms
Question
14 answers
Title: The Religion-type of the future vs. the human evolution via science, technology, informatics, AI-Algorithm.
Gentle RG-readers,
according to the (possible) technological and informatic evolution of Homo sapiens from 2000 to 2100,
which religion-type shows the best resistance/resilience?
What will be the main religion-type in the future?
--One God.
--Multiple and diverse gods.
--Absence of transcendence.
In this link (chapter, p. 40) there is a quasi-fantasy scenario ...
Relevant answer
Answer
Thank you Professor Fatema Miah for your "deep" write-up about Science and Faith. I hope that this attempt (chapter, p. 40 of the linked book) is interesting.
Have a nice week and stay safe. ---sv--
  • asked a question related to Algorithms
Question
20 answers
I am trying to find the best case time complexity for matrix multiplication.
Relevant answer
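For the standard triple-loop algorithm the question has a direct answer: it performs exactly n³ scalar multiplications whatever the input, so its best case equals its worst case, Θ(n³); asymptotically faster algorithms (e.g. Strassen's, ~n^2.81) lower the exponent rather than creating a best/worst-case gap. A pure-Python sketch with an operation counter:

```python
# Sketch: the classic triple-loop matrix product does n*n*n scalar
# multiplications for every input, so best case == worst case == Theta(n^3).
def matmul_counted(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

identity = [[1, 0], [0, 1]]
C, mults = matmul_counted(identity, identity)
print(C, mults)   # even for the identity matrix: n^3 = 8 multiplications
```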
  • asked a question related to Algorithms
Question
2 answers
It's going to be a huge shift for marketers. Tracking identity is tricky at the best of times, with online/offline and multiple channels of engagement; but when the current methods of targeting, measurement, and attribution get disrupted, it is going to be extremely difficult to get identity right, deliver exceptional customer experiences, and stay compliant.
We have put together our framework, and initial results show promising measurement techniques, including advanced neo-classical fusion models (borrowed from the financial industry and from biochemical stochastic and deterministic frameworks), with applied Bayesian and state-space models to run the optimisations. Initial results are looking very good, and we are happy to share our wider thinking through this work with everyone.
Link to our framework:
Please suggest how would you be handling this environmental change and suggest methods to measure digital landscape going forward.
#datascience #analytics #machinelearning #artificialintelligence #reinforcementlearning #cookieless #measurementsolutions #digital #digitaltransfromation #algorithms #econometrics #MMM #AI #mediastrategy #marketinganalytics #retargeting #audiencetargeting #cmo
Relevant answer
Answer
Here are a few ideas about how marketers can do this:
• Encourage site login by better authenticated experiences or other consumer-oriented rewards to increase the number of persistent IDs.
• Create a holistic customer view by combining customer and other owned first-party data (e.g., web data) and establishing a persistent cross-channel customer ID.
• Allow customer segmentation, targeting, and measurement across all organizations and platforms. Measurement and audience control can be supported by integrating martech and ad tech pipes wherever possible.
  • asked a question related to Algorithms
Question
7 answers
Apart from Ant Colony Optimization, Can anyone suggest any other Swarm based method for Edge Detection of Imagery?
  • asked a question related to Algorithms
Question
7 answers
Hi ALL,
I want to use a filter to extract only ground points from an airborne LiDAR point cloud. The point clouds are for urban areas only. Which filter, algorithm, or software is considered the best for this purpose? Thanks
Relevant answer
Answer
You may use the freely available ALDPAT software, which implements several ground filtering methods. You may also want to use the CloudCompare software, which incorporates the CSF algorithm, one of the most efficient ground filtering procedures proposed so far.
  • asked a question related to Algorithms
Question
8 answers
This is related to homomorphic encryption. These three algorithms are used in additive and multiplicative homomorphism: RSA and ElGamal are multiplicative and Paillier is additive. Now I want to know the time complexity of these algorithms.
Relevant answer
Answer
I want the encryption and decryption time complexity for the Paillier cryptosystem.
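As a rough complexity sketch: all three schemes are dominated by modular exponentiations, so encryption/decryption cost is roughly O(k³) bit operations for a k-bit modulus with schoolbook arithmetic (less with fast multiplication); Paillier works modulo n², which makes its constants larger. A toy Paillier round-trip, with deliberately insecure throwaway parameters, shows where those exponentiations occur:

```python
# Toy Paillier cryptosystem (insecure parameters, illustration only).
# Cost is dominated by the pow(..., ..., n2) modular exponentiations:
# roughly O(k^3) bit operations for a k-bit modulus with schoolbook arithmetic.
import math, random

p, q = 17, 19                      # real deployments use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael function of n
g = n + 1                          # standard simple choice of generator
mu = pow(lam, -1, n)               # valid shortcut because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return L * mu % n

c1, c2 = encrypt(42), encrypt(101)
print(decrypt(c1 * c2 % n2))       # 143: multiplying ciphertexts adds plaintexts
```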
  • asked a question related to Algorithms
Question
22 answers
Is there really a significant difference between the performance of the different meta-heuristics, other than some "ϵ"? At the moment we have many different meta-heuristics, and the set keeps expanding. Every once in a while you hear about a new meta-heuristic that outperforms the other methods, on a specific problem instance, by ϵ. Most of these algorithms share the same idea: randomness combined with memory, selection, or some other mechanism to learn from previous steps. At MIC, CEC, and SIGEVO you see many repetitions of "new" meta-heuristics. Does it make sense to stay stuck here? Now the same thing is repeating with hyper-heuristics and so on.
Relevant answer
Answer
Apart from the foregoing discussion, all metaheuristic optimization approaches are alike on average in terms of their performance. Extensive research in this field shows that an algorithm may be the top choice for some classes of problems but, at the same time, an inferior selection for other types of problems. On the other hand, since most real-world optimization problems have different needs and requirements that vary from industry to industry, there is no universal algorithm that can be applied to every circumstance, and it therefore becomes a challenge to pick the right algorithm that sufficiently suits these essentials.
A discussion of this issue is in section two of the following reference:
  • asked a question related to Algorithms
Question
22 answers
Since the early 90's, metaheuristic algorithms have been continually improved in order to solve a wider class of optimization problems. To do so, different techniques, such as hybridized algorithms, have been introduced in the literature. I would appreciate it if someone could help me find some of the most important techniques used in these algorithms.
- Hybridization
- Orthogonal learning
- Algorithms with dynamic population
Relevant answer
Answer
The following current-state-of-the-art paper has the answer for this question:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Or simply refer to the same paper at the following address:
  • asked a question related to Algorithms
Question
25 answers
May I ask what is the current state-of-the-art for incorporating FEM with Machine Learning algorithm? My main questions, to be specific, are:
  1. Is it a sensible thing to do? Is there an actual need for this? 
  2. What would be the main challenges? 
  3. What have people tried in the past? 
Relevant answer
Answer
Very good questions!
Here is my response based on my experience in developing state-of-the-art FE schemes for solid mechanics, fluid mechanics and multiphysics.
1.) Is it a sensible thing to do? Is there an actual need for this?
No. I don't think so.
You are one of the few who asked this question. ML for FEM, or for that matter PDEs altogether, is just part of the ongoing craze about ML/AI.
2.) What would be the main challenges?
ML is nothing but curve/surface fitting. Nothing new mathematically. Difficulties are even worse than what we face with the least-squares finite element method because of poorly conditioned matrices. No one talks about this because many don't even know such issues exist. Just GIGO.
The main challenge is getting access to a huge amount of computing power, especially GPUs. Generating the data is not an issue since it is done by running a lot of direct numerical simulations.
3.) What have people tried in the past?
Some tried and are still trying, but mostly nonsense if you know FEM. Gets published because it is the "trend".
No one considers the cost incurred in training the models in the discussion.
More or less, one generates thousands of data sets to train a model that will subsequently be used for tens of simulations, wasting about 90% of the resources.
I have not seen anyone demonstrating the applicability of such ML models for changes in geometries (addition/removal of holes, fillets etc.), topologies (solid/solid contact and fracture), mesh (coarsening/refining, different element shapes), and constitutive models. Most probably due to obvious reasons.
  • asked a question related to Algorithms
Question
4 answers
There are PID controllers, but usually only the proportional part of the PID algorithm is used.
There are also mapping systems, as used in diesel engines.
But building a multi-layer PID is difficult.
Map-based systems (used, for example, in turbines or diesel engines) need a lot of testing and usually work with new machines under controlled conditions.
It would be better to use an algorithm that adapts, slowly increasing or decreasing the control signal in order to obtain maximum performance.
The algorithm should also flag deviations from expected values to warn about problems, providing an efficient diagnosis of the system.
I need this kind of algorithm to control my simulations (to reduce the number of simulations) but also to control my Miranda and Fusion Reactors.
Perhaps suitable candidates are: neural networks, multilayer perceptrons (MLP) and radial basis function (RBF) networks, and also the newer support vector regression (SVR).
Relevant answer
Answer
I made an algorithm that theoretically reaches the final solution in minimal time:
1. Set delta = (max - min)/2 for every parameter.
2. Vary one parameter from the centre value to +1/2 delta and -1/2 delta, see which result is best, then re-centre on that value.
3. Do the same for all the other parameters.
4. Divide delta by 2.
5. Go to 2 until delta reaches a minimum.
The problem is that varying one parameter this much on a REAL machine could break or stop it.
Perhaps a better solution is to start from the centre with delta = minimum delta, multiply it by 2 on each step going up, and then begin dividing by 2 again when a second condition (to be defined) is met.
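The steps above can be sketched in code (pure Python; the two-parameter quadratic objective is a made-up stand-in). Note that this is a coordinate search, so it can stall on strongly coupled parameters:

```python
# Sketch of the interval-halving coordinate search described above:
# probe center +/- delta/2 for each parameter in turn, re-center on the
# best of the three, then halve delta until it reaches the tolerance.
def coordinate_search(f, bounds, tol=1e-4):
    center = [(lo + hi) / 2 for lo, hi in bounds]
    deltas = [(hi - lo) / 2 for lo, hi in bounds]
    while max(deltas) > tol:
        for i, d in enumerate(deltas):
            trials = [center[i] - d / 2, center[i], center[i] + d / 2]
            center[i] = min(trials, key=lambda v: f(center[:i] + [v] + center[i + 1:]))
        deltas = [d / 2 for d in deltas]
    return center

f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2   # hypothetical objective
best = coordinate_search(f, [(-10, 10), (-10, 10)])
print(best)
```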
  • asked a question related to Algorithms
Question
14 answers
I would be grateful for suggestions to solve the following problem.
The task is to fit a mechanistically-motivated nonlinear mathematical model (4-6 parameters, depending on version of assumptions used in the model) to a relatively small and noisy data set (35 observations, some likely to be outliers) with a continuous numerical response variable. The model formula contains integrals that cannot be solved analytically, only numerically. My questions are:
1. What optimization algorithms (probably with stochasticity) would be useful in this case to estimate the parameters?
2. Are there reasonable options for the function to be optimized except sum of squared errors?
Relevant answer
Answer
Dear;
You can solve the integrals numerically by hand or with software.
Regards
  • asked a question related to Algorithms
Question
9 answers
Trial division: to test if n is prime, one can check for every k ≤ sqrt(n) whether k divides n. If no divisor is found, then n is prime.
Alternatively, after checking 2 and 3, one can test only candidates of the form 6k ± 1.
Relevant answer
Primality test and easy factorization
This is an algorithm that tests whether a number is prime and, if it is not prime, returns the biggest factor of that number.
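A minimal sketch of that idea, combining the 6k ± 1 stepping from the question with a "largest proper divisor" return for composites (function names are mine):

```python
# Trial division with 6k +/- 1 stepping: every prime > 3 has the form 6k +/- 1,
# so after checking 2 and 3 we only probe those residues up to sqrt(n).
def smallest_factor(n):
    # Assumes n >= 2.
    if n % 2 == 0:
        return 2
    if n % 3 == 0:
        return 3
    k = 5
    while k * k <= n:
        if n % k == 0:
            return k
        if n % (k + 2) == 0:
            return k + 2
        k += 6
    return n            # no divisor found: n is prime

def classify(n):
    """Return (True, None) if n is prime, else (False, largest proper divisor)."""
    p = smallest_factor(n)
    return (True, None) if p == n else (False, n // p)

print(classify(97))     # (True, None)
print(classify(91))     # (False, 13), since 91 = 7 * 13
```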
  • asked a question related to Algorithms
Question
6 answers
In a PSO algorithm for solving a problem, changing the sequence of elements of the representative solution changes all the elements, and because the elements depend on each other, the algorithm does not converge to an optimized answer.
How can we solve this problem and make the algorithm converge?
-----
The representative solution includes two parts.
The first part includes a permutation of integers, coded as continuous values
between 0 and 1,
which are decoded back to integers in the fitness function.
The second part includes continuous numbers between 0 and 1.
The direction of the second part depends on the values of the first part.
----
There is no problem with duplicate answers in the permutation, because a fixer procedure is applied.
Relevant answer
Answer
Dear;
With the increasing demand for solving larger-dimensional problems, efficient algorithms are necessary, and efforts have been put towards increasing their efficiency. This paper presents a new approach to particle swarm optimization with cooperative coevolution.
Article :
A New Particle Swarm Optimizer with Cooperative Coevolution for Large Scale Optimization
Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2014.
Regards
  • asked a question related to Algorithms
Question
4 answers
For example, I have a South Carolina map comprising 5833 grid points, as shown in the picture below. How do I interpolate to get data for unsampled points which are not among the 5833 points but are within the South Carolina region (red in the picture)? Which interpolation technique is best for a South Carolina region of 5833 grid points?
Relevant answer
Answer
Dear Vishnu,
in which format is the data, which you would like to interpolate, available: NetCDF, ASCII-Text, Excel, ... ?
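Whatever the file format, a simple baseline for scattered grid points is inverse-distance weighting (IDW); kriging or spline methods are often better for smooth climate-style fields, so treat this pure-Python sketch (with made-up coordinates and values) only as a starting point:

```python
# Inverse-distance-weighting (IDW) interpolation sketch with hypothetical
# (lon, lat, value) samples; the power p controls how local the estimate is.
def idw(samples, x, y, p=2):
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return sz              # query coincides with a sample point
        w = 1.0 / d2 ** (p / 2)
        num += w * sz
        den += w
    return num / den

samples = [(-81.0, 34.0, 20.0), (-80.0, 34.0, 24.0), (-80.5, 33.5, 22.0)]
print(idw(samples, -80.5, 34.0))   # estimate at an unsampled point
```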
  • asked a question related to Algorithms
Question
13 answers
In an online website, some users may create multiple fake users to promote (like/comment) on their own comments/posts. For example, in Instagram to make their comment be seen at the top of the list of comments.
This action is called Sockpuppetry. https://en.wikipedia.org/wiki/Sockpuppet_(Internet)
What are some general algorithms in unsupervised learning to detect these users/behaviors?
Relevant answer
Answer
In my experience, I suggest using supervised learning for this problem, e.g. artificial neural networks; there are different architectures inspired by human behaviour, for example NN, CNN, RNN, etc.
I hope that is clear for you. @ Issa Annamoradnejad
  • asked a question related to Algorithms
Question
8 answers
I want to learn more about time complexity and the big-O notation of algorithms.
What are trusted books and resources I can learn from?
Relevant answer
Answer
I highly recommend Introduction to algorithms. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms. MIT press.
  • asked a question related to Algorithms
Question
1 answer
In an AEC configuration as shown in the Fig.1, let us consider u(n), the far-end speech signal and x(n), the near-end speech signal. The desired signal is d(n)=v(n)+x(n), where v(n) is the echo signal generated from the echo path impulse response. The purpose of an adaptive filter W is to find an echo estimate, y(n) which is then subtracted from the desired signal, d(n) to obtain e(n).
When I am implementing the APA adaptive algorithm for echo cancellation, I am observing leakage phenomenon which is explained as follows:
y(n) will contain a component proportional to x(n), that will be subtracted from the total desired signal. This phenomenon is in fact a leakage of the x(n) in y(n), through the error signal e(n); the result consists of an undesired attenuation of the near-end signal.
Because of this near-end leakage phenomenon, there is near-end signal suppression in the post-processing output.
I am handling the double-talk using robust algorithms without the need for DTD.
Could you kindly suggest how to avoid near-end signal leakage in the adaptive filter output y(n)?
Relevant answer
Answer
To avoid this problem, there are two methods: using a DTD, or modifying your algorithm's updates by incorporating the statistics of the near-end signal to control the output, as done by many authors (such as variable step size).
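For background on the update being modified, here is a minimal NLMS sketch identifying a hypothetical 4-tap echo path from far-end noise alone (no near-end signal, so no leakage occurs here); variable-step-size schemes replace the constant mu with a per-sample value driven by near-end statistics:

```python
# Minimal NLMS sketch: identify a hypothetical 4-tap echo path h from a
# far-end white-noise signal u(n). Variable-step-size variants would make
# mu a function of near-end statistics instead of a constant.
import random

random.seed(0)
h = [0.5, -0.3, 0.2, 0.1]              # hypothetical echo-path impulse response
L, mu, eps = len(h), 0.5, 1e-8
w = [0.0] * L                           # adaptive filter taps
buf = [0.0] * L                         # last L far-end samples u(n), u(n-1), ...

for _ in range(5000):
    buf = [random.gauss(0, 1)] + buf[:-1]
    d = sum(hi * ui for hi, ui in zip(h, buf))   # echo v(n); no near-end x(n) here
    y = sum(wi * ui for wi, ui in zip(w, buf))   # filter output y(n)
    e = d - y                                    # error e(n)
    norm = sum(ui * ui for ui in buf) + eps
    w = [wi + mu * e * ui / norm for wi, ui in zip(w, buf)]

print([round(wi, 3) for wi in w])        # converges close to h
```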
  • asked a question related to Algorithms
Question
6 answers
Can anyone suggest a data compression algorithm to compress and regenerate data from sensors (e.g. an accelerometer, whose output consists of time and acceleration) used to obtain structural vibration response? I have tried using PCA, but I am unable to regenerate my data. Kindly suggest a suitable method to go with PCA, or some other algorithm.
Relevant answer
Answer
Saranya Bharathi I am wondering, which algorithm did you use?
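On the regeneration problem specifically: PCA reconstruction is the score matrix projected back through the same components, plus the mean that was removed before fitting; omitting that mean (or keeping too few components) is a common cause of failed regeneration. A NumPy sketch with synthetic vibration-like data (all names and signals are made up):

```python
# PCA compress/reconstruct sketch (NumPy). Reconstruction is the projection
# inverted with the same component matrix, plus the mean removed beforehand;
# omitting the mean is a frequent cause of "cannot regenerate my data".
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
X = np.column_stack([np.sin(2 * np.pi * 5 * t + ph) for ph in (0.0, 0.4, 0.9)])
X += 0.01 * rng.standard_normal(X.shape)           # noisy 3-channel "vibration"

mean = X.mean(axis=0)
Xc = X - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                              # keep 2 principal components
Z = Xc @ Vt[:k].T                                  # compressed scores (200 x 2)
X_hat = Z @ Vt[:k] + mean                          # reconstruction

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print("relative reconstruction error:", err)
```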
  • asked a question related to Algorithms
Question
7 answers
What would be a suitable model for solving the regression problem? Does any hybrid algorithm or new model/framework exist to solve this problem in deep learning? How promising is deep learning for regression tasks?
Relevant answer
Answer
It is very similar to using deep learning for classification; you just use different layers at the end of the network. E.g. in a CNN, instead of a softmax layer and cross-entropy loss, you can use a regression layer and MSE loss, etc.
It will be as useful as deep classification networks. But it depends on your data and problem. RNNs (especially LSTMs) are useful for time-series and sequential data such as speech, music, and other audio signals, EEG and ECG signals, stock market data, weather forecasting data, etc.
If you are using MATLAB, here are two examples (CNN and LSTM) for the implementation:
  • asked a question related to Algorithms
Question
5 answers
I need to implement the epsilon-constraint method to solve multi-objective optimization problems, but I don't know how to choose each epsilon interval, nor when to terminate the algorithm, that is to say, the stopping criterion.
Relevant answer
Answer
Hi my dear friend
Good day!
I think the below book will help you a lot to provide relevant codes.
Messac, A. (2015). Optimization in practice with MATLAB®: for engineering students and professionals. Cambridge University Press.
best regards.
Saeed Rezaeian-Marjani
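The mechanics of the method can also be seen on a deliberately tiny bi-objective problem. Epsilon is usually swept over the constrained objective's range between its own optimum and its value at the other objective's optimum (the payoff table), and the sweep can stop once the Pareto front stops changing. A sketch where each subproblem is solved by brute-force grid search (a real solver would replace that part):

```python
# Epsilon-constraint sketch on a toy bi-objective problem:
#   minimize f1(x) = x^2   subject to   f2(x) = (x - 2)^2 <= eps.
# Sweeping eps between f2's own optimum (0) and its value at f1's optimum (4)
# traces the Pareto front; each subproblem here is solved by grid search.
def eps_constraint(f1, f2, eps, lo=-5.0, hi=5.0, steps=10001):
    xs = (lo + (hi - lo) * i / (steps - 1) for i in range(steps))
    feasible = [x for x in xs if f2(x) <= eps]
    return min(feasible, key=f1)

f1 = lambda x: x ** 2          # objective kept as the objective
f2 = lambda x: (x - 2) ** 2    # objective moved into the constraint
for eps in [4.0, 2.0, 1.0, 0.25]:
    x = eps_constraint(f1, f2, eps)
    print(f"eps={eps:5}: x={x:+.3f}  f1={f1(x):.3f}  f2={f2(x):.3f}")
```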
  • asked a question related to Algorithms
Question
3 answers
It seems that the quadprog function of MATLAB, the (conventional) interior-point algorithm, is not fully exploiting the sparsity and structure of the sparse QP formulation based on my results.
In Model Predictive Control, the computational complexity should scale linearly with the prediction horizon N. However, results show that the complexity scales quadratically with the prediction horizon N.
What can be possible explanations?
Relevant answer
Answer
Thanks for your answer.
I gave the wrong information. Quadprog is not exploiting the sparsity and structure of the sparse QP formulation at all. So, the computation complexity scales cubically with N.
The barrier algorithm of Gurobi was not fully exploiting the sparsity and structure of the sparse QP formulation. So, the computation complexity scales cubically with N. Do you have some documentation about this? At the Gurobi website, I could not find anything relevant to answer this question.
Could you maybe also react to my last topic about why interior point algorithm of Gurobi and quadprog give same optimized values x and corresponding objective function value , but the first-order solver GPAD gives the same optimized values x, but another objective function value which is a factor 10 bigger?
Your answers are always really helpful.
Regards
  • asked a question related to Algorithms
Question
13 answers
When doing machine learning, do we normally use several algorithms for comparison? For example, if the RMSE of SVM is 0.1, how do I come up with the conclusion that this model performed well? Just based on saying RMSE value is low, so the result is good? But if there is no comparison, how do I say it is low?
Or shall I include other algorithms e.g. random forest etc to do a comparison of the value? I intended to use only SVM regression, but now I am a bit stuck at the interpretation of the results.. Thank you in advance!
Relevant answer
Answer
Sorry to differ: even if one's contribution is a unique method, it is still better to compare different regression models to complete the picture (recommending the best regression model to accompany one's method cannot be done without comparison).
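One way to anchor an RMSE without adopting a second learning algorithm is to compare it against a trivial baseline, such as always predicting the mean of the targets (this is the comparison underlying the R² statistic). A sketch with made-up numbers standing in for SVM predictions:

```python
# Sketch: an RMSE only becomes interpretable relative to a baseline.
# Here the baseline predicts the mean of the observed targets; a useful
# model should beat it clearly (this comparison underlies R^2).
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

y_true = [3.1, 2.9, 4.2, 5.0, 3.8]          # hypothetical test targets
y_model = [3.0, 3.0, 4.0, 4.8, 3.9]         # hypothetical SVM predictions
mean = sum(y_true) / len(y_true)
y_base = [mean] * len(y_true)

print("model RMSE:   ", round(rmse(y_true, y_model), 3))
print("baseline RMSE:", round(rmse(y_true, y_base), 3))
```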
  • asked a question related to Algorithms
Question
5 answers
Dear all,
I have a watershed and wish to randomly split the watershed into different 'artificial' farms, and the farm area should follow an exponential distribution as found empirically from other studies. The 'artificial' farm could be rectangular or any other shape.
Is there any way to do this in GIS or other software? Any method achieved through shapefile or raster can be accepted.
Thank you!
Pan
Relevant answer
Answer
An interesting sampling problem. I guess the bottom line is the brute force MCS with Delaunay triangulation. Adding some regularization to make it an optimization problem may work better. Sorry for not being very helpful.
  • asked a question related to Algorithms
Question
38 answers
Hello, everyone. I am a student of electrical engineering and my research field is related to the optimization of a power system.
I know that the algorithm we should choose depends on our problem, but there are lots of heuristic and metaheuristic algorithms available to choose from. It also takes some time to understand a specific algorithm, and after that we may find that the chosen algorithm was not the best for the problem. So, given my problem, how can I choose the best algorithm?
Is there any simple solution available that can save my time as well?
Thank you for your precious time.
Relevant answer
Answer
As most people have indicated, the best solution depends on the 'surface' you are optimising and the number of dimensions.
If you have a large number of dimensions and a smooth surface, then traditional methods that use derivatives (or approximations to derivatives), such as the quasi-Newton method, work well. If there are a small number of dimensions and the surface is fairly sensible but noisy, then the Nelder-Mead simplex works well. For higher dimensions with noise, but still fairly sensible (hill-like) surfaces, simulated annealing works. Surfaces which are discontinuous and misleading are best addressed with the more modern heuristic techniques such as evolutionary algorithms. If you are trying to find a Pareto surface, then use a multi-objective genetic algorithm.
So the key questions are: how many dimensions; is the surface reasonably smooth (reliable derivatives); and do you want a Pareto surface, or can you run multiple single-criterion optimisations? Another question is whether you need to know the optimum or just want a very good result. There are often good algorithms for approximating the best result, for example using a simplified objective function, which can be evaluated much faster to get a good rough solution that may then be the starting point for a high-fidelity solution.
Sorry if this indicates it is complex; it really does depend on the solution space. Do not forget traditional mathematical methods used in Operational Research as well. Good luck!
  • asked a question related to Algorithms
Question
4 answers
I want to execute the Apriori algorithm for association rule mining in data mining through MATLAB.
Relevant answer
Answer
I work on the same topic and I use it;
it works with small datasets, but with large databases like mushroom or BMS1 or other datasets it takes a long time.
Who has a solution for that? @ Shafagat Mahmudova
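Regarding why it slows down on large datasets: each Apriori level rescans every transaction, so the cost grows roughly with (number of candidates × number of transactions) per level. A pure-Python sketch of the level-wise logic (simplified, without the candidate subset-pruning step) makes the structure visible:

```python
# Level-wise Apriori sketch (pure Python, simplified: no subset pruning).
# Each level k requires a full scan of the transaction list, which is why
# runtime grows quickly on large datasets such as mushroom.
from itertools import combinations

def apriori(transactions, min_support):
    items = sorted({i for t in transactions for i in t})
    frequent, k = {}, 1
    level = [frozenset([i]) for i in items]
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}   # full scan
        current = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(current)
        # candidate generation: unions of frequent k-itemsets with size k+1
        keys = list(current)
        level = list({a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1})
        k += 1
    return frequent

T = [frozenset(t) for t in [{"bread", "milk"}, {"bread", "butter"},
                            {"bread", "milk", "butter"}, {"milk"}]]
for itemset, support in sorted(apriori(T, 2).items(), key=lambda kv: -kv[1]):
    print(set(itemset), support)
```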
  • asked a question related to Algorithms
Question
2 answers
I'm trying to implement a fall detection algorithm written in C on a Zybo board.
I am using Vivado HLS.
I don't know how to start, even though I already did the tutorials related to Zynq-7000.
Thank you for any help.
Relevant answer
Answer
V. Carletti, A. Greco, A. Saggese, and M. Vento, "A smartphone-based system for detecting falls using anomaly detection," in ICIAP 2017 Proceedings, 2017. [Online] Available: https://link.springer.com/chapter/10.1007/978-3-319-68548-9_45
V. Carletti, A. Greco, V. Vigilante, and M. Vento, "A wearable embedded system for detecting accidents while running," in VISAPP 2018 - Proceedings of the International Conference on Computer Vision Theory and Applications, 2018. [Online] Available: https://www.scitepress.org/Papers/2018/66128/66128.pdf
  • asked a question related to Algorithms
Question
2 answers
I am currently working on a binary classification of EEG recordings and I came across with CSP.
As far as I understand, CSP allows you to choose the best features by maximizing the variance between two classes, which is perfect for what I'm doing. Here follow the details and questions:
- I have N trials per subject, from which half belongs to class A and the other half to class B.
- Let's say I want to apply CSP to this subject's trials. From what I understood, I should apply CSP to all my trials (please correct me if I'm wrong here). Do I arbitrarily choose which trial from class A to compare with one from class B? Is the order in which I do it irrelevant?
- After CSP I should get the projection matrix (commonly written as W), from which I can obtain the transformed signal and compute the variances (some of which will be my features). Why is the variance transformed with a log function in most papers?
Thank you very much
Relevant answer
Answer
The projection matrix W is essentially the eigendecomposition of the covariance of the data matrix X. Alternatively, this can also be done using singular value decomposition (SVD), which is a more efficient way of handling high-dimensional data, as it avoids forming the large matrix XᵀX. Apply SVD to the entire dataset and then plot the cumulative sum of the squared singular values against the number of singular values. This can help in selecting the number of SVs which accounts for, say, 95% of the variance in the data. A log scale shows relative changes, while a linear scale shows absolute changes.
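The cumulative-variance selection described above, sketched with NumPy on a synthetic matrix whose rank is known to be 2 (the orthonormal factors are constructed so the singular values are exactly 3 and 2):

```python
# Sketch: choosing the number of singular values from the cumulative
# proportion of variance (squared singular values), as described above.
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((100, 2)))   # orthonormal columns
P, _ = np.linalg.qr(rng.standard_normal((10, 2)))
X = 3.0 * np.outer(Q[:, 0], P[:, 0]) + 2.0 * np.outer(Q[:, 1], P[:, 1])

s = np.linalg.svd(X, compute_uv=False)               # avoids forming X.T @ X
explained = s ** 2 / np.sum(s ** 2)
cumulative = np.cumsum(explained)
k = int(np.searchsorted(cumulative, 0.95)) + 1       # components for 95% variance
print(np.round(s[:3], 3))                            # [3. 2. ~0]
print("components needed:", k)                       # 2 for this rank-2 matrix
```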
  • asked a question related to Algorithms
Question
4 answers
For lightweight encryption.
Relevant answer
Answer
Respected Sir,
Please send me the source code of your project.
  • asked a question related to Algorithms
Question
3 answers
I am working on optimizing well placement in the condensate reservoir model using an algorithm. Any kind of code example will be appreciated.
Relevant answer
  • asked a question related to Algorithms
Question
3 answers
Hello,
I am working with a convex hull in n-dimensions (n>3) and I am having problems generating points on the convex hull surface. Ideally, I would like the points to be uniformly distributed or almost uniformly distributed. I am mostly looking for something simple to understand and implement.
(I am using scipy.spatial.ConvexHull python library)
Any help would be greatly appreciated :)
edit: Thank you very much for the answers already given.:) I have reformulated the question hoping to remove any confusion.
Thanks,
Noemie
Relevant answer
Answer
Dear Noemie,
I don't think that the question can really be answered without posing it in a more precise manner. First and foremost would be your expectations about the meaning of a uniform distribution of points on a convex hull...
However, if you are interested in convex hulls that have been generated from some process (and aren't easily relatable functions) then my thought would be to adapt one of the convex hull finding algorithms and take a series of random walks along the boundaries from some set of known initial points. These could be made to approximate whichever expected distances and variances you decide you are looking for.
An alternate approach would be to "walk" the boundary according to a n space grid again using one of the boundary finding algorithms and randomly select grid points. These could then be perturbed to correspond with your planned distance metric and distribution.
Good luck
  • asked a question related to Algorithms
Question
6 answers
When I compute the time complexity of ciphertext-policy attribute-based encryption (CP-ABE), I find it to be O(1) by tracing each step in the code, most of which are assignment operations. Is it possible that the time complexity of CP-ABE is O(1), or do I have a problem? The code that I used is the following, where ITERS = 1.
public static List encrypt(String policy, int secLevel, String type, byte[] data, int ITERS) {
    double results[] = new double[ITERS];
    DETABECipher cipher = new DETABECipher();
    long startTime, endTime;
    List list = null;
    for (int i = 0; i < ITERS; i++) {
        startTime = System.nanoTime();
        list = cipher.encrypt(data, secLevel, type, policy);
        endTime = System.nanoTime();
        results[i] = (double)(endTime - startTime) / 1000000000.0;
    }
    return list;
}

public List encrypt(byte abyte0[], int i, String s, String s1) {
    AccessTree accesstree = new AccessTree(s1);
    if (!accesstree.isValid()) {
        System.exit(0);
    }
    PublicKey publickey = new PublicKey(i, s);
    if (publickey == null) {
        System.exit(0);
    }
    AESCipher.genSymmetricKey(i);
    timing[0] = AESCipher.timing[0];
    if (AESCipher.key == null) {
        System.exit(0);
    }
    byte abyte1[] = AESCipher.encrypt(abyte0);
    ABECiphertext abeciphertext = ABECipher.encrypt(publickey, AESCipher.key, accesstree);
    timing[1] = AESCipher.timing[1];
    timing[2] = ABECipher.timing[3] + ABECipher.timing[4] + ABECipher.timing[5];
    long l = System.nanoTime();
    LinkedList linkedlist = new LinkedList();
    linkedlist.add(abyte1);
    linkedlist.add(AESCipher.iv);
    linkedlist.add(abeciphertext.toBytes());
    linkedlist.add(new Integer(i));
    linkedlist.add(s);
    long l1 = System.nanoTime();
    timing[3] = (double)(l1 - l) / 1000000000D;
    return linkedlist;
}

public static byte[] encrypt(byte[] paramArrayOfByte) {
    if (key == null) {
        return null;
    }
    byte[] arrayOfByte = null;
    try {
        long l1 = System.nanoTime();
        cipher.init(1, skey);
        arrayOfByte = cipher.doFinal(paramArrayOfByte);
        long l2 = System.nanoTime();
        timing[1] = ((l2 - l1) / 1.0E9D);
        iv = cipher.getIV();
    } catch (Exception localException) {
        System.out.println("AES MODULE: EXCEPTION");
        localException.printStackTrace();
        System.out.println("---------------------------");
    }
    return arrayOfByte;
}

public static ABECiphertext encrypt(PublicKey paramPublicKey, byte[] paramArrayOfByte, AccessTree paramAccessTree) {
    Pairing localPairing = paramPublicKey.e;
    Element localElement1 = localPairing.getGT().newElement();
    long l1 = System.nanoTime();
    localElement1.setFromBytes(paramArrayOfByte);
    long l2 = System.nanoTime();
    timing[3] = ((l2 - l1) / 1.0E9D);
    l1 = System.nanoTime();
    Element localElement2 = localPairing.getZr().newElement().setToRandom();
    Element localElement3 = localPairing.getGT().newElement();
    localElement3 = paramPublicKey.g_hat_alpha.duplicate();
    localElement3.powZn(localElement2);
    localElement3.mul(localElement1);
    Element localElement4 = localPairing.getG1().newElement();
    localElement4 = paramPublicKey.h.duplicate();
    localElement4.powZn(localElement2);
    l2 = System.nanoTime();
    timing[4] = ((l2 - l1) / 1.0E9D);
    ABECiphertext localABECiphertext = new ABECiphertext(localElement4, localElement3, paramAccessTree);
    ShamirDistributionThreaded localShamirDistributionThreaded = new ShamirDistributionThreaded();
    localShamirDistributionThreaded.execute(paramAccessTree, localElement2, localABECiphertext, paramPublicKey);
    timing[5] = ShamirDistributionThreaded.timing;
    return localABECiphertext;
}
}

public ABECiphertext(Element element, Element element1, AccessTree accesstree) {
    c = element;
    cp = element1;
    cipherStructure = new HashMap();
    tree = accesstree;
}

public void execute(AccessTree accesstree, Element element, ABECiphertext abeciphertext, PublicKey publickey) {
    pairing = publickey.e;
    ct = abeciphertext;
    PK = publickey;
    countDownLatch = new CountDownLatch(accesstree.numAtributes);
    timing = 0.0D;
    double d = System.nanoTime();
    Thread thread = new Thread(new Distribute(abeciphertext, accesstree.root, element));
    thread.start();
    try {
        countDownLatch.await();
        long l = System.nanoTime();
        timing = ((double)l - d) / 1000000000D;
        synchronized (mutex) {
        }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
Relevant answer
Answer
That's a hardware issue and nothing else. Best, T.T.
  • asked a question related to Algorithms
Question
16 answers
In the ε-constraint method, one objective will be used as the objective function, and the remaining objectives will be used as constraints using the epsilon value as the bound. In this case:
- Do we need to apply penalty method to handle the constraint?
- How to select the best solution?
- How to get the final Pareto set?
Relevant answer
Answer
You can perform a web search on "epsilon constraint method multi objective", which returns quite a few hits on Google Scholar - and see what others have done in the recent past. It is a quite popular tool.
You may look at the useful links below:
I hope it can help you!
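For concreteness, the epsilon-constraint loop described in the question can be sketched on a toy bi-objective linear program (the model and all numbers below are made up purely for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize f1 = -x1 and f2 = -x2 subject to x1 + x2 <= 4, x >= 0.
# Epsilon-constraint: keep f1 as the objective, bound f2 by eps (-x2 <= eps),
# and sweep eps over the range of f2 to trace out the Pareto front.
pareto = []
for eps in np.linspace(-4.0, 0.0, 5):
    res = linprog(c=[-1, 0],                  # objective: minimize -x1
                  A_ub=[[1, 1], [0, -1]],     # x1 + x2 <= 4  and  -x2 <= eps
                  b_ub=[4, eps],
                  bounds=[(0, None), (0, None)])
    if res.success:
        pareto.append((res.x[0], res.x[1]))   # one Pareto point per eps value
```

Each feasible eps yields one Pareto point, and constraint handling is done by the solver itself, so no penalty method is needed here; the "best" solution is then picked from the resulting Pareto set according to the decision maker's preference.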
  • asked a question related to Algorithms
Question
5 answers
Dear colleagues ,
If you consider a "complete dense" multivariate polynomial, does there exist a Horner factorization scheme like the one for a classical univariate polynomial?
(By "complete dense", I mean all the possible monomials up to a given total degree r; the number of monomials is given by the known binomial formula C(n+r, r), if I am correct.)
Thx for answers
Relevant answer
Answer
Hope you find the following article useful
Best regards
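For what it's worth, the standard answer is yes: evaluate recursively, treating the polynomial as univariate in the first variable with coefficients that are themselves polynomials in the remaining variables. A rough Python sketch (the dict-of-exponent-tuples representation is my own choice, not from the question):

```python
def horner_eval(poly, point):
    """Evaluate a multivariate polynomial by a recursive Horner scheme.

    poly: dict mapping exponent tuples (e1, ..., en) to coefficients,
          e.g. 3*x^2*y is {(2, 1): 3}.
    point: tuple of values (x1, ..., xn).
    """
    if not poly:
        return 0.0
    if len(next(iter(poly))) == 0:       # constant polynomial
        return poly.get((), 0.0)
    # Group terms by the exponent of the first variable; each group is a
    # sub-polynomial in the remaining variables.
    groups = {}
    for exps, c in poly.items():
        groups.setdefault(exps[0], {})[exps[1:]] = c
    x, result = point[0], 0.0
    for e in range(max(groups), -1, -1): # Horner step in the first variable
        result = result * x + horner_eval(groups.get(e, {}), point[1:])
    return result
```

For example, for P(x, y) = 3x^2*y + 2xy + 5, `horner_eval({(2, 1): 3, (1, 1): 2, (0, 0): 5}, (2, 3))` gives 53.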
  • asked a question related to Algorithms
Question
2 answers
Intuitively, maximum likelihood inference on high-frequency data should be slow because of the large data set size. I was wondering if anyone has experience with slow inference; I could then design optimization algorithms to speed up the inference.
I tried this with Yacine Ait-Sahalia's work on estimating diffusion models, using his code, which (unfortunately!) is pretty fast, even for large data sets. If anyone knows a large, slow, high-frequency financial econometric problem, do let me know.
Relevant answer
Answer
For large samples exact maximum likelihood can be approached reasonably well by faster estimation methods. But I do not understand why you want slow methods. As far as I know, Ait Sahalia code is good. Why do you say "(Unfortunately!)" ?
  • asked a question related to Algorithms
Question
4 answers
I want to study RPL in mWSN. I am using the NS2.35 and WSNet simulators, and Cooja.
Can I find some source code of algorithms for improving RPL?
Relevant answer
Answer
RPL is an IPv6-based Routing Protocol for Low-Power and Lossy Networks (LLNs). LLNs are a class of network in which both the routers and their interconnects are constrained devices (i.e., limited in processing power, memory, and energy consumption). RPL routing is based on Destination-Oriented Directed Acyclic Graphs (DODAGs).
Step 1: Open the Contiki OS with VMware Workstation and log in as the Contiki user (password: user).
Step 2: Open the terminal on the Contiki desktop and change to the right directory to run the Cooja simulator tools. In the terminal:
Go to the directory: cd "/home/user/contiki/tools/cooja" ----> Press Enter
Give the command: ant run ------> Press Enter
  After successful execution of the above command, the makefile will build automatically and the Contiki Cooja network simulator application will appear. It's a blue-colored terminal.
  • asked a question related to Algorithms
Question
6 answers
We have some research works related to algorithm design and analysis. Most computer science journals focus on current trends such as Machine Learning, AI, Robotics, Blockchain Technology, etc. Please suggest some journals that publish articles related to core algorithmic research.
Relevant answer
Answer
There are several journal for algorithms. Some of them are:
Algorithmica
The Computer Journal
Journal of Discrete Algorithms
ACM Journal of Experimental Algorithmics
ACM Transactions on Algorithms
SIAM Journal on Computing
ACM Computing Surveys
Algorithms
Closely related:
Theoretical Computer Science
Information Systems
Information Sciences
ACM Transactions on Information Systems
Information Retrieval
International Journal on Foundations of Computer Science
Related:
IEEE Transactions on Information Theory
Information and Computation
Information Retrieval
Knowledge and Information Systems
Information Processing Letters
ACM Computing Surveys
Information Processing and Management
best regards,
rapa
  • asked a question related to Algorithms
Question
2 answers
I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X that is ideally also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in the answer, that would be great.
Relevant answer
Answer
I've found a description with algorithm implemented in R for you. I hope it helps: http://www.joyofdata.de/blog/testing-linear-separability-linear-programming-r-glpk/
Another nice description with implementation in python you can check as well: https://www.tarekatwan.com/index.php/2017/12/methods-for-testing-linear-separability-in-python/
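The LP-based test described in those links can be sketched directly with scipy: the two sets are strictly linearly separable iff the feasibility LP below has a solution (the function name and the margin of 1 are my own choices; enumerating all of {0, 1}^n limits this sketch to small n):

```python
from itertools import product
from scipy.optimize import linprog

def linearly_separable(X, n):
    """LP feasibility test: is X (a set of 0/1 n-tuples) strictly
    linearly separable from its complement in {0, 1}^n?"""
    X = set(map(tuple, X))
    rows, rhs = [], []
    for p in product((0, 1), repeat=n):
        if p in X:   # want w.p + b >= 1, i.e. -w.p - b <= -1
            rows.append([-v for v in p] + [-1])
        else:        # want w.p + b <= -1
            rows.append(list(p) + [1])
        rhs.append(-1)
    # Zero objective: we only care whether some (w, b) exists at all.
    res = linprog(c=[0] * (n + 1), A_ub=rows, b_ub=rhs,
                  bounds=[(None, None)] * (n + 1))
    return res.success
```

For instance, `linearly_separable([(0, 0), (0, 1), (1, 0)], 2)` is True, while the XOR split `linearly_separable([(0, 0), (1, 1)], 2)` is False.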
  • asked a question related to Algorithms
Question
17 answers
What are the links in their definitions? How do you interconnect them? What are their similarities or differences? ...
I would be grateful if you could reply by referring to valid scientific literature sources.
Relevant answer
Answer
All are approaches that exploit the computational intelligence paradigm. Machine learning refers to data analytics. Evolutionary computation deals with optimization problems.
  • asked a question related to Algorithms
Question
2 answers
Is there any polynomial-time (reasonably efficient) reduction that makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?
Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties that are easy to prove for the binary case, and a reduction as asked above could help generalize those properties to an arbitrary alphabet.
Relevant answer
Answer
The DP algorithm for LCS finds the solution in O(n*m) time, which is polynomial. The reduction you are looking for is the problem itself: if you solve the problem for an alphabet of arbitrary size, you have in particular solved it for size 2.
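For reference, the alphabet-independent DP mentioned above, as a short sketch:

```python
def lcs_length(a, b):
    """Classic O(n*m) dynamic program for the length of the longest
    common subsequence; nothing in it depends on the alphabet size."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

The only place the alphabet enters is the equality test `a[i-1] == b[j-1]`, which is why the same code works for bit-strings and arbitrary symbols alike.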
  • asked a question related to Algorithms
Question
2 answers
Dijkstra's algorithm performs the best sequentially on a single CPU core. Bellman-Ford implementations and variants running on the GPU outperform this sequential Dijkstra case, as well as parallel Delta-Stepping implementations on multicores, by several orders of magnitude for most graphs. However, there exist graphs (such as road-networks) that perform well only when Dijkstra's algorithm is used. Therefore, which implementation and algorithm should be used for generic cases?
Relevant answer
Answer
Maleki et al. achieved improvements over delta-stepping:
Saeed Maleki, Donald Nguyen, Andrew Lenharth, María Garzarán, David Padua, and Keshav Pingali. 2016. DSMR: A Parallel Algorithm for Single-Source Shortest Path Problem. In Proc.\ 2016 International Conference on Supercomputing (ICS '16). ACM, New York, NY, USA, Article 32, DOI: https://doi.org/10.1145/2925426.2926287.
At the end of the Abstract they write:
"Our results show that DSMR is faster than the best previous algorithm, parallel [Delta]-Stepping, by up-to 7.38x".
Page 9, col 1, line -3:
"Machines: Two experimental machines were used for the evaluation: a shared-memory machine with 40 cores (4 10-core Intel(R) Xeon^TM E7-4860) and 128GB of memory; the distributed[-]memory machine Mira, a supercomputer at Argonne National Lab. Mira has 49152 nodes and each node has 16 cores (PowerPC A2) with 16GB of memory."
Best wishes,
Frank
  • asked a question related to Algorithms
Question
1 answer
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an Interaction Designer from Switzerland. I am currently working on my Bachelor's Thesis, which deals with Serendipity and Algorithms: how can algorithms work less rationally and help us come across more serendipitous encounters?
As an experiment, I created a small website, which searches for Wikipedia entries that are associated with a certain term. The results are only slightly related and should offer serendipitous encounters.
Feel free to try it and comment your thoughts on it! I'm happy for any feedback.
Thank you
Michael
Relevant answer
Answer
nice thinking
  • asked a question related to Algorithms
Question
5 answers
Intel's SGX extensions create isolated application enclaves, which disallow information leakage and unverified access to private data. However, SGX is now known to be broken as some works have leaked data on real hardware. What do such works exploit to break SGX's security invariants?
  • asked a question related to Algorithms
Question
4 answers
Dijkstra's algorithm performs well sequentially. However, applications require even better parallel performance because of real-time constraints. Implementations such as SprayList and Relaxed Queues allow parallelism on priority-queue operations in Dijkstra's algorithm, with various performance vs. accuracy tradeoffs. Which of these algorithms is the best in terms of raw parallel performance?
Relevant answer
  • asked a question related to Algorithms
Question
33 answers
Hi, I have little experience with Genetic algorithm previously.
Currently I am trying to use a GA for some scheduling: I have some events and rooms, and a room must be scheduled for each event; each event has different time requirements, and there are some constraints on the availability of rooms.
But I want to know: are there any other alternatives to GA, since GA is a somewhat random and slow process? Are there any other techniques which can replace GA?
Thanks in advance.
Relevant answer
Answer
You may try Backtracking Search Optimization Algorithm.
  • asked a question related to Algorithms
Question
5 answers
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of realistic benchmark problems. Could you please guide me to access such benchmark problems?
Thank you very much in advance.
Relevant answer
Answer
Yes, I see. Transaction processing has also a constraint on response time. Optimization then takes more of its canonical form: Your goal "as fast as possible" (this refers to network traversal or RTT) becomes the objective, subject to constraints on benchmark performance which are typically transaction response time and an acceptable range of resource utilization, including link utilization. Actual benchmarks known to me that accomplish such optimization are company-proprietary (I have developed some but under non-disclosure contract). I do not know of very similar standard benchmarks but do have a look at TPC to see how close or how well a TPC standard benchmark would fit your application. I look forward to seeing other respondents who might know actual public-domain sample algorithms.
  • asked a question related to Algorithms
Question
3 answers
I need The Goldstein 2D branch cut algorithm in MATLAB. Does any one have a working version ?  I need to compare it with another algorithm.
  • asked a question related to Algorithms
Question
9 answers
Hello!
Many authors of books on design and algorithms (Weapons of Math Destruction, The Filter Bubble, etc.) have claimed that in order to serve the human mind better, algorithms might need to work more irrationally.
My name is Michael and I'm an Interaction Designer from Switzerland. I am currently working on my Bachelor's Thesis, which deals with Serendipity and Algorithms: how can algorithms work less rationally and help us come across more serendipitous encounters?
I was wondering whether any of you are familiar with some sort of an irrational algorithm. Does this exist? Let me know if you know something in this field, or what you think about it; anything helps!
Thank you
Michael
Relevant answer
Answer
Dear Mi Sc ,
Simply put, serendipity is when the connection between things leads to something positive. So if serendipity is simply a series of positive connections that can be engineered, then serendipity can be calculated. In fact, I would go as far as saying that serendipity is an algorithm.
Regards,
Shafagat
  • asked a question related to Algorithms
Question
5 answers
Synchronization and memory costs are becoming humongous bottlenecks in today's architectures. However, algorithm complexities assume these operations as constant, which are done in O(1) time. What are your opinions in this regard? Are these good assumptions in today's world? Which algorithm complexity models assume higher costs for synchronization and memory operations?
Relevant answer
Answer
Just adding to what was previously said. In special-purpose computers (embedded systems) it is possible to have hardware-based operations that are not necessarily O(1) (or any other complexity order, for that matter) like their general-purpose counterparts. Such is the case in certain devices designed for cryptology applications.
  • asked a question related to Algorithms
Question
3 answers
Current parallel BFS algorithms are known to have reduced time complexity. However, such cases do not take into account synchronization costs which increase exponentially with the core count. Such synchronization costs stem from communication costs due to data movement between cores, and coherence traffic if using a cache coherent multicore. What is the best parallel BFS algorithm available in this case?
Relevant answer
Answer
Level-Synchronous Parallel Breadth-First Search Algorithms
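The level-synchronous structure that family of algorithms shares can be sketched sequentially; the inner loop over the frontier is what parallel implementations distribute across cores, with a barrier between levels (a sketch of the idea, not tuned parallel code):

```python
def level_sync_bfs(adj, source):
    """Level-synchronous BFS: expand the frontier one level at a time.
    adj: dict mapping a vertex to an iterable of its neighbours."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        # In a parallel version, the frontier is split among workers here,
        # and a barrier separates one level from the next.
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist
```

The per-level barrier is exactly where the synchronization cost raised in the question shows up: one global synchronization per BFS level.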
  • asked a question related to Algorithms
Question
4 answers
Graph algorithms such as BFS and SSSP (Bellman-Ford or Dijkstra's algorithm) generally exhibit a lack of locality. A vertex at the start of the graph may want to update an edge that exists in a farther part of the graph. This is a problem in graphs whose memory requirements far exceed those available in the machine's DRAM. How must the graph be streamed into the machine in this case? What are the consequences for a parallel multicore in such cases where access latency and core utilization are of utmost importance?
Relevant answer
Answer
You could combine clusters of vertices into super-vertices, find a route through them, and delete all vertices of the original graph that are not contained in a visited super-vertex. Then you proceed with a smaller graph and smaller super-vertices.
Regards,
Joachim
  • asked a question related to Algorithms
Question
46 answers
Or is it just an effective name to call adaptive and self-learning programmed algorithms?
Relevant answer
It is a good question. It seems like the "intelligence" here comes from the human who created the artificial element and then lets it control the rest of his life.
  • asked a question related to Algorithms
Question
5 answers
In previous versions of opencv , there was an option to extract specific number of keypoints according to desire like  
kp, desc = cv2.sift(150).detectAndCompute(gray_img, None)
But as of OpenCV 3.1, SIFT and other "non-free" algorithms were moved to xfeatures2d, so the function gives an error. Kindly tell me how I can set a limit on the number of keypoints to be extracted using OpenCV 3.1. Thanks!
Relevant answer
Answer
n_kp = 5
sift = cv2.xfeatures2d.SIFT_create(nfeatures=n_kp)
kp, desc = sift.detectAndCompute(gray_img, None)
  • asked a question related to Algorithms
Question
16 answers
I am looking for a public dataset for e-learning that I can use for testing performance and accuracy of Reccomender Systems algorithms. Anyone with an idea where I can find a public dataset?
Relevant answer
Answer
Coursera MOOC's
  • asked a question related to Algorithms
Question
7 answers
I have been trying to implement BSA in Python and looks like the algorithm is pretty confusing. Has anyone implemented this algorithm? Any language.
Thanks.
Regards,
Akshay.
Update:
I have implemented the BSA algorithm here -> https://github.com/akshaybabloo/Spikes. If anyone needs it, please feel free to fork it. 
Relevant answer
Answer
Hi, the code is very resourceful. Can you also please upload the signal reconstruction part (spike to analog signal)?
  • asked a question related to Algorithms
Question
4 answers
Some workloads or even inputs perform well on GPUs, while others perform well on multicores. How do we decide which machine to buy for a generic problem base for optimal performance? Cost is NOT taken as a factor here.
Relevant answer
Answer
Dear Ahmad,
Besides cost, there are many factors you have to consider.
First, you are asking which hardware to use for a given algorithm or implementation, which I think is not the right question because a parallel algorithm is developed taking into consideration the hardware.
I'm not going to take the same approach if my solution is for a cluster using MPI, a multicore processor using OpenMP, or a manycore processor using CUDA.
So, before deciding the hardware and the algorithm you have to analyze the problem (you may be interested in looking for Foster's methodology). How can it be decomposed (many independent tasks, few coarse grain tasks, etc)? Is it regular (in its memory access pattern, in the operations done on data)? What's the size of the data (can it be fitted in a GPU memory or in the main memory)? Is it memory bound or computation intensive?
After this process, you can take the decision about the hardware and after that you can start developing the program.
Finally, when you have a functional program, you should start with a performance tuning process for maximizing the performance indexes of your interest (speedup, efficiency, power consumption, throughput, etc.).
Best!
  • asked a question related to Algorithms
Question
12 answers
I want to make a small piece of software using C++. The input of this software will be source code written in a text file. I want to identify the loops and the if statements in the source code and do some operations on them; the output will be the same source code with additional comments. My question: are there any libraries in C++ that can help me identify these places and deal with them as a block?
Thank you
Relevant answer
Answer
LLVM and Clang are pretty heavy pieces of software. If you need a simple syntax analyzer, try GNU Bison. https://en.m.wikipedia.org/wiki/GNU_Bison
  • asked a question related to Algorithms
Question
13 answers
Hi,
As per the definition of logic obfuscation, an obfuscated circuit stays in obfuscated mode upon global reset (i.e., in the initial state) and generates incorrect output; upon receiving the correct initialization sequence it enters functional mode and generates the intended outputs.
This is fine for a design that is not connected to any further critical systems. If the obfuscated logic needs to be connected to further safety-critical systems, won't the incorrect values generated in obfuscated mode affect the critical systems?
In such a case, how should logic obfuscation be applied?
Thanks in advance.
Relevant answer
Answer
You can read the literature yourself and come up with your own conclusions. It would do you good, you would sound less like a non-expert rambling about something you have very little clue about. I am done here.
  • asked a question related to Algorithms
Question
6 answers
- Using Mixed integer liner programming.
- I need to determine the optimal timing to invest (maximizing Net Present Value) in district heating power plants and at the same time minimizing carbon emissions.
- Main constraint: coverage of a given heat demand.
- Investment decision through mixed integer linear programming.
- Investment optimization as extension of unit commitment. (schedule optimization)
- Deterministic approach.
Relevant answer
Answer
Thanks a lot Mr. Temitayo Bankole !
  • asked a question related to Algorithms
Question
7 answers
I have started programming the binary bat algorithm to solve the knapsack problem. I have a misunderstanding of the position concept in binary space:
Vnew = Vold + (Current - Best) * f;
S = 1 / (1 + Math.exp(-Vnew));
X(t+1) = { 1 if S > Rnd, 0 if Rnd >= S }
The velocity-updating equation uses both the position from the previous iteration (Current) and the global best position (Best). In the continuous version of BA, the position is a real number, but in the binary version the position of a bat is represented by a binary number; in the knapsack problem it means whether the item is selected or not. In the binary version, a transfer function is used to turn the real-valued velocity into a binary decision. I'm confused whether the position in BBA is a binary or a real number. If binary, then (Current - Best) can only be 1 - 0, 0 - 1, 1 - 1, etc.; and if a real number, then how do we get the continuous representation when there is no continuous equation to update the position (in the original BA, the position-updating equation is X(t+1) = X(t) + Vnew)?
Relevant answer
Answer
Unless you are doing just an "exercise", I discourage you from trying "new" metaheuristics for knapsack. Besides being a widely studied problem, there are very good knapsack-specific algorithms. Check David Pisinger's webpage for codes and test instance generators.
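That said, to address the confusion itself: in the standard binary BA the positions stay binary (so Current - Best is indeed always in {-1, 0, 1}) and only the velocity is real-valued; there is no continuous position at all, because each bit is re-sampled through the sigmoid rather than accumulated. A per-bat sketch of the update from the question (names are illustrative):

```python
import math
import random

def bba_update(x, v, best, f, rng=random.random):
    """One binary-bat update for a single bat.
    x, best: binary lists (current and global-best positions).
    v: real-valued velocity list; f: frequency for this step."""
    new_x, new_v = [], []
    for xi, vi, bi in zip(x, v, best):
        vi = vi + (xi - bi) * f              # uses the *binary* positions
        s = 1.0 / (1.0 + math.exp(-vi))      # sigmoid transfer to [0, 1]
        new_x.append(1 if rng() < s else 0)  # bit is re-sampled, not accumulated
        new_v.append(vi)
    return new_x, new_v
```

Note that the previous bit value influences the next one only through the velocity term, which is what the transfer function is for.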
  • asked a question related to Algorithms
Question
8 answers
Can anyone point me to an algorithm or a model that can detect body movement from the accelerometer data on a wristband.
Relevant answer
Answer
I suspect that your question is too general. My wrist movement is somewhat independent of my body movement. For example, I may be playing a guitar and so my wrist will be moving in a certain manner. However, while playing the guitar I may be either, sitting down, standing up, walking or dancing to the music that I am playing. From this you can see that my wrist movements do not define my body movements and so one can not be inferred from the other.
Another situation where this "confusion" occurs can be manufactured if you have a chair that rotates. You can then sit in the chair with your wrist still while a friend quickly rotates the chair and (if you don't get dizzy) then if you look at the reading from your wrist sensor you will see that it seems to indicate that the wrist is moving (and it is but so is your whole body) and so you can not assume that a movement registered at the wrist only comes from the wrist.
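With those caveats in mind, if a coarse wrist-worn movement flag is still useful as a baseline, one simple approach is to threshold the variance of the acceleration magnitude over fixed windows (the window length and threshold below are arbitrary placeholders to be tuned on real data):

```python
import statistics

def detect_movement(ax, ay, az, window=50, threshold=0.05):
    """Flag each window as 'moving' if the variance of the acceleration
    magnitude exceeds a threshold. Returns one boolean per full window."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in zip(ax, ay, az)]
    flags = []
    for i in range(0, len(mags) - window + 1, window):
        flags.append(statistics.pvariance(mags[i:i + window]) > threshold)
    return flags
```

Using the magnitude makes the flag insensitive to wrist orientation, but, as noted above, it cannot distinguish wrist movement from whole-body movement.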
  • asked a question related to Algorithms
Question
13 answers
By analysis we mean that we are studying existing algorithms: examining their features and applications, performing performance analysis and measurement, studying their complexity, and improving them.
Relevant answer
Answer
time and space consumption during program execution
  • asked a question related to Algorithms
Question
7 answers
The total variation (TV) problem is an old problem, and there are multiple algorithms for solving it.
What is the best algorithm for solving the TV problem? What is the advantage of that algorithm? What is its complexity? Thanks.
Relevant answer
Answer
You may try ADMM (Alternating Direction Method of Multipliers).
  • asked a question related to Algorithms
Question
3 answers
I am looking for a colleague who can help with the review of a manuscript.
Relevant answer
Answer
Thanks, Simone. This is about a review for the Colombian EIA journal.
  • asked a question related to Algorithms
Question
5 answers
Cuckoo search clustering algorithm.
Relevant answer
Answer
This might be helpful
function [z] = levy(n,m,beta)
% Input parameters
% n    -> Number of steps
% m    -> Number of Dimensions
% beta -> Power law index
% Note: 1 < beta < 2
% Output
% z    -> 'n' levy steps in 'm' dimension

num = gamma(1+beta)*sin(pi*beta/2);          % used for Numerator
den = gamma((1+beta)/2)*beta*2^((beta-1)/2); % used for Denominator
sigma_u = (num/den)^(1/beta);                % Standard deviation
u = random('Normal',0,sigma_u^2,n,m);
v = random('Normal',0,1,n,m);
z = u./(abs(v).^(1/beta));
end
  • asked a question related to Algorithms
Question
7 answers
I have recently been working on applying ML algorithms to time series prediction (forecasting demand for an item). I am a bit confused, so my questions are as follows:
1. Does the performance of these algorithms usually depend on the nature of the dataset, i.e., descriptive characteristics like mean, max, min, etc.? The machine learning algorithms I applied include Gaussian processes, backpropagation neural networks, and support vector machines.
2. I found that Gaussian processes perform better than the rest, but I am not too convinced by my results since the RMSE is still high for all algorithms I tried. Below is a summary of the data I have.
item_1
Min = 100
Max = 40200
Mean = 11849
RMSE = 11756.5
I used WEKA forecasting package i have attached a time series plot
Relevant answer
Answer
The quality of prediction depends on the relationship between the attributes and your label. You may want to check the R^2 statistic; this will tell you how much of the variation is explained by the attributes. Usually, some sort of feature engineering is required, along with the use of some kernel, when there isn't a linear relationship.
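Both statistics mentioned here are easy to compute directly, e.g.:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: fraction of the variance in
    y_true that is explained by the predictions."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error, in the same units as the label."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5
```

Always predicting the mean gives R^2 = 0, and negative values mean the model does worse than the mean; an RMSE close to the label's own spread (as in the question, where RMSE is near the mean demand) points in the same direction.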
  • asked a question related to Algorithms
Question
3 answers
There are so many algorithms useful in the field of data science.
The attached picture presents a summary of some of the most famous algorithms in this field.
I am aware that the best way to learn something is by doing it during a project. But could anybody tell me which algorithm is best to start with, before having any project in mind?
I appreciate any suggestion/thought.
Relevant answer
Answer
I recommend Artificial Neural Network (ANN)
  • asked a question related to Algorithms
Question
5 answers
The digital revolution promotes individual agency and imagination, innovation and new forms of production, entrepreneurship and knowledge. At the same time, it poses new threats and limitations for human rights, civil liberties, social justice and solidarity (e.g., big data oligarchy, digital feudalism, and generational conflicts). Which is the most proper ethical and political approach to digital technology and platform economy?
Relevant answer
Answer
Relative to any “ethical and political approach” to digital technology or in governance in general, I feel when humanity explores the constructal law, one day, a future generation of leaders may begin to realize that no man-made law or philosophy can change a physical law in nature; with such maturity, the application of the constructal law may help in ways to develop new social/political tools to enhance the evolution in governance, digital technology, and the economies throughout the world: