Science topic

Algorithm Analysis - Science topic

Explore the latest questions and answers in Algorithm Analysis, and find Algorithm Analysis experts.
Questions related to Algorithm Analysis
  • asked a question related to Algorithm Analysis
Question
5 answers
Hello everyone,
Could you recommend courses, papers, books or websites about algorithms that support missing values?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Answer
In real-world data, there are instances where a particular element is absent for various reasons, such as corrupt data, failure to load the information, or incomplete extraction. Handling missing values is one of the greatest challenges faced by analysts, because making the right decision on how to handle them produces robust data models. Let us look at different ways of imputing missing values.
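A minimal sketch in R of the simplest option, mean imputation (hypothetical toy data; for more principled imputation, packages such as mice are commonly used):
# Mean imputation for the numeric columns of a data frame (toy data)
df <- data.frame(x = c(1, 2, NA, 4), y = c(10, NA, 30, 40))
impute_mean <- function(col) {
  if (is.numeric(col)) col[is.na(col)] <- mean(col, na.rm = TRUE)
  col
}
df_imputed <- as.data.frame(lapply(df, impute_mean))
df_imputed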
  • asked a question related to Algorithm Analysis
Question
10 answers
Dear colleagues,
I was wondering whether there are any methods or software that I can use to analyze the muscle cross-sectional area in my H&E histology images?
I tried to use ImageJ thresholding; unfortunately, it does not work efficiently for me. Thus, I was wondering whether there are any currently established methods.
Thank you very much in advance.
Relevant answer
Answer
I also want to calculate muscle fiber CSA in H&E-stained sections using ImageJ. Thresholding cannot be used, but it can be done manually, although that is a time-consuming process. I want to know how many fibers should be randomly selected in a sample field and how, how many sample fields are needed, and what magnification should be used?
  • asked a question related to Algorithm Analysis
Question
3 answers
I'm looking for a method to find the best path among several alternative paths. I know Dijkstra's algorithm finds the shortest path, but my problem seems to be a longest-path problem, and the paths form a tree, so I don't know which method applies. An example of my problem: we have a financial manager; before becoming a financial manager, one must be an Assistant Financial Manager (A) or an Assistant Budget Manager (B). Suppose the Assistant Financial Manager got an optimal performance score (4 points) and the Assistant Budget Manager got a potential score (3 points). Before becoming an assistant financial manager one must be a financial and accounting supervisor (C), and before becoming an assistant budget manager one must be a general affairs supervisor (D). If the financial and accounting supervisor got a potential score (3 points) and the general affairs supervisor also got a potential score (3 points), how do we determine the best path that maximizes the performance score? Imagine the paths as Z - A(4) - C(3) and Z - B(3) - D(3).
Relevant answer
Answer
Optimal Design of Truss Structures with Frequency Constraints: A Comparative Study of DE, IDE, LSHADE, and CMAES Algorithms (published 2021)
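The career-path question above is a longest-path problem on a small DAG, which can be solved by simple recursion/dynamic programming; a sketch in R using the scores from the example:
# Longest (maximum-score) path in the example DAG: Z -> A(4) -> C(3), Z -> B(3) -> D(3)
edges <- data.frame(from  = c("Z", "Z", "A", "B"),
                    to    = c("A", "B", "C", "D"),
                    score = c(4, 3, 3, 3))
best <- function(node) {                  # max total score reachable from `node`
  out <- edges[edges$from == node, ]
  if (nrow(out) == 0) return(0)
  max(out$score + sapply(out$to, best))   # plain recursion is fine on a DAG this small
}
best("Z")   # 7, i.e. Z - A(4) - C(3)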
  • asked a question related to Algorithm Analysis
Question
3 answers
If we somehow transform a binary search tree into a form where no node other than the root may have both a right and a left child, where the nodes in the right sub-tree of the root may only have right children, and vice versa, such a configuration of the BST is inherently sorted, with its root approximately in the middle (in the case of nearly complete BSTs). To do this we need to do reverse rotations. Unlike AVL and red-black trees, where rotations are done to make the tree balanced, we would do reversed rotations.
I would like to explain the pseudocode and logical implementation of the algorithm through the images in the following PDF. The algorithm first sorts the left subtree with respect to the root and then the right subtree. These two subparts will be opposite to each other, that is, left would interchange with right. For simplicity I have taken a BST whose right subtree, with respect to the root, is sorted.
To improve the complexity compared to tree sort, we can augment the above algorithm. We can add a flag to each node, where 0 stands for a normal node and 1 for a node that has a non-null right child in the original unsorted BST. The nodes with flag 1 have an entry in a hash table whose key is their pointer and whose value is the rightmost node. For example, node 23's pointer would map to 30.5's pointer. Then we would not have to traverse all the nodes in between during the iteration. If we have 23's pointer and 30.5's pointer, we can do the required operation in O(1). This will bring down the time complexity compared to tree sort.
Please review the algorithm and give suggestion if this algorithm is of any use and if I should write a research paper on it.
Relevant answer
Answer
This reminds me of the idea of quicksort where we take a pivot element and "move" all elements smaller to the left of the pivot element and all other elements to the right (and then do this recursively).
If we take the pivot element as root node of the tree we would get a somewhat similar result (they become more and more similar as quicksort advances).
Ok, but we can move stuff faster in the tree as we can move entire subtrees.
Two thoughts on that:
1. not 100% sure, but one might find a worst case where each subtree has to be moved/sorted (I guess)?
2. quicksort works on a plain array, the tree must be "constructed". As inserting takes O(log n) (wikipedia) and we have to insert all n elements, we would get time O(n * log n) to insert them all. Quicksort also takes O(n * log n) time (worst case is n^2 but e.g. mergesort worst case is n * log n).
  • asked a question related to Algorithm Analysis
Question
6 answers
Is it possible that at "some moments" the computation time of a loop or algorithm exceeds the sampling time of the system?
Or is that completely wrong?
Relevant answer
Answer
It depends on your case study and method; there are methods for such situations. For instance, if you are working with MPC methods, you can check multiple horizons and consider the second or third optimal control input instead of the first one. The following paper may help you with this problem.
  • asked a question related to Algorithm Analysis
Question
3 answers
Dear users.
I need to construct plots in Excel and R. I have 2000 data points, and these are large numbers (millions, because this is profit).
Could you tell me, please, how I can do it in Excel and R so that the years 2000-2005 appear on the horizontal axis?
Excel:=series(;'63'!$A$1:$A$204;'63'!$D$1:$D$346;1)
R:
data <- read.table(file = "2.txt", header = TRUE)
head(data)
plot(data,type = "l", col = "red")
Doesn't work correctly
Thank you very much
Relevant answer
Answer
I agree with other colleagues about using the "ggplot2" library for data visualization. These links may be useful:
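A minimal ggplot2 sketch along those lines, assuming (hypothetically) a data frame with a year column and a profit column in millions:
library(ggplot2)
d <- data.frame(year = 2000:2005,
                profit = c(1.2e6, 1.5e6, 1.1e6, 1.8e6, 2.0e6, 2.3e6))
ggplot(d, aes(x = year, y = profit)) +
  geom_line(colour = "red") +
  scale_x_continuous(breaks = 2000:2005) +       # years on the horizontal axis
  scale_y_continuous(labels = scales::comma) +   # readable labels for large numbers
  labs(x = "Year", y = "Profit")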
  • asked a question related to Algorithm Analysis
Question
11 answers
I have 2 equations, and in each equation I have a coefficient of interest.
Let's say:
eq1: Y = bX + 2cX^2 + 4
eq2: Y = aX^2 + 2dX + 12
Given that the values of a and b change over time,
I am aiming to record the values of a in list A and of b in another list B,
and from their behaviour I want to draw conclusions about the strength of these coefficients.
But I am a bit confused about how to draw such conclusions and what the most representative way is to monitor how a and b change over time.
Or is it better to monitor the increase or decrease of a coefficient by summing the differences of the recorded values over time?
I have more coefficients to be monitored, and they may have a value or not. My aim is to build a meaningful classification that can categorise coefficients as useful or not.
Relevant answer
Answer
Please explore correlation and regression analysis. These may be helpful.
  • asked a question related to Algorithm Analysis
Question
1 answer
Dear colleagues,
I would like to ask anybody who works with neural networks to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 data points in each sequence), and I would like to construct the forecast for each next month using a training sample size of 5 months.
That means I need to shift each time by one month with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output.
The loop is:
shift <- 4
number_forecasts <- 1
d <- nrow(maxmindf)
k <- number_forecasts
for (i in 1:(d - shift + 1))
{
The code:
require(quantmod)
require(nnet)
require(caret)
prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
df=data.frame(prov,temp,soil,rain)
mydata<-df
attach(mydata)
mi<-mydata
scaleddata<-scale(mi$prov)
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
go<-maxmindf
forecasts <- NULL
forecasts$prov <- 1:22
forecasts$predictions <- NA
forecasts <- data.frame(forecasts)
# Training and Test Data
trainset <- maxmindf()
testset <- maxmindf()
#Neural Network
library(neuralnet)
nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)
nn$result.matrix
plot(nn)
#Test the resulting output
temp_test <- subset(testset, select = c("temp","soil", "rain"))
head(temp_test)
nn.results <- compute(nn, temp_test)
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)
}
minval<-min(x)
maxval<-max(x)
minvec <- sapply(mydata,min)
maxvec <- sapply(mydata,max)
denormalize <- function(x,minval,maxval) {
x*(maxval-minval) + minval
}
as.data.frame(Map(denormalize,results,minvec,maxvec))
Could you tell me, please, what I should put in trainset and testset (using the loop) and how to display all predictions so that the results are shifted by one with a test sample of 5?
I am very grateful for your answers
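A minimal sketch of the sliding-window part only (not the full pipeline above), assuming the normalized data frame maxmindf from the question with 22 rows and a 5-month window:
# Rolling-origin training: fit on rows i..i+4, forecast row i+5
library(neuralnet)
window <- 5
n <- nrow(maxmindf)
predictions <- rep(NA, n)
for (i in 1:(n - window)) {
  train_idx <- i:(i + window - 1)          # 1:5, 2:6, ..., 17:21
  test_idx  <- i + window                  # the next month
  trainset  <- maxmindf[train_idx, ]
  testset   <- maxmindf[test_idx, , drop = FALSE]
  nn <- neuralnet(prov ~ temp + soil + rain, data = trainset,
                  hidden = c(3, 2), linear.output = FALSE, threshold = 0.01)
  predictions[test_idx] <- compute(nn, testset[, c("temp", "soil", "rain")])$net.result
}
predictions                                # 17 one-step-ahead forecasts (rows 6..22)
Note that a network may struggle to converge on only 5 training rows; this is purely a sketch of the loop structure.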
  • asked a question related to Algorithm Analysis
Question
4 answers
Dear colleagues,
I have 400 data points (monthly) and I need to construct the forecast for each next month using a learning (training) sample of 50.
That means I need to shift each time by one month with 50 elements:
train<-1:50, train<-2:51, train<-3:52,...,train<-351:400.
Could you tell me please,which function can I write in the program for automatic calculation?
Maybe, for() loop?
I am very grateful for your answers
Relevant answer
Answer
embed( data, 50 )
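embed() builds the lagged matrix in one call; for comparison, an explicit loop version of the same sliding window (a sketch, assuming a numeric series x of length 400):
x <- rnorm(400)                       # hypothetical monthly series
n <- length(x); w <- 50
# Row i of `windows` holds the training sample x[i:(i+49)]: 1:50, 2:51, ..., 351:400
windows <- t(sapply(1:(n - w + 1), function(i) x[i:(i + w - 1)]))
dim(windows)                          # 351 x 50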
  • asked a question related to Algorithm Analysis
Question
6 answers
Hello everyone,
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Answer
Mathematics helps AI scientists solve challenging, deeply abstract problems using traditional methods and techniques known for hundreds of years. Math is needed for AI because computers see the world differently from humans: where humans see an image, a computer sees a 2D or 3D matrix. With the help of mathematics we can feed these representations to a computer, and linear algebra is the tool for processing such data sets.
Here you can find good sources for this:
  • asked a question related to Algorithm Analysis
Question
5 answers
I have in mind that logic is mainly about thinking, abstract thinking, particularly reasoning. Reasoning is a process structured in steps, where one conclusion is usually based on a previous one and can at the same time be the base, the foundation, of further conclusions. Despite the mostly intuitive character of the algorithm as a concept (even without taking into account Turing and Markov theories/machines), it has a step-by-step structure, and the steps are connected, one would even say logically connected (when the algorithms are correct). The difference is, of course, the formal character of the logical proof.
Relevant answer
Dear Mirzakhmet, could you specify what exactly you have in mind when saying "to start from examples practicing deductive style…"?
When referring to studying deductive structures as a way of supporting the development of algorithmic skills, I have not necessarily considered any particular approach to studying them. Probably a good introduction could be presenting problems where a deductive approach is a recommended way of solving them.
  • asked a question related to Algorithm Analysis
Question
12 answers
Recently, I have seen in many papers that reviewers ask authors to provide the computational complexity of the proposed algorithms. I was wondering what the formal way to do that would be, especially for short papers where pages are limited. Please share your expertise regarding reporting the computational complexity of algorithms in short papers.
Thanks in advance.
Relevant answer
Answer
You have to explain the time and space complexity of your algorithm.
  • asked a question related to Algorithm Analysis
Question
4 answers
If I create a website for customers to sell services, what ranking algorithms can I use to rank the gigs on the first page? In other words, just as Google uses HITS and PageRank to rank webpages, what ranking algorithms can I employ for a services-based website when I create mine?
Any assistance, or references to scientific papers that can help me?
Relevant answer
Answer
Dear Mostafa Elsersy,
You can look at the following data:
What are algorithms used by websites?
An algorithm refers to the formula or process used to reach a certain outcome. In terms of online marketing, an algorithm is used by search engines to rank websites on search engine results pages (SERPs).
What is SEO algorithm?
As mentioned previously, the Google algorithm uses keywords as part of the ranking factors to determine search results. The best way to rank for your specific keywords is by doing SEO. SEO is essentially a way to tell search engines that a website or web page is about a particular topic.
How do you rank a website?
Follow these suggestions to improve your search engine optimization (SEO) and watch your website rise the ranks to the top of search-engine results.
  1. Publish Relevant, Authoritative Content. ...
  2. Update Your Content Regularly. ...
  3. Metadata. ...
  4. Have a link-worthy site. ...
  5. Use alt tags.
  • asked a question related to Algorithm Analysis
Question
63 answers
Are there any new innovations other than First-Come, First-Served (FCFS), Shortest-Job-First (SJF), Round Robin (RR), or mixed developments of those?
Relevant answer
Answer
A process scheduler is a program that distributes tasks to the CPU based on different scheduling algorithms. Process scheduling can be done in six different ways.
There are three more algorithms besides First-Come, First-Served (FCFS), Shortest-Job-First (SJF), Round Robin (RR), or mixed developments of those:
  • Priority Scheduling
  • Shortest Remaining Time
  • Multiple-Level Queues Scheduling
Priority Scheduling
Priority scheduling is a non-preemptive approach commonly employed in batch systems. Each process is assigned a priority; the highest-priority process is executed first, followed by the next, and so on. Processes with the same priority are executed on a first-come, first-served basis. Memory requirements, time restrictions, or any other resource constraint can influence prioritization.
Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion, but it may be preempted by a newly ready job with a shorter remaining time. It cannot be implemented in interactive systems where the required CPU time is unknown. It is commonly used in batch settings where short jobs need to be prioritized.
Multiple-Level Queues Scheduling
Multiple-level queues are not a standalone scheduling algorithm. They group and schedule jobs of a similar kind using existing algorithms. Each queue, for example, can have its own scheduling algorithm and be assigned a priority.
  • asked a question related to Algorithm Analysis
Question
5 answers
Wyoming recently recognized a new legal entity called a DAO LLC, a decentralized autonomous organization, which can be managed by an algorithm. The Wyoming law is largely silent on the legal requirements for the algorithmic manager and its design, features, or policies. Any suggestions?
Relevant answer
Answer
Interesting question, but it is outside my field.
  • asked a question related to Algorithm Analysis
Question
3 answers
Hello dear researchers.
I ran the SiamFC++ algorithm. It showed better results than the other algorithms, but its accuracy is still not acceptable. One of my ideas is to run an object detection algorithm first and then draw a bounding box around the object I want to track. I think that by doing this I have indirectly labeled the selected object, so the results and accuracy should improve compared to the previous case. Do you think this is correct? Is there a better idea to improve the algorithm?
Thanks for your tips
Relevant answer
Answer
You're welcome Shahrzad Khalifeh Mehrjardi. Yes, precisely.
  • asked a question related to Algorithm Analysis
Question
4 answers
Why Particle Swarm Optimization works better for this classification problem?
Can anyone give me any strong reasons behind it?
Thanks in advance.
Relevant answer
Answer
Arash Mazidi PSO is also used in various classification problems. I particularly use it for phishing website datasets.
  • asked a question related to Algorithm Analysis
Question
9 answers
I am analysing data collected by a questionnaire survey, which consists of socio-demographic questions as well as Likert-scale questions related to satisfaction with public transport. I am developing a predictive model to predict public perceptions of using public transport based on socio-demographic characteristics and satisfaction level.
I could not find any related reference to cite. Therefore, I want to make sure that my study is heading in the right direction.
Relevant answer
Answer
Shahboz Sharifovich Qodirov Thanks for your suggestions.
  • asked a question related to Algorithm Analysis
Question
8 answers
Dear all,
I have written a paper about predicting the separation efficiency of hydrocyclone separators by means of machine learning algorithms. To this end, I have collected 4000 data points comprising 14 inputs (14 features of the hydrocyclone separator) and one target (separation efficiency).
I have been asked by the journal to include an analysis of the computational complexity of the applied algorithms (ANFIS, MLP, LSSVM, RBF), in terms of either run time or big-O notation. However, as I understand it, the run time should increase commensurately with the size of the data (or the number of inputs). But, strangely, I have found that as the data size increases, the run time of the respective models decreases dramatically. So I am left puzzled about how to report this, because as far as I know big-O curves cannot have a negative slope (run time decreasing as data size increases).
From an engineering point of view, this can be justified by bearing in mind that the algorithms can better recognize the target when more characteristics of the hydrocyclones (inputs) are fed in. But this remains a paradox when analyzed from an AI engineering perspective.
I appreciate it if you help me out with this matter
thanks
Relevant answer
Answer
Thanks Mr Allen,
I understand your point,
So, in conclusion, big-O notation applies once all the features are given and the number of data samples varies.
Say, if the features are sorted in different rows and samples are given in columns, increasing the number of columns each time can be used for big-O notation.
Anyway, I don't want to dwell on big-O notation, primarily because I believe it fails to capture my case, as it concerns run time versus features.
I have explained the results in detail in my paper, and I linked the plunge in run time with increasing data features to the complexity of the structure (hydrocyclone) itself and to the ease with which the models comprehend the target given a larger number of inputs.
Thanks for your time
  • asked a question related to Algorithm Analysis
Question
4 answers
I want to compare 4-5 docking tools. What would be the essential way to do this, as I know little about algorithms? Also, if you know of any existing comparative research in this regard, sharing it would be a great help. With best regards.
Relevant answer
Answer
As far as I know, every docking tool follows a unique algorithm to produce a docking score.
For example, PatchDock and FireDock emphasize only the protein-ligand interaction (for any docking: blind docking, grid-box docking). On the other hand, HADDOCK emphasizes the pocket binding affinity score, HawkDock emphasizes intramolecular affinity, and some other docking protocols like ClusPro and PRODIGY emphasize the IC50 value.
Actually, we can't call a docking protocol/algorithm good or bad only by analyzing the global binding energy of the docking (the docking score).
There are other factors, like the IC50 value of the ligand, the size of the ligand, molecular simulation, the size of the docking pocket, the presence/absence of an allosteric side, whether the ligand binds to the pocket or not, and the interaction residues.
For example, if you find a docking score of -100 kJ, it can be a good/best docking score. Still, if the 3D position of the ligand tells you that the ligand does not reside well in the binding pocket, or the IC50 value of the ligand is not that good, or the binding interaction residues exclude histidine, then the docking is not satisfactory, no matter how big the docking score is!
Best Regards
  • asked a question related to Algorithm Analysis
Question
12 answers
Hello scientific community,
Have you noticed the following?
[I note that when a new algorithm is proposed, most researchers rush to improve it and apply it to the same and other problems. I ask now: why keep the original algorithm if it suffers from weaknesses, and why the need for a new algorithm if there is an existing one that solves the same problems? I understand if the new algorithm solves an unsolved problem, then it is welcome; otherwise, why?]
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) beyond the existing ones?
I think we need to organize the existing metaheuristic algorithms and state the pros and cons of each one, and the problems solved by each one.
The repeated algorithms must disappear, and the overly complex ones also.
The dependent algorithms must disappear.
We need to benchmark the MHs, similar to a benchmark test suite.
Also, we need to determine the unsolved problems, and if you would like to propose a novel algorithm, try to solve an unsolved problem; otherwise, please stop.
Thanks, and I look forward to a reputable discussion.
Relevant answer
Answer
The last few decades have seen the introduction of a large number of "novel" metaheuristics inspired by different natural and social phenomena. While metaphors have been useful inspirations, I believe this development has taken the field a step backwards, rather than forwards. When the metaphors are stripped away, are these algorithms different in their behaviour? Instead of more new methods, we need more critical evaluation of established methods to reveal their underlying mechanics.
  • asked a question related to Algorithm Analysis
Question
5 answers
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computers, biomedical, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
Relevant answer
Answer
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as for some outlier that no one has thought of.
  • asked a question related to Algorithm Analysis
Question
5 answers
If I have 2 nodes, each with given coordinates (x, y), can I calculate the distance between the nodes using an algorithm, for example Dijkstra's or A*?
Relevant answer
Answer
Please read the paper "Energy and Wiener Index of Total Graph over Ring Zn". In this paper I calculated the distance between two nodes.
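Note that if the two nodes are directly connected, or you simply need the straight-line distance between their coordinates, no graph-search algorithm is required; Dijkstra or A* only matter when the path must pass through intermediate nodes. In R, for example:
# Euclidean distance between two points (x1, y1) and (x2, y2)
euclid <- function(p, q) sqrt(sum((p - q)^2))
euclid(c(0, 0), c(3, 4))   # 5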
  • asked a question related to Algorithm Analysis
Question
8 answers
I am working on research to demonstrate the advantages of using artificial intelligence in the entertainment sector, and I would like more information about the algorithms most used to predict whether a TV series, film, TV show, or piece of music will succeed, and so demonstrate the competitive advantage their use can bring to companies in the sector.
Relevant answer
Answer
The most important factors are those that produce quality films: the story, casting talented actors who are popular in their field, and well-connected advertising networks throughout the country where the film is produced; all of these matter for the success of a good film.
Moreover, quality-oriented technicians have to contribute seriously.
The location where the film is going to be shot also matters, because it will attract audiences in general.
  • asked a question related to Algorithm Analysis
Question
3 answers
Hello,
I would like to implement a network that consists of 16 nodes (see the figure below). After I have implemented it, I want to combine the network with a heuristic, namely the nearest neighbor heuristic. Given that I have the costs between the nodes, the vehicle in the middle should travel and trace the shortest route.
How can I proceed? Can anyone help me implement such a network and combine the heuristic with it using MATLAB or Java?
  • asked a question related to Algorithm Analysis
Question
2 answers
I would like to implement a network that consists of a few nodes (see the figure below). After I have implemented it, I want to combine the network with a heuristic, namely the nearest neighbor heuristic. Given that I have the costs between the nodes, the vehicle in the middle should travel and trace the shortest route.
How can I code it? I need code to implement a network and combine the heuristic with it, using MATLAB.
I found a code (see figure) that approximately matches my problem, but that code computes the nearest neighbor directly, whereas I want to divide the task myself and have it compute the nearest neighbor afterwards.
Relevant answer
Answer
Based on the available code, make the following modification:
Define a weighted distance as a function of the costs. The rest of the code is suitable for your objective.
But the part about dividing tasks is a little ambiguous. Please explain more.
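The asker wants MATLAB, but to illustrate the nearest neighbor heuristic itself, here is a language-neutral sketch in R (with a hypothetical random cost matrix for 16 nodes):
# Nearest neighbour tour over a cost matrix, starting from node `start`
nearest_neighbour <- function(cost, start = 1) {
  n <- nrow(cost)
  tour <- start
  unvisited <- setdiff(1:n, start)
  while (length(unvisited) > 0) {
    last <- tail(tour, 1)
    nxt  <- unvisited[which.min(cost[last, unvisited])]   # cheapest unvisited node
    tour <- c(tour, nxt)
    unvisited <- setdiff(unvisited, nxt)
  }
  tour
}
set.seed(1)
m <- matrix(runif(16 * 16), 16, 16); diag(m) <- Inf       # hypothetical costs
nearest_neighbour(m)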
  • asked a question related to Algorithm Analysis
Question
7 answers
Hello!
Namely, I will place a vehicle at node A and will then use Dijkstra's algorithm to find the shortest route. For example, A is going to B; I would like to make a timer that shows me how long it takes to go from A to B. How can I implement a timer in Java?
Relevant answer
Answer
I mean in which class? Wang Ting Dong
  • asked a question related to Algorithm Analysis
Question
12 answers
Any recommendations of a scientific journal to which to submit a paper on operations research applying linear programming and vehicle routing (VRP) using the B&B algorithm?
Relevant answer
Answer
You can ask your thesis advisor about what journal they think would be best to submit your work. It is hard to suggest a journal for you without seeing the actual paper.
  • asked a question related to Algorithm Analysis
Question
3 answers
The NFL theorem is valid for algorithms trained on a fixed training set. However, the general behavior of algorithms on expanded or open datasets has not been proved yet. Could you share your opinion on this or suggest some related papers?
Relevant answer
Answer
I think it is very complicated to derive such general statements over all algorithms. I would suggest that, even with changes to the data characteristics, phenomena such as concept drift still have an impact, and therefore properties might be subject to change in the future. Since the application domain of ML models is mostly predicting this future, it should be taken into account that, theoretically, this infinite space will not be available, since the future and its implications for algorithmic properties remain hidden.
Some further reading and discussion on the NFL with regard to ML:
  • asked a question related to Algorithm Analysis
Question
4 answers
Hi, I am looking to deblur the vehicle license plate attached below. However, it is at a very low resolution. I want to detect the license plate area, deblur it, and read the license plate number. Solutions and suggestions will be appreciated.
Relevant answer
  • asked a question related to Algorithm Analysis
Question
18 answers
Hi,
Whenever the Scikit-learn function sklearn.model_selection.train_test_split is used, it is recommended to set the parameter random_state=42 to produce the same results across different runs.
Why do we use the integer 42?
Can we use another number?
thanks
Relevant answer
Answer
Yes, you can use a different number. It will produce a different outcome compared to using 42, which can be used to evaluate your experiment in distinct scenarios.
Also, you are probably using '42' because of this (from wikipedia): The number 42 is, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, the "Answer to the Ultimate Question of Life, the Universe, and Everything" =)
  • asked a question related to Algorithm Analysis
Question
5 answers
Say, for argument's sake, I have two or more images with different degrees of blurring.
Is there an algorithm that can (faithfully) reconstruct the underlying image based on the blurred images?
Best regards and thanks
Relevant answer
Answer
  • asked a question related to Algorithm Analysis
Question
1 answer
Hello. 
I want to ask several questions about the Random Forest algorithm.
  1. Is it possible to calculate a Random Forest prediction manually?
  2. If we can calculate it manually, can you please teach me how with the data sample I give? Assume I want to get the price for: Demands: normal, Duration: 30 min, Distance: 2 km.
  3. Will the result we get from calculating manually match the one we get by running the program?
The reason I ask these questions is that I'm trying to make a pricing system app with Random Forest regression from sklearn. The app will use that algorithm to calculate the price based on several variables. I also want to know the mathematical theory behind random forests and how they exactly work, so I can calculate manually and compare the result I get with the result from running the app.
Please help me. I really need to solve this ASAP.
I will appreciate any help. Thank you, and sorry for my bad English.
Relevant answer
Answer
Alvira Arisa , I believe it is possible, but cumbersome. See for example:
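One way to "calculate manually" is to fit a very small forest and trace each tree's splits by hand; a sketch with the randomForest package in R (hypothetical pricing data mirroring the question):
library(randomForest)
d <- data.frame(demand   = factor(c("low", "normal", "high", "normal", "low", "high")),
                duration = c(10, 30, 60, 20, 15, 45),   # minutes
                distance = c(1, 2, 5, 1.5, 1, 4),       # km
                price    = c(5, 9, 20, 7, 6, 16))
set.seed(42)
rf <- randomForest(price ~ demand + duration + distance, data = d, ntree = 3)
getTree(rf, k = 1, labelVar = TRUE)   # splits of tree 1: follow them by hand,
                                      # then average the 3 trees' leaf values
newcase <- data.frame(demand = factor("normal", levels = levels(d$demand)),
                      duration = 30, distance = 2)
predict(rf, newcase)                  # should match your hand calculation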
  • asked a question related to Algorithm Analysis
Question
4 answers
I developed an algorithm for a recommender system and I want to check whether the difference between my algorithm and the baselines is significant. I did 5-fold cross-validation once. Is that enough, or should I repeat the 5-fold cross-validation many times and check the difference? Is there a minimum or maximum number of repetitions for the statistical tests?
Relevant answer
Answer
It depends a bit on your data set size. With 5 fold cross validation you build your model on 80% of your data and predict the remaining 20%. You repeat this 5 times with different parts left out. If your data set is large enough, leaving out 20% probably doesn't matter. If not, ignoring 20% is a lot. You might be better off with 10 fold CV, ignoring only 10% at a time.
What you will also see is that if you repeat the 5-fold CV with different random seeds, you get slightly different outcomes from the CV procedure. This is because of the random split into 5 folds. You should repeat this 5-fold CV a large number of times and report, e.g., the mean and s.d. of your results. This prevents you from looking at a fluke of your results.
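A sketch of that repeated-CV procedure with the caret package (hypothetical data; a linear model stands in for the recommender):
library(caret)
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 2 * d$x1 - d$x2 + rnorm(100)
# 5-fold CV repeated 20 times; caret aggregates over the 100 resamples
ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 20)
fit  <- train(y ~ ., data = d, method = "lm", trControl = ctrl)
fit$results   # mean RMSE etc., with SDs, across all resamples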
  • asked a question related to Algorithm Analysis
Question
10 answers
Hi,
I'm a software engineering undergraduate.
I have a dataset which includes the numerical values of variables x, y, z and an output r.
I want to create an algorithm which uses a neural-network prediction method to find the relationship/correlation between x, y, z and predict r, and/or I also want to forecast the r value.
What type of algorithms should I look into?
Thank you.
Relevant answer
Answer
You can use random forests and SVMs for forecasting.
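A minimal sketch of that suggestion in R (hypothetical data; x, y, z and r named as in the question):
library(randomForest)
set.seed(1)
d <- data.frame(x = runif(200), y = runif(200), z = runif(200))
d$r <- 3 * d$x + d$y^2 - d$z + rnorm(200, sd = 0.1)   # hypothetical relationship
idx   <- sample(1:200, 150)                           # simple train/test split
rf    <- randomForest(r ~ x + y + z, data = d[idx, ])
preds <- predict(rf, d[-idx, ])
sqrt(mean((preds - d$r[-idx])^2))                     # test RMSE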
  • asked a question related to Algorithm Analysis
Question
25 answers
I want to detect circular (blob-like) objects in an image.
1) What are all the possible features which can be used to recognize these objects?
2) How can I find the eigenvalues for each component in the image alone?
Any MATLAB code would be really helpful.
Relevant answer
Answer
  • asked a question related to Algorithm Analysis
Question
3 answers
I am working on a project where I want to implement time stretching and pitch shifting with deep learning methods as a first step. I tried searching the internet but I haven't come across many papers or articles to start with. If anyone has a clue, a little help would be really appreciated.
Relevant answer
Answer
Hello!
I personally found the following papers quite helpful for getting a basic overview of audio processing in conjunction with deep learning.
  • asked a question related to Algorithm Analysis
Question
3 answers
I need the time complexity, in the form of asymptotic order, of different watermarking techniques like LSB, DFT, DCT, DWT, SVD, WHT, etc. Is there any paper in which a detailed comparison is carried out?
Relevant answer
Answer
The time consumed by information hiding systems such as watermarking depends on the method used to embed the watermark logo. The pre-processing stage of the cover image also plays an important role in determining the time complexity.
  • asked a question related to Algorithm Analysis
Question
8 answers
I would like to compare the performance of some regression algorithms according to different performance criteria, including Root Mean Squared Error (RMSE), coefficient of determination (R2), and Mean Absolute Percentage Error (MAPE). I found a problem with MAPE.
For example, target values and predicted values correspond to t and y, respectively.
t=[1 , 2 , 3 , 4, 5, 6, 7, 8, 9, 10]
y=[2 , 1, 3 , 4.5 , 5 , 7 , 7, 8 , 9 ,12]
The following performance criteria are obtained:
MAPE: 19.91
RMSE: 0.85
R2: 0.91
While the RMSE and R2 are acceptable, the MAPE is around 19.9%, which is too high. My question is: what is the main reason for this high value of MAPE (compared to the acceptable values of RMSE and R2)?
Thanks in advance
Relevant answer
Answer
Pooria Behnam Hope it was helpful. Let me recommend our new detailed framework for forecast evaluation and visualization:
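To see why the MAPE is high while RMSE and R2 look fine: MAPE divides each absolute error by the target value, so errors on small targets dominate. With the numbers from the question:
t <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
y <- c(2, 1, 3, 4.5, 5, 7, 7, 8, 9, 12)
ape <- abs(y - t) / abs(t)     # per-point absolute percentage errors
round(100 * ape, 1)            # 100.0 50.0 0.0 12.5 0.0 16.7 0.0 0.0 0.0 20.0
mean(ape) * 100                # ~19.9
The errors of one unit on the targets 1 and 2 alone contribute 15 percentage points of the 19.9% MAPE, even though the same errors barely move the RMSE.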
  • asked a question related to Algorithm Analysis
Question
5 answers
There are several ways to validate quantitative research findings with the help of different programs like SPSS and other statistical tools. I want to ask whether there is any method to validate or prove the results of qualitative research findings, like interviews, and how to include such qualitative data in a research article.
  • asked a question related to Algorithm Analysis
Question
2 answers
Construct an interaction network by co-expression or protein interaction, and then screen key genes according to the network topology in Cytoscape. According to the analyses of some workers, important hub genes can be screened out by algorithmic analysis of the weighted reconnection of the network structure and nodes. Through 12 algorithms, hub genes can be found more accurately. If all 12 algorithms consider a gene important, it indicates that the gene is well worth investigating. But the most critical genes from these twelve algorithms are sometimes not stable.
Relevant answer
Answer
Hi Ruiqi,
I'm not quite sure if it's relevant to what you're working on or not. But anyway, if you made any network with the purpose of finding the right candidates, you may consider using the IVI, an integrative algorithm, for finding the most influential nodes within the network! It is included as a function in the Influential R package (https://cran.r-project.org/web/packages/influential/index.html).
  • asked a question related to Algorithm Analysis
Question
9 answers
Most of the recent books on longitudinal data analysis I have come across mention the issue of unbalanced data but do not actually present a solution for it. Take, for example:
  • Hoffman, L. (2015). Longitudinal analysis: modeling within-person fluctuation and change (1 Edition). New York, NY: Routledge.
  • Liu, X. (2015). Methods and applications of longitudinal data analysis. Elsevier.
Unbalanced measurements in longitudinal data occur when the participants of a study are not measured at exactly the same points in time. We gathered big, complex, and unbalanced data. The data come from arousal level, which was measured every minute (automatically) for a group of students while engaging in learning activities. Students were asked to report what they felt during the activities. Since not all students were participating in similar activities at the same time, and not all of them were active in reporting their feelings, we ended up with unstructured and uncontrolled data that do not reflect systematic, regular longitudinal data. Add to this the complexity of the arousal level itself. Most longitudinal data analyses assume linearity (the outcome variable changes positively/negatively with the predictors). Clearly that does not apply to our case, since the arousal level fluctuates over time.
My questions:
Can you please point me to a useful resource (e.g., book, article, forum of experts) for analyzing unbalanced panel data?
Do you have yourself any idea on how one can handle unbalanced data analysis?
Relevant answer
Answer
I do not have experience with this level of complexity in repeated measures, but it seems to me that you actually have multiple episodes - what you call events. So I am suggesting a three-level structure with an identifier at level 3 for individuals, at level 2 for episodes within individuals, and at level 1 for repeated occasions within a specific episode: the minute-by-minute recording of repeated measurements. You could have variables measured at all three levels. The general approach is considered here:
Fiona Steele (2008) Multilevel Models for Longitudinal Data, Journal of the Royal Statistical Society, Series A (Statistics in Society), Vol. 171, No. 1, pp. 5-19.
  • asked a question related to Algorithm Analysis
Question
5 answers
Hey Colleagues,
I hope you are healthy and safe during this quarantine.
For any machine learning model, we evaluate the performance of the model based on several points, and the loss is among them. We all know that an ML model:
1- Underfits when the training loss is significantly greater than the testing loss.
2- Overfits when the training loss is significantly smaller than the testing loss.
3- Performs very well when the training loss and the testing loss are very close.
My question is directed at the third point. I am running a DL model (1D CNN), and I have the following results (note that my initial loss was 2.5):
- Training loss = 0.55
- Testing Loss = 0.65
Nevertheless, I am not quite sure whether the results are acceptable, since the training loss is a bit high (0.55). I tried to lower the training loss by giving the model more complexity (increasing the number of CNN layers and MLP layers); however, this is a very tricky process, as whenever I increase the complexity of the architecture, the testing loss increases and the model easily overfits.
Finally, to say that our model performs very well, should we get a low training loss (say, less than 0.1), or is my case still considered good too?
I look forward to hearing from you,
Thanks and regards,
Relevant answer
Answer
That seems quite close really.
If you want to get them even closer, you could add a Dropout/SpatialDropout layer, which would help prevent overfitting.
  • asked a question related to Algorithm Analysis
Question
6 answers
I am fairly certain that convex algorithms are faster/more efficient than non-convex algorithms.
But so far I have been unable to find a source for this claim. Can you help me find a trustworthy source?
Relevant answer
Answer
Your question is ill-posed.
Perhaps you mean "Are most methods devised for convex problems faster than those for non-convex ones?". In that case I think the answer is "Yes".
There is no "convex algorithm", but an "algorithm for convex programs".
  • asked a question related to Algorithm Analysis
Question
11 answers
Is there any algorithm? Many citations still go unrecognized, yet they are at least entered or fed in manually by the respective authors. Why doesn't it use a universal, simple method to update its citations? I am also aware of the very large database of Google Scholar! I may be ignorant!
Relevant answer
Answer
There is no solution yet for this matter.
  • asked a question related to Algorithm Analysis
Question
5 answers
Why is it better to implement the same algorithm but without loops?
Example:
If I have an array of values, I want to substitute them into a specific equation.
I can do that in 2 ways:
1) loop over each value of the array;
2) write out all the array elements manually (if the size of the array is small), one under another, then write the mentioned equation and run the program for the calculation.
So why is it better to avoid 1) and follow 2)?
Do loops have a bad effect on controllers? Do they cause additional lag compared to the manual technique?
I kept thinking about this; I noticed that there is always a difference in performance, but I don't know why. 2) is always better than 1)!
Relevant answer
Answer
Unrolling the loop (i.e. explicitly coding each iteration without using for...while...until structures) is (almost) always processed faster, assuming the code is already loaded into RAM. This happens because the CPU doesn't have to check if the end condition is met (this is of greater importance if the condition is a long and complex logical statement) and increment the counter/pointer (this is generally less important) for each iteration. This is especially true for (some) interpreted languages since, in this case, we have to account for the additional time and process burden of interpreting the loop instructions.
But... is it worth it? It depends: if the code inside the loop is substantial (i.e., "CPU-heavy"), the speed gains could be marginal, at best.
Unrolling loops always wastes more memory. If the unrolled loop does not fit into the available RAM (i.e., it has to be loaded from disk/flash in parts), the result may even be worse! This rarely happens on PCs, of course... but might be a factor in modestly resourced micro-controller solutions.
So... unrolling loops is only worth it in very particular circumstances.
Generally, it should be avoided unless strictly critical processing times/speeds need to be met.
Nowadays, with plentiful machine resources, it's almost always a bad idea: it's wasteful and produces terribly inelegant, ugly, unconcise and difficult-to-read code.
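A toy illustration of the trade-off in R (an interpreted language, where the per-iteration overhead mentioned above is especially visible); purely a sketch:
x <- runif(8)
looped   <- function(x) { s <- 0; for (i in seq_along(x)) s <- s + x[i]; s }
unrolled <- function(x) x[1] + x[2] + x[3] + x[4] + x[5] + x[6] + x[7] + x[8]
system.time(for (k in 1:1e5) looped(x))     # loop bookkeeping on every pass
system.time(for (k in 1:1e5) unrolled(x))   # straight-line code, no bookkeeping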
  • asked a question related to Algorithm Analysis
Question
5 answers
Dissipation is the reflection of irreversibility in the processes of nature. How does it reflect itself in various laws and principles?
Relevant answer
Answer
It's very simple: the more scattering channels there are, and the greater the bandwidth of these channels, the more dissipation, and vice versa.
  • asked a question related to Algorithm Analysis
Question
2 answers
I am a materials science undergrad, interested in the algorithms for numerical integration of the equations of motion in computational materials science, as in molecular dynamics. It is said that time-reversal symmetry is essential for such simulations, while classic integration schemes like the trapezoidal, Simpson's, or Weddle's methods handle the previous and next time steps differently. So the Verlet algorithm is used instead.
Position Verlet indeed adds the previous and next time steps and maintains time-reversal symmetry, but velocity Verlet does not. Why is time-reversal symmetry not important for velocity? Is it because time-reversal symmetry is meaningful only for position and its even derivatives, as in Newton's law of motion?
My knowledge of numerical analysis is only at an introductory level, and I have not deeply studied Lagrangians, chaos theory, group theory, or hyper-dimensional geometry yet.
Relevant answer
Answer
Both satisfy the time-reversibility because the discrete flow map, F(dt), is time reversible:
F(dt) = exp(i L1 dt/2) exp(i dt L2) exp(i L1 dt/2)
where dt is the time step and
iL1 = a(t) (\partial / \partial v)
iL2 = v(t) (\partial / \partial x)
a = acceleration
v = velocity
x = position
Applying this operator (F(dt)), the velocity-Verlet algorithm is obtained. Now, check it
F(dt)F(-dt) = 1
If we take
F(dt) = exp(i L2 dt/2) exp(i dt L1) exp(i L2 dt/2)
Then, the position Verlet algorithm is obtained, which is time-reversible too, F(dt)F(-dt)=1.
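For concreteness, a minimal velocity-Verlet step written in R for a 1D harmonic oscillator (a(x) = -x is a hypothetical force), showing the F(dt)F(-dt) = 1 property numerically:
accel <- function(x) -x                  # hypothetical force: harmonic oscillator
verlet_step <- function(x, v, dt) {
  v <- v + 0.5 * dt * accel(x)           # half kick
  x <- x + dt * v                        # full drift
  v <- v + 0.5 * dt * accel(x)           # half kick with the updated position
  list(x = x, v = v)
}
# Step forward with dt, then backward with -dt: returns exactly to the start
s1 <- verlet_step(1, 0, 0.01)
s2 <- verlet_step(s1$x, s1$v, -0.01)
c(s2$x, s2$v)                            # (1, 0)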
  • asked a question related to Algorithm Analysis
Question
2 answers
Hi,
We developed a subpixel image registration algorithm for finding sub-pixel displacements, and I want to test it against existing methods. I have compared it with the subpixel image registration algorithm by Guizar et al. and also with the algorithm developed by Foroosh et al. Does anyone know of any other accurate algorithms for subpixel image registration (preferably with open-source code)?
Thank you.
Relevant answer
Answer
Hi Amir,
Hope you have found the solution.
  • asked a question related to Algorithm Analysis
Question
49 answers
Dear all,
I am interested in the idea of using AI algorithms in analysing civil structures. Could I connect with any researchers with the same interest? I appreciate all the advice!
Regards,
Hoan Nguyen
Relevant answer
Answer
Artificial intelligence and genetic algorithms (and their variants, like Genetic Programming and Gene Expression Programming) are very good tools to use in your research for optimization purposes. I would suggest you understand the basic concepts first (you can find a lot of material on this on the internet). You must also have some data in hand before starting (your own data, or secondary data from already published literature). ANN and GEP are the most common techniques used in civil engineering; you can go to www.sciencedirect.com and search for these keywords.
I have also attached some articles on GEP and ANN which are mostly related to structures.
Hope it helps.
  • asked a question related to Algorithm Analysis
Question
6 answers
We have some research works related to algorithm design and analysis. Most computer science journals focus on current trends such as machine learning, AI, robotics, blockchain technology, etc. Please suggest some journals that publish articles on core algorithmic research.
Relevant answer
Answer
There are several journals for algorithms. Some of them are:
Algorithmica
The Computer Journal
Journal of Discrete Algorithms
ACM Journal of Experimental Algorithmics
ACM Transactions on Algorithms
SIAM Journal on Computing
ACM Computing Surveys
Algorithms
Closely related:
Theoretical Computer Science
Information Systems
Information Sciences
ACM Transactions on Information Systems
Information Retrieval
International Journal on Foundations of Computer Science
Related:
IEEE Transactions on Information Theory
Information and Computation
Information Retrieval
Knowledge and Information Systems
Information Processing Letters
ACM Computing Surveys
Information Processing and Management
best regards,
rapa
  • asked a question related to Algorithm Analysis
Question
2 answers
I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X, ideally one that is also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention in the answer, that would be great.
Relevant answer
Answer
I've found a description with algorithm implemented in R for you. I hope it helps: http://www.joyofdata.de/blog/testing-linear-separability-linear-programming-r-glpk/
Another nice description with implementation in python you can check as well: https://www.tarekatwan.com/index.php/2017/12/methods-for-testing-linear-separability-in-python/
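In the spirit of the linked LP approach, a sketch in R with the lpSolve package: two finite sets are linearly separable iff there exist w, b with w.x + b >= 1 on one set and <= -1 on the other (toy data; free variables are encoded as differences of non-negative parts, since lp() assumes variables >= 0):
library(lpSolve)
# Toy sets in {0,1}^2: A = {(0,0),(0,1)}, B = {(1,0),(1,1)} (separable on x1)
A <- matrix(c(0,0, 0,1), ncol = 2, byrow = TRUE)
B <- matrix(c(1,0, 1,1), ncol = 2, byrow = TRUE)
n <- ncol(A)
rows_A <- cbind( A, -A,  1, -1)   # w.x + b >= 1 for points in A
rows_B <- cbind(-B,  B, -1,  1)   # w.x + b <= -1 for points in B
res <- lp(direction    = "min",
          objective.in = rep(0, 2 * n + 2),   # pure feasibility problem
          const.mat    = rbind(rows_A, rows_B),
          const.dir    = rep(">=", nrow(A) + nrow(B)),
          const.rhs    = rep(1, nrow(A) + nrow(B)))
res$status == 0                    # TRUE: feasible, hence linearly separable
w <- res$solution[1:n] - res$solution[(n + 1):(2 * n)]
b <- res$solution[2 * n + 1] - res$solution[2 * n + 2]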
  • asked a question related to Algorithm Analysis
Question
4 answers
According to Shannon's classical information theory, H(X) >= H(f(X)) for the entropy H(X) of a random variable X and a deterministic function f. Is it possible that an observer who doesn't know the function (which produces statistically random data) can take the output of the function and consider it random (entropy)? Additionally, if one were to use entropy as a measure between two states, what would that 'measure' be between the statistically random output and the original pool of randomness?
Relevant answer
Answer
@Nader Chmait
I imagine the approximate entropy notion may apply as an exercise in calculation. I am not sure that entropy and approximate entropy reflect the same meaning.
Approximate entropy seems closer to statistics, whereas Shannon entropy is closer to information content.
  • asked a question related to Algorithm Analysis
Question
2 answers
Here I wondering how to analyze the EEG signal in order to detect lie. I hope to study about creating machine learning model using pre-processed EEG signals to detect lie. Help me on how can i pre-process EEG signal. what algorithm to use
Relevant answer
Answer
I am not an expert in the field but as far as I know the EEG component used for lie detection is an ERP called P300. Here I attach a conference paper in which they used SVM method for lie detection. I hope you will find it useful.
  • asked a question related to Algorithm Analysis
Question
2 answers
Is there any polynomial (reasonably efficient) reduction which makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?
Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties which are easy to prove for the binary case, and a reduction as asked above could help generalize those properties to arbitrary alphabets.
Relevant answer
Answer
The DP algorithm for LCS finds the solution in O(n*m) time, which is polynomial. The reduction you are looking for is the problem itself: if you solve this problem for an alphabet of arbitrary size, you solve it for size 2.
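For reference, the alphabet-agnostic DP the answer refers to, as a minimal sketch in R:
# Length of the longest common subsequence; works for any alphabet
lcs_length <- function(a, b) {
  n <- length(a); m <- length(b)
  L <- matrix(0, n + 1, m + 1)
  for (i in 1:n) {
    for (j in 1:m) {
      L[i + 1, j + 1] <- if (a[i] == b[j]) L[i, j] + 1 else max(L[i, j + 1], L[i + 1, j])
    }
  }
  L[n + 1, m + 1]
}
lcs_length(strsplit("GATTACA", "")[[1]], strsplit("TACATTA", "")[[1]])   # 4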
  • asked a question related to Algorithm Analysis
Question
3 answers
Is there a standard formula for computing the evaluation fitness in the Cuckoo Search algorithm? Or can any formula be used to evaluate the fitness?
Hope you can help me.
Relevant answer
Answer
This should be defined based on the particular requirements.
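In other words, the fitness in Cuckoo Search, as in most metaheuristics, is simply the objective function of your particular problem; there is no formula specific to the algorithm. A trivial sketch:
# Fitness = objective value, e.g. when minimizing the sphere function
fitness <- function(x) sum(x^2)
fitness(c(0.5, -1, 2))   # 5.25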
  • asked a question related to Algorithm Analysis
Question
5 answers
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am searching for realistic benchmark problems. Could you please point me to such benchmark problems?
Thank you very much in advance.
Relevant answer
Answer
Yes, I see. Transaction processing has also a constraint on response time. Optimization then takes more of its canonical form: Your goal "as fast as possible" (this refers to network traversal or RTT) becomes the objective, subject to constraints on benchmark performance which are typically transaction response time and an acceptable range of resource utilization, including link utilization. Actual benchmarks known to me that accomplish such optimization are company-proprietary (I have developed some but under non-disclosure contract). I do not know of very similar standard benchmarks but do have a look at TPC to see how close or how well a TPC standard benchmark would fit your application. I look forward to seeing other respondents who might know actual public-domain sample algorithms.
  • asked a question related to Algorithm Analysis
Question
3 answers
Consider m data points in an n-dimensional data space; we want to fit a minimal hypersphere which covers all these data points.
Finding the center and radius of such a minimal hypersphere is an NP task. I am interested in efficient algorithms, whether exact, approximation, or randomized algorithms.
Relevant answer
Answer
Could the center of mass of the system of points be the center of the sphere, with the most distant point from the center determining the radius?
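The centroid-plus-farthest-point construction suggested above yields a valid covering sphere but not, in general, the minimal one. A common cheap refinement is Ritter's approximation; a sketch in R (points are rows of X):
# Ritter-style approximate bounding sphere
bounding_sphere <- function(X) {
  dist2 <- function(u, v) sum((u - v)^2)
  p <- X[1, ]
  q <- X[which.max(apply(X, 1, dist2, v = p)), ]   # farthest point from p
  r <- X[which.max(apply(X, 1, dist2, v = q)), ]   # farthest point from q
  center <- (q + r) / 2
  radius <- sqrt(dist2(q, r)) / 2
  for (i in seq_len(nrow(X))) {                    # grow to cover any outliers
    d <- sqrt(dist2(X[i, ], center))
    if (d > radius) {
      radius <- (radius + d) / 2
      center <- center + (d - radius) / d * (X[i, ] - center)
    }
  }
  list(center = center, radius = radius)
}
set.seed(1)
bounding_sphere(matrix(rnorm(200), ncol = 2))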
  • asked a question related to Algorithm Analysis
Question
9 answers
Vector autoregressive (VAR) models are widely used to study the response to shocks in independent variables. But what if you want to study more than one shock, where each shock has a different effect? If you think the effect is temporary and want to estimate the duration of each shock, I mean how long it takes for the response variable to return to its normal state after each shock, and you also want to see whether the effect is instant or lagged, which models could be used?
Could you suggest models other than VARs that fit this case well?
Thanks in advance
Relevant answer
Answer
Hi, I will try to elaborate on the previous two answers, which seem perfect to me.
If you want to analyze the dynamic response of a variable to a given shock, the usual way to go is to implement an impulse-response analysis. The "impulse" is your exogenous shock. The "response" is the dynamic "effect" on the variable of your interest. In stationary environments, the "response" should decay to zero as time goes by. Accordingly, some researchers report the maximum effect, or the cumulative effect. Some other researchers are concerned with the short-term effect, others with longer-term effects, and you name it. Given that the response is supposed to die out, you can analyze the "duration" of the effect as the "last period for which the effect was different from zero at the 5% significance level", for instance. This of course requires computation of the impulse-response confidence bands, which are readily available in many packages.
Impulse-response analysis is not restricted to VARs. It can be implemented in general systems, including ARIMA models, DSGEs, and multiple-equation models. Furthermore, the "impulse" or initial shock can be modeled as you wish. It can be defined on only one variable, or on multiple variables at the same time. This last shock is a "multivariate" shock. Moreover, sometimes it is interesting to analyze what happens when you are faced with three shocks in a row of the same magnitude. So, in summary, you need to focus on the implementation of impulse-response analysis.
The case of contemporaneous effects is trickier. One way to go is to model the problem using a structural VAR, as suggested by Tihana.
You can also model "contemporaneous" effects with a traditional VAR but using shocks that are contemporaneous. One direct way to do this is to use the orthogonalized impulse-response analysis that relies on the structure of the correlation matrix of the error terms to design the type of multivariate shock you will be using.
Interesting discussion, hope it helps!
Best regards!
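A minimal sketch of such an impulse-response analysis with the vars package in R (hypothetical bivariate data):
library(vars)
set.seed(1)
y <- matrix(rnorm(400), ncol = 2, dimnames = list(NULL, c("y1", "y2")))
fit <- VAR(y, p = 2)                       # fit a VAR(2)
ir  <- irf(fit, impulse = "y1", response = "y2",
           n.ahead = 12, boot = TRUE)      # bootstrap confidence bands
plot(ir)   # 'duration' of the shock: last horizon where the bands exclude zero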
  • asked a question related to Algorithm Analysis
Question
3 answers
I have read some scientific papers, and most of them use data dependence tests to analyse their code for parallel optimization purposes. One of the dependence tests is Banerjee's test. Are there other tests that give better results for testing data dependence? And is it hard to test control-dependent code? If we can, what are some of the techniques we can use?
Thank you
Relevant answer
Answer
Hisham Alosaimi sorry, I forgot to share the link for the slides. Here you go:
  • asked a question related to Algorithm Analysis
Question
5 answers
In multi-objective optimization, how do you calculate the average inverted generational distance (IGD) value of each generation over N independent runs? I need to get something similar to the attached picture.
Relevant answer
Answer
You need to calculate the IGD value of the Pareto solutions achieved at each iteration I with respect to the reference set (the optimal Pareto front) considered, and store it in a matrix/variable.
Do the same for the N independent runs, then average the values; you will end up with a separate column holding the average IGD for each iteration. You can then plot these average values.
For an example, see the attached image: with I = 10 and N = 3, you obtain the "Average IGD" values to plot.
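A minimal sketch of that procedure, assuming fronts[r][g] holds the non-dominated objective vectors (as a NumPy array) at generation g of run r, and reference is the optimal Pareto front used as the reference set:

    import numpy as np

    def igd(front, reference):
        # mean distance from each reference point to its nearest
        # solution in the obtained front (lower is better)
        d = np.linalg.norm(reference[:, None, :] - front[None, :, :], axis=2)
        return d.min(axis=1).mean()

    def average_igd(fronts, reference):
        # rows = runs, columns = generations
        igd_matrix = np.array([[igd(f, reference) for f in run]
                               for run in fronts])
        return igd_matrix.mean(axis=0)   # one averaged value per generation

Plotting the output of average_igd against the generation index gives a curve like the one in the attached picture.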
  • asked a question related to Algorithm Analysis
Question
2 answers
I have tried CHOPCHOP and E-CRISPR-TEST, but they gave me different results in terms of genomic off-targets for the same sequence! Which online tool is the most trustworthy?
Relevant answer
Answer
I suggest you use multiple online tools. I tried four: Dharmacon, Genescript, Chopchop and invitrogen (I am not sure whether those companies use the same database), and there are more, but eventually I found common sequences that were ranked high as having few off-targets in multiple tools.
  • asked a question related to Algorithm Analysis
Question
5 answers
I have two data sets (sample size 300). One contains characteristics of a certain group of people, such as education, skills, age, income and so forth. The other contains job requirements, such as education, skills, and job-related problems. I have classified these two data sets separately into different groups using K-means cluster analysis in SPSS. I want to match groups across the two data sets to find which group of people is eligible for which types of jobs. Could you please recommend a suitable matching algorithm for this analysis?
I am not an expert in any programming language; I am looking for software like SPSS or Excel to do this.
Thanks.
Relevant answer
Answer
Sir, I think I have a solution to your problem based on generating functions that can correlate the variables. We can discuss the problem face to face if that suits you. (Thank you)
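If exporting the K-means cluster centres from SPSS to CSV is an option, one standard approach is to match people-clusters to job-clusters on their shared variables (education, skills, ...) with the Hungarian algorithm. A minimal sketch; the file names are hypothetical, and both files are assumed to contain one centroid per row over the same variables:

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    people = np.loadtxt("people_centroids.csv", delimiter=",")  # hypothetical export
    jobs = np.loadtxt("job_centroids.csv", delimiter=",")       # hypothetical export

    cost = cdist(people, jobs)                # centroid-to-centroid distances
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    for p, j in zip(rows, cols):
        print(f"people cluster {p} <-> job cluster {j}")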
  • asked a question related to Algorithm Analysis
Question
18 answers
The brute-force algorithm takes O(n^2) time; is there a faster exact algorithm?
Can you direct me to recent research on this subject, or on the approximate farthest point (AFP) problem?
Relevant answer
Answer
Thanks Vincent; a hybrid of the proposed methods might be better, but there is only one way to find out: experiments!
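For reference, one standard exact speedup: the farthest pair always lies on the convex hull, so it suffices to compare hull vertices. A minimal sketch for 2-D (or higher-dimensional) point sets in general position:

    import numpy as np
    from scipy.spatial import ConvexHull

    def farthest_pair(points):
        # O(n log n) hull + O(h^2) scan over the h hull vertices,
        # usually far faster than the O(n^2) brute force
        hull = points[ConvexHull(points).vertices]
        diff = hull[:, None, :] - hull[None, :, :]
        d2 = (diff ** 2).sum(axis=-1)          # pairwise squared distances
        i, j = np.unravel_index(np.argmax(d2), d2.shape)
        return hull[i], hull[j], float(np.sqrt(d2[i, j]))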
  • asked a question related to Algorithm Analysis
Question
1 answer
I'm implementing this algorithm using Excel to build my grid and VBA programming. I am solving the problem in terms of mass flow (air) and junction (node) pressures. I am not able to get convergence, since the pressures at the nodes always retain a residual deltaP.
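One common remedy for a node-pressure iteration that keeps oscillating around a residual deltaP is under-relaxation. A minimal sketch in Python (the same damping is straightforward to reproduce in VBA), assuming a hypothetical correction(p) helper that returns the full pressure correction computed from each junction's mass imbalance:

    import numpy as np

    def solve_pressures(correction, p0, relax=0.5, tol=1e-6, max_iter=100_000):
        # damped fixed-point iteration: a full correction step often
        # overshoots and leaves an oscillating deltaP; applying only a
        # fraction of it usually restores convergence
        p = np.asarray(p0, dtype=float)
        for _ in range(max_iter):
            dp = correction(p)               # full correction per node
            if np.max(np.abs(dp)) < tol:     # corrections vanished: converged
                return p
            p = p + relax * dp               # apply only a fraction of it
        raise RuntimeError("no convergence; try a smaller relax factor")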
Relevant answer
  • asked a question related to Algorithm Analysis
Question
10 answers
Dear experts,
Hi. I appreciate any information (ideas, models, algorithms, references, etc.) you can provide to handle the following special problem or the more general problem mentioned in the title.
Consider a directed network G including a source s, a terminal t, and two paths (from s to t) with a common link e^c. Each link has a capacity c_e and a transit time t_e. This transit time depends on the amount of flow f_e (inflow, load, or outflow) traversing e, that is, t_e = g_e(f_e), where the function g_e determines the relation between t_e and f_e. Moreover, g_e is a positive, non-decreasing function. Hence, the greater the amount of flow on a link, the longer the transit time for this flow (and thus the lower its speed). Notice that, since links may have different capacities, they may have dissimilar functions g_e.
The question is that:
How could we send D units of flow from s to t through these paths in the quickest time?
Notice: a few relevant works on dynamic networks with flow-dependent transit times exist [D. K. Merchant, et al.; M. Carey; J. E. Aronson; H. S. Mahmassani, et al.; W. B. Powell, et al.; B. Ran, et al.; E. Köhler, et al.]. Among them, the works by E. Köhler et al. are the most appealing (at least for me), as they introduce models and algorithms based on network flow theory. Although they present good models and algorithms ((2+epsilon)-approximation algorithms) for the associated problems, I am looking for better results.
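This is not the dynamic quickest-flow model of the cited works, but a minimal static sketch of the two-path instance may still help fix ideas: all D units traverse the common link e^c, x units take one branch and D - x the other, and we scan for the split that minimizes the slower branch's transit time. The BPR-style g_e functions below are illustrative assumptions:

    import numpy as np

    def best_split(D, g_a, g_b, g_c, steps=1001):
        xs = np.linspace(0.0, D, steps)        # flow sent via branch a
        t_common = g_c(D)                      # every unit crosses e^c
        t1 = np.array([g_a(x) for x in xs]) + t_common
        t2 = np.array([g_b(D - x) for x in xs]) + t_common
        worst = np.maximum(t1, t2)             # slower path's transit time
        k = int(np.argmin(worst))
        return xs[k], worst[k]

    # illustrative increasing transit-time curves (capacities 10, 5, 8)
    split, t = best_split(
        6.0,
        g_a=lambda f: 1 + 0.15 * (f / 10) ** 4,
        g_b=lambda f: 2 + 0.15 * (f / 5) ** 4,
        g_c=lambda f: 1 + 0.15 * (f / 8) ** 4,
    )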
  • asked a question related to Algorithm Analysis
Question
12 answers
For a photovoltaic-based water pumping system.
Relevant answer
Answer
Sanjay is quite right to warn about the motor transients, but I don't think you state what sort of motors they are, and different sorts have very different start-up characteristics. If they are DC motors then you could indeed 'lock up' the system if the inrush current under stall conditions is insufficient to start the motor turning while at the same time being the most that the array can supply. Note that an MPPT of any sort adapts the output impedance of the SOURCE to the line; it doesn't handle the input impedance of the LOAD very well, especially if that is highly variable, as a DC motor's is. If, on the other hand, you use a DC-to-AC inverter and induction motors to drive your pumps, then you should be much better off, but that is more expensive.
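For completeness, a minimal sketch of the classic perturb-and-observe (P&O) MPPT loop; read_panel() and set_duty() are hypothetical hardware interfaces, and the step size is an illustrative choice:

    def mppt_perturb_observe(read_panel, set_duty, duty=0.5, step=0.01,
                             n_steps=1000):
        # climb the panel's P-V curve: keep stepping the converter duty
        # cycle in the direction that increased power, reverse otherwise
        set_duty(duty)
        v, i = read_panel()
        p_prev = v * i
        for _ in range(n_steps):
            duty = min(max(duty + step, 0.0), 1.0)
            set_duty(duty)
            v, i = read_panel()
            p = v * i
            if p < p_prev:        # power dropped: reverse the perturbation
                step = -step
            p_prev = p
        return duty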
  • asked a question related to Algorithm Analysis
Question
2 answers
I am searching for an implementation of an algorithm that constructs three edge-independent trees from a 3-edge-connected graph. Any response will be appreciated. Thanks in advance.
Relevant answer
Answer
Dear Imran,
I suggest you look at the following links on this subject.
-EXPLORING ALGORITHMS FOR EFFECTIVE ... - Semantic Scholar
-Graph-Theoretic Concepts in Computer Science: 24th International ...
-Expanders via Random Spanning Trees
Best regards
  • asked a question related to Algorithm Analysis
Question
9 answers
What is the difference between the real time and the inertial time of a numerical computation of physical phenomena?
Relevant answer
Answer
Neither term makes sense. There is only proper time, which is invariant under global Lorentz transformations (those that map one inertial frame to another) and transforms in a well-defined way under local Lorentz transformations.
  • asked a question related to Algorithm Analysis
Question
5 answers
I have a data set that contains in-game player actions and interactions (stored as metrics). I want to create a recognition system that contains a set of predefined rules and takes the collected metrics as inputs. As output, the system will determine the player type. How can I categorize players based on the collected data and the predefined rules? What are the possible algorithms/approaches?
PS: The data set contains only 23 entries (not enough to train the recognition system).
Many thanks.
Relevant answer
Answer
Hello Nabila,
First of all, the data set has far too few data points; hardly any classifier will work well on this. So my suggestion is to collect more observations for better classification accuracy.
Thanks,
Sobhan
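Since the question mentions predefined rules, a purely rule-based classifier (which needs no training data) may be the more workable route with 23 entries. A minimal sketch using Bartle-style player types; the metric names and thresholds are illustrative assumptions, not derived from the data set:

    def classify_player(m):
        # hand-written rules checked in priority order; thresholds are
        # illustrative, not fitted to the 23-entry data set
        if m["kills"] > 50 and m["deaths"] < 10:
            return "killer"
        if m["areas_visited"] > 30:
            return "explorer"
        if m["chat_messages"] > 100:
            return "socializer"
        return "achiever"

    print(classify_player({"kills": 60, "deaths": 5,
                           "areas_visited": 12, "chat_messages": 3}))  # killer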
  • asked a question related to Algorithm Analysis
Question
7 answers
I have gone through definitions of the term "asymptotic", and in one place I found the following:
A line that continually approaches a given curve but does not meet it at any finite distance.
I want to ask whether the above definition is OK. If so, I have a question: in the representation of Big-Oh notation, the two functions f(n) and g(n) are represented by curves and NOT by lines. So how can we use the above definition to understand the asymptotic notations?
Relevant answer
Answer
Dear Muhammad Yasir,
"Asymptotic" and "asymptotically close to" are treated as synonymous in the time-complexity terminology of computer science.
Consider f(n) = n^3 + 4n^2 + n and g(n) = n^3 - 4n^2 + n, defined for all n > 3. Clearly the functions are not identical. However, for large integer values of n, the term that dominates both functions is n^3. Further, lim_{n -> infinity} f(n)/g(n) and lim_{n -> infinity} g(n)/f(n) both equal 1; thus, the functions are asymptotically equivalent. Even if lim_{n -> infinity} f(n)/g(n) = alpha for some positive real alpha, they are still treated as equivalent: note that in this case lim_{n -> infinity} g(n)/f(n) = 1/alpha, and because the right-hand side is also a constant, this condition suffices. That is, f(n) = n^3 + 4n^2 + n, 2g(n), g(n)/2, and k*f(n) (for any positive k) are all equivalent.
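A quick numeric check of this equivalence:

    f = lambda n: n**3 + 4 * n**2 + n
    g = lambda n: n**3 - 4 * n**2 + n

    for n in (10, 100, 1000, 10_000):
        print(n, f(n) / g(n))
    # ratios ~ 2.31, 1.083, 1.008, 1.0008 -> tending to 1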
I hope that this helps.
Regards.