
# Algorithm Analysis - Science topic

Explore the latest questions and answers in Algorithm Analysis, and find Algorithm Analysis experts.

Questions related to Algorithm Analysis

Hello everyone,

Could you recommend courses, papers, books or websites about algorithms that support missing values?

Thank you for your attention and valuable support.

Regards,

Cecilia-Irene Loeza-Mejía

Dear colleagues,

I was wondering whether there are any methods or software I can use to analyze the muscle cross-sectional area in my H&E histology images?

I tried ImageJ thresholding, but unfortunately it does not work well for me, so I was wondering whether there are any currently established methods.

Thank you very much in advance.

I am looking for a method to find the best path among several alternative paths. I know Dijkstra's algorithm finds the shortest path, but my problem seems to be finding the longest (maximum-score) path, and the paths form a tree, so I do not know which method applies. For example: we have a Financial Manager position. Before becoming Financial Manager, one must be an Assistant Financial Manager (A) or an Assistant Budget Manager (B). Suppose the Assistant Financial Manager has an optimal performance score (4 points) and the Assistant Budget Manager has a potential score (3 points). Before becoming Assistant Financial Manager, one must be a Financial and Accounting Supervisor (C), and before becoming Assistant Budget Manager, one must be a General Affairs Supervisor (D). If the Financial and Accounting Supervisor has a potential score of 3 points and the General Affairs Supervisor also has 3 points, how do we determine the best path that maximizes the performance score? Imagine the paths as Z - A(4) - C(3) and Z - B(3) - D(3).
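The example above can be modeled as a maximum-score (longest) path problem on a tree; unlike on general graphs, where longest path is NP-hard, on trees it is solved by plain recursion. A minimal Python sketch, with the node names and scores taken from the question; the dictionary encoding of the graph is my assumption:

```python
# Maximum-score path in a tree of career steps, found by simple recursion.
# children[node] = list of (child, score gained by moving to that child),
# following the example: Z - A(4) - C(3) versus Z - B(3) - D(3).
children = {
    "Z": [("A", 4), ("B", 3)],   # Z -> Assistant Financial / Assistant Budget Manager
    "A": [("C", 3)],             # A -> Financial & Accounting Supervisor
    "B": [("D", 3)],             # B -> General Affairs Supervisor
    "C": [], "D": [],
}

def best_path(node):
    """Return (total score, path) of the maximum-score path starting at node."""
    best = (0, [node])
    for child, score in children[node]:
        sub_score, sub_path = best_path(child)
        if score + sub_score > best[0]:
            best = (score + sub_score, [node] + sub_path)
    return best

print(best_path("Z"))  # -> (7, ['Z', 'A', 'C'])
```

Dijkstra-style machinery is unnecessary here; a single depth-first pass visits each node once.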

If we somehow transform a binary search tree into a form where no node other than the root may have both a right and a left child, the nodes in the right sub-tree of the root may only have right children, and vice versa, then such a configuration of the BST is inherently sorted, with its root approximately in the middle (in the case of nearly complete BSTs). To do this we need to perform reverse rotations. Unlike AVL and red-black trees, where rotations are done to balance the tree, we would do reversed rotations.

I would like to explain the pseudocode and logical implementation of the algorithm through the images in the following PDF. The algorithm first sorts the left subtree with respect to the root and then the right subtree. These two subparts are mirror images of each other; that is, left interchanges with right. For simplicity I have taken a BST whose right subtree, with respect to the root, is already sorted.

To improve the complexity compared to tree sort, we can augment the above algorithm. We can add a flag to each node, where 0 stands for a normal node and 1 means the node has a non-null right child in the original unsorted BST. The nodes with flag 1 have an entry in a hash table whose key is their pointer and whose value is the right-most node; for example, node 23's pointer would map to 30.5's pointer. Then we would not have to traverse all the nodes in between during the iteration: given 23's pointer and 30.5's pointer, we can do the required operation in O(1). This brings down the time complexity compared to tree sort.

Please review the algorithm and suggest whether it is of any use and whether I should write a research paper on it.

Is it possible that at "some moments" the computation time of a loop or algorithm exceeds the sampling time of the system?

Or is that completely wrong?

Dear users.

I need to construct the plots in Excel and R. I have 2000 data points, and they are large numbers (millions, because this is profit).

Could you please tell me how I can do this in Excel and R, with the goal of showing the years 200-2005 on the horizontal axis?

Excel: =SERIES(;'63'!$A$1:$A$204;'63'!$D$1:$D$346;1)

R:

data <- read.table(file = "2.txt", header = TRUE)

head(data)

plot(data,type = "l", col = "red")

This doesn't work correctly.

Thank you very much

I have 2 equation, and in each equation i have a coefficient of interest:

lets say:

eq1: Y = **b**X + 2cX² + 4

eq2: Y = **a**X² + 2dX + 12

Given that the values of a and b change over time.

I am aiming to record the values of a in a list A and the values of b in another list B.

From their behaviour I want to draw conclusions about the strength of these coefficients.

But I am a bit confused about how to draw such a conclusion and what the most representative way is to monitor how a and b change over time.

Or is it better to monitor the increase or decrease of a coefficient by summing the differences of its recorded values over time?

I have more coefficients to be monitored, and they may or may not have a value. My aim is to build a meaningful classification that can categorise coefficients as useful or not.

Dear colleagues.

I would like to ask, if anybody works with neural networks, to check my loop for the test sample.

I have 4 sequences (with the goal of predicting prov; monthly data, 22 points in each sequence) and I would like to construct a forecast for each next month using a training sample of 5 months.

That means I need to shift each time by one month, with 5 elements:

train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I should get 17 columns as the output.

The loop is:

shift <- 4

number_forecasts <- 1

d <- nrow(maxmindf)

k <- number_forecasts

for (i in 1:(d - shift - k))  # windows 1:5, 2:6, ..., 17:21

{

The code:

require(quantmod)

require(nnet)

require(caret)

prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)

temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)

soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)

rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)

df=data.frame(prov,temp,soil,rain)

mydata<-df

attach(mydata)

mi<-mydata

scaleddata<-scale(mi$prov)

normalize <- function(x) {

return ((x - min(x)) / (max(x) - min(x)))

}

maxmindf <- as.data.frame(lapply(mydata, normalize))

go<-maxmindf

forecasts <- NULL

forecasts$prov <- 1:22

forecasts$predictions <- NA

forecasts <- data.frame(forecasts)

# Training and Test Data

trainset <- maxmindf[i:(i + shift), ]  # 5-month training window

testset <- maxmindf[(i + shift + 1):(i + shift + k), ]  # the following month

#Neural Network

library(neuralnet)

nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)

nn$result.matrix

plot(nn)

#Test the resulting output

temp_test <- subset(testset, select = c("temp","soil", "rain"))

head(temp_test)

nn.results <- compute(nn, temp_test)

results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)

}


minvec <- sapply(mydata,min)

maxvec <- sapply(mydata,max)

denormalize <- function(x,minval,maxval) {

x*(maxval-minval) + minval

}

as.data.frame(Map(denormalize,results,minvec,maxvec))

Could you please tell me what I can add to trainset and testset (using the loop), and how to display all predictions using a loop, so that the results are produced with a shift of one month and a training sample of 5?
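For what it is worth, here is a language-neutral sketch of the rolling-window logic in Python; the fit() placeholder stands in for the neural network (only the loop structure matters, and it translates line by line to R):

```python
# One-step-ahead rolling forecast skeleton: train on a 5-month window,
# predict the next month, then slide the window by one month.
# fit() is a placeholder for any model (e.g. a neural network).

data = [25, 22, 47, 70, 59, 49, 29, 40, 49, 2, 6,
        50, 84, 33, 25, 67, 89, 3, 4, 7, 8, 2]   # the prov series
window = 5

def fit(train):
    # placeholder "model": just remember the mean of the window
    mean = sum(train) / len(train)
    return lambda: mean

predictions = []
for start in range(len(data) - window):        # windows 1:5, 2:6, ..., 17:21
    train = data[start:start + window]
    model = fit(train)
    predictions.append(model())                # forecast for the next month

print(len(predictions))  # -> 17
```

The key point is that both the training slice and the test slice must be recomputed from the loop index on every iteration.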

I am very grateful for your answers

Dear colleagues.

I have 400 data points (monthly) and I need to construct a forecast for each next month using a training sample of 50.

That means I need to shift each time by one month, with 50 elements:

train<-1:50, train<-2:51, train<-3:52, ..., train<-351:400.

Could you please tell me which function I can write in the program for automatic calculation?

Maybe a for() loop?
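Yes, a plain for loop over the window start index is enough. A minimal sketch of generating those windows (shown in Python with 1-based indices to match the question; the same structure carries over to R):

```python
# Generate the sliding training windows 1:50, 2:51, ..., 351:400.
n, window = 400, 50
windows = [(start, start + window - 1) for start in range(1, n - window + 2)]

print(windows[0], windows[-1], len(windows))  # (1, 50) (351, 400) 351
```

Inside the loop body you would then fit the model on each window and store the one-step-ahead forecast.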

I am very grateful for your answers

Hello everyone,

Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?

Thank you for your attention and valuable support.

Regards,

Cecilia-Irene Loeza-Mejía

I have in mind that logic is mainly about thinking, abstract thinking, particularly reasoning. Reasoning is a process structured in steps, where one conclusion is usually based on a previous one, and at the same time it can be the base, the foundation, of further conclusions. Despite the mostly intuitive character of the algorithm as a concept (even without taking into account Turing's and Markov's theories/machines), it has a step-by-step structure, and the steps are connected, one might even say logically connected (when the algorithm is correct). The difference is, of course, the formal character of the logical proof.

Recently, I have seen in many papers that reviewers ask authors to provide the computational complexity of the proposed algorithms. I was wondering what the formal way to do that would be, especially for short papers where pages are limited. Please share your expertise on reporting the computational complexity of algorithms in short papers.

Thanks in advance.

If I create a website for customers to sell services, what ranking algorithms can I use to rank the gigs on the first page? In other words, just as Google uses HITS and PageRank to rank webpages, what ranking algorithms can I employ for a services-based website?

Any assistance, or references to scientific papers that could help me?

Any new innovations other than First-Come, First-Served (FCFS) / Shortest-Job-First (SJF) / Round Robin (RR), or mixed developments of those?

Wyoming recently recognized a new legal entity called a DAO LLC, a decentralized autonomous organization, which can be managed by an algorithm. The Wyoming law is largely silent on the legal requirements for the algorithmic manager and its design, features, or policies. Any suggestions?

Hello dear researchers.

I ran the SiamFC++ algorithm. It showed better results than the other algorithms, but its accuracy is still not acceptable. One of my ideas is to run an object-detection algorithm first and then draw a bounding box around the object I want to track. I think that by doing this I have indirectly labeled the selected object, so the results and accuracy should improve compared to the previous case. Do you think this is correct? Is there a better idea for improving the algorithm?

Thanks for your tips

Why does Particle Swarm Optimization work better for this classification problem?

Can anyone give me any strong reasons behind it?

Thanks in advance.

I am analysing data collected via a questionnaire survey, which consists of socio-demographic questions as well as Likert-scale questions related to satisfaction with public transport. I am developing a predictive model to predict the public's inclination to use public transport based on their socio-demographic characteristics and satisfaction levels.

I could not find any related reference to cite. Therefore, I want to make sure that my study is heading in the right direction.

Dear all,

I have written a paper about predicting the separation efficiency of hydrocyclone separators by means of machine-learning algorithms. To this end, I have collected 4000 data points comprising 14 inputs (14 features of the hydrocyclone separator) and one target (separation efficiency).

I have been asked by the journal to include an analysis of the computational complexity of the applied algorithms (ANFIS, MLP, LSSVM, RBF), in terms of either run-time or big-O notation. However, as I understand it, the run time should increase commensurately with the size of the data (or the number of inputs). Strangely, I have found that as the data size increases, the run time for the respective models decreases dramatically. So I am puzzled about how to report this, because as far as I know big-O curves cannot have a negative slope (run time decreasing as data size increases).

From an engineering point of view, this can be justified by bearing in mind that the algorithms can recognize the target better when more characteristics of the hydrocyclones (inputs) are fed in. But it remains a paradox when analyzed from an AI-engineering perspective.
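If the journal accepts empirical run-time as the complexity analysis, one common way is simply to time the training routine at several data sizes under identical conditions and report the trend. A hedged Python sketch, where train() is a hypothetical stand-in for fitting one of the models:

```python
import time

# Empirical run-time measurement at several data sizes.
# train() is a placeholder workload (quadratic pairwise pass) standing in
# for fitting an actual model such as an MLP.

def train(data):
    return sum(x * y for x in data for y in data)

for n in (100, 200, 400):
    data = list(range(n))
    t0 = time.perf_counter()
    train(data)
    elapsed = time.perf_counter() - t0
    print(f"n={n}: {elapsed:.4f} s")
```

Averaging several repetitions per size, and keeping hyper-parameters fixed across sizes, usually removes apparent "negative slopes" caused by early stopping or caching effects.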

I appreciate it if you help me out with this matter

thanks

I want to compare 4-5 docking tools. What would be the essential way to do this, given that I know little about algorithms? Also, if you know of any existing comparative research in this area, providing it would be a great help. With best regards.

Hello scientific community

Have you noticed the following?

[I note that when a new algorithm is proposed, most researchers rush to improve it and apply it to the same and other problems. So I ask: why keep the original algorithm if it suffers from weaknesses? And why the need for a new algorithm if an existing one already solves the same problems? I understand if the new algorithm solves an unsolved problem, then it is welcome; otherwise, why?]

Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) rather than the existing ones?

I think we need to organize the existing metaheuristic algorithms and document the pros and cons of each one, along with the problems each one has solved.

The duplicated algorithms must disappear, as must the overly complex ones.

The derivative (dependent) algorithms must disappear as well.

We need to benchmark the MHs, similar to a benchmark test suite.

Also, we need to determine the unsolved problems; if you would like to propose a novel algorithm, then try to solve an unsolved problem, otherwise please stop.

Thanks, and I await a reputable discussion.

There is an idea to design a new algorithm for improving the results of software operations in the fields of communications, computing, biomedicine, machine learning, renewable energy, signal and image processing, and others.

So what are the most important ways to test the performance of smart optimization algorithms in general?
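One common protocol is to run the algorithm on standard benchmark test functions (Sphere, Rastrigin, and the like) over repeated independent runs and report summary statistics of the best fitness found. A minimal Python sketch, with random search standing in as a placeholder for the optimizer under test:

```python
import math
import random

# Benchmarking sketch: evaluate a (placeholder) optimizer on standard
# test functions over repeated independent runs.

def sphere(x):
    return sum(v * v for v in x)

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def random_search(f, dim, evals, rng):
    # placeholder optimizer: best of `evals` uniform random samples
    return min(f([rng.uniform(-5, 5) for _ in range(dim)]) for _ in range(evals))

for name, f in [("sphere", sphere), ("rastrigin", rastrigin)]:
    results = [random_search(f, 2, 500, random.Random(run)) for run in range(10)]
    mean = sum(results) / len(results)
    print(f"{name}: mean best fitness over 10 runs = {mean:.3f}")
```

Beyond mean and standard deviation, comparisons are usually backed by statistical tests (e.g. Wilcoxon) against competing algorithms on the same functions and budgets.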

If I have 2 nodes, each with given coordinates (x, y), can I calculate the distance between the nodes using an algorithm, for example Dijkstra's or A*?
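If the two nodes can be connected in a straight line, no graph-search algorithm is needed: the Euclidean formula gives the distance directly. Dijkstra or A* only become relevant when movement is restricted to the edges of a graph. A one-line sketch:

```python
import math

# Straight-line (Euclidean) distance between two nodes given as (x, y).
def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(euclidean((0, 0), (3, 4)))  # -> 5.0
```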

I am working on research to demonstrate the advantages of using artificial-intelligence resources in the entertainment sector, and I would like more information about the algorithms most used to predict whether a TV series, film, TV show, or piece of music will succeed, and thus demonstrate the competitive advantage that their use can bring to companies in the sector.

Hello

That is, I would like to implement a network that consists of 16 nodes (see the figure below). After I have implemented it, I want to combine the network with a heuristic, namely the nearest-neighbour heuristic, given that I have the costs between the nodes. The vehicle in the middle should travel the shortest route.

How can I proceed? Can anyone help me implement such a network and combine the heuristic with it, using MATLAB or Java?

I would like to implement a network that consists of a few nodes (see the figure below). After I have implemented it, I want to combine the network with the nearest-neighbour heuristic, given that I have the costs between the nodes. The vehicle in the middle should travel the shortest route.

How can I code this? I need code to implement the network and combine the heuristic with it, using MATLAB.
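A minimal sketch of the nearest-neighbour heuristic itself, shown in Python for illustration (the cost matrix below is hypothetical); the same logic translates directly to MATLAB or Java:

```python
# Nearest-neighbour heuristic: starting from a depot, repeatedly visit
# the closest unvisited node. `cost` is a hypothetical symmetric matrix;
# replace it with your own inter-node costs.

def nearest_neighbour(cost, start=0):
    n = len(cost)
    route, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda j: cost[last][j])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

cost = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(nearest_neighbour(cost))  # -> [0, 1, 3, 2]
```

Note that nearest neighbour is only a construction heuristic; it gives a reasonable starting tour, not necessarily the shortest one.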

I found some code (see the figure) that approximately matches my problem, but that code computes the nearest neighbour directly, **and I want to divide the task myself and only then have it compute the nearest neighbour.**

Hello!

Namely, I will implement a vehicle at node A and will then use Dijkstra's algorithm to find the shortest route. For example, A is going to B, and I would like to make a timer that shows me how long it takes to go from A to B. How can I implement a timer in Java?

Any recommendations of a scientific journal to which to submit a paper on operations research applying linear programming and vehicle routing (VRP) using the B&B algorithm?

The NFL theorem is valid for algorithms trained on a fixed training set. However, the general characteristics of algorithms on expanded or open datasets have not been proved yet. Could you share your opinions on this or suggest some related papers?

Hi, I am looking to deblur the vehicle license-plate number attached below. However, it is at a very low resolution. I want to detect the license-plate area, deblur it, and read the license-plate number. Solutions and suggestions will be appreciated.

Hi,

Whenever the scikit-learn function sklearn.model_selection.train_test_split is used, it is recommended to pass the parameter random_state=42 to produce the same results across different runs.

Why do we use the integer 42?

Can we use another number?
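As far as I know, 42 is nothing more than a widely copied convention (reportedly a nod to The Hitchhiker's Guide to the Galaxy); any fixed integer makes the split reproducible. A quick sketch:

```python
from sklearn.model_selection import train_test_split

# random_state only fixes the shuffling seed; 42 is a convention, not a
# requirement. Any fixed integer yields a reproducible split.
X = list(range(10))

a1, b1 = train_test_split(X, test_size=0.3, random_state=42)
a2, b2 = train_test_split(X, test_size=0.3, random_state=42)
a3, b3 = train_test_split(X, test_size=0.3, random_state=7)

print(a1 == a2)  # True: same seed, same split
print(a1 == a3)  # usually False: a different seed gives a different split
```

What matters for a paper is that you report whichever seed you used, so results can be reproduced.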

thanks

Say, for argument's sake, I have two or more images with different degrees of blurring.

Is there an algorithm that can (faithfully) reconstruct the underlying image based on the blurred images?

Best regards and thanks

Hello.

I want to ask several questions about the Random Forest algorithm.

- Is it possible to calculate a Random Forest prediction manually?
- If we can calculate it manually, can you please teach me how with the data sample I give? Assume I want to get the price for: Demand: normal, Duration: 30 min, Distance: 2 km.
- Will the result we get from calculating manually match the one we get from running the program?

The reason I ask is that I am trying to make a pricing-system app with Random Forest regression from sklearn. The app will use that algorithm to calculate a price based on several variables. I also want to know the mathematical theory behind random forests and how they work exactly, so I can calculate manually and compare the result with the result from running the app.
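A hedged sketch of such a pricing model with scikit-learn; the feature encoding and all numbers below are invented purely for illustration:

```python
from sklearn.ensemble import RandomForestRegressor

# Hypothetical pricing data: [demand level (0=low, 1=normal, 2=high),
# duration in minutes, distance in km] -> price. All values are invented.
X = [
    [0, 10, 1], [1, 30, 2], [2, 60, 5],
    [0, 20, 2], [1, 45, 3], [2, 90, 8],
    [1, 15, 1], [2, 30, 3],
]
y = [10, 25, 60, 15, 35, 90, 14, 40]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Price for: demand normal (1), duration 30 min, distance 2 km
print(model.predict([[1, 30, 2]])[0])
```

On the manual-calculation question: yes, in principle. Each tree is just a sequence of threshold comparisons on the features, and the forest's prediction is the average of the individual tree predictions, so tracing every tree by hand reproduces the program's result exactly (tedious, but it matches).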

Please help me; I really need to solve this as soon as possible.

I will appreciate any help. Thank you, and sorry for my bad English.

I developed an algorithm for a recommender system, and I want to check whether the difference between my algorithm and the baselines is significant. I did 5-fold cross-validation once. Is that enough, or should I repeat the 5-fold cross-validation many times and check the difference? Is there a minimum or maximum number of runs for the statistical tests?
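One common approach is to repeat the cross-validation (e.g. 5x2 or 10 folds total) and run a paired test over the per-fold scores of the two systems. A minimal sketch, where the score lists are hypothetical placeholders for your algorithm and a baseline evaluated on the same folds:

```python
import math

# Paired t-test over per-fold scores from (repeated) cross-validation.
# The score lists below are hypothetical placeholders.
ours     = [0.81, 0.79, 0.83, 0.80, 0.82, 0.84, 0.78, 0.81, 0.80, 0.83]
baseline = [0.78, 0.77, 0.80, 0.79, 0.78, 0.81, 0.76, 0.79, 0.77, 0.80]

diffs = [a - b for a, b in zip(ours, baseline)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
t = mean / math.sqrt(var / n)                          # paired t statistic

print(f"mean difference = {mean:.3f}, paired t statistic = {t:.2f}")
# Compare |t| against the t distribution with n-1 degrees of freedom.
```

A single 5-fold run gives only 5 paired observations, which is usually considered too few; repeating the procedure yields more folds and a more trustworthy test.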

Hi,

I'm a software engineering undergraduate.

I have a dataset which includes numerical values of variables x, y, z and an output r.

I want to create an algorithm that uses neural networks to find the relationship/correlation between x, y, z and predict r, and I would also like to forecast the r value.

What type of algorithms should I look into?
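A minimal scikit-learn sketch of the neural-network regression workflow; the data below are synthetic (r = x + 2y - z), purely to show the shape of the code:

```python
from sklearn.neural_network import MLPRegressor

# Synthetic data for illustration: r = x + 2*y - z on a small grid.
X = [[x, y, z] for x in range(5) for y in range(5) for z in range(5)]
r = [x + 2 * y - z for x, y, z in X]

# A small multilayer perceptron regressor learning r from (x, y, z).
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, r)

print(model.predict([[1, 2, 3]])[0])  # ideally close to 1 + 2*2 - 3 = 2
```

For forecasting r over time (rather than from concurrent x, y, z), you would look instead at time-series models or at recurrent/temporal neural networks trained on lagged values.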

Thank you.

I want to detect circular, blob-like objects in an image.

1) What are all the possible features that can be used to recognize these objects?

2) How can I find the eigenvalues for each component in the image alone?

Any MATLAB code would be really helpful.

I am working on a project where I want to implement time stretching and pitch shifting with deep-learning methods. I tried searching the internet, but I haven't come across many papers or articles to start with. If anyone has a clue, a little help would be really appreciated.

I need the time complexity, in the form of asymptotic order, for different watermarking techniques such as LSB, DFT, DCT, DWT, SVD, WHT, etc. Is there any paper in which a detailed comparison is carried out?

I would like to compare the performance of some regression algorithms according to different performance criteria, including root mean squared error (RMSE), coefficient of determination (R2), and mean absolute percentage error (MAPE). I ran into a problem with MAPE.

For example, target values and predicted values correspond to t and y, respectively.

t=[1 , 2 , 3 , 4, 5, 6, 7, 8, 9, 10]

y=[2 , 1, 3 , 4.5 , 5 , 7 , 7, 8 , 9 ,12]

The following performance criteria are obtained:

MAPE: 19.91

RMSE: 0.85

R2: 0.91

While the RMSE and R2 are acceptable, the MAPE is around 19.9%, which is too high. My question is: what is the main reason for this high value of MAPE, compared to the acceptable values of RMSE and R2?
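Reproducing the stated numbers makes the cause visible: MAPE divides each error by its target, so errors on small targets dominate. A quick check in Python:

```python
import math

t = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 1, 3, 4.5, 5, 7, 7, 8, 9, 12]

mape = 100 * sum(abs(a - b) / abs(a) for a, b in zip(t, y)) / len(t)
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(t, y)) / len(t))
mean_t = sum(t) / len(t)
r2 = 1 - sum((a - b) ** 2 for a, b in zip(t, y)) / sum((a - mean_t) ** 2 for a in t)

print(f"MAPE={mape:.2f}%  RMSE={rmse:.2f}  R2={r2:.2f}")
# MAPE=19.92%  RMSE=0.85  R2=0.91  (matching the question up to rounding)
# The first sample alone (|1-2|/1 = 100%) contributes 10 points of MAPE:
# percentage errors explode when targets are small, even though the
# absolute errors (and hence RMSE and R2) stay modest.
```

So the high MAPE here reflects the small target values at the start of the series, not poor overall fit.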

Thanks in advance

There are several ways to validate quantitative research findings with the help of programs like SPSS and other statistical tools. I want to ask whether there is any method to validate or prove the results of qualitative research findings, such as interviews, and how to include such qualitative data in a research article.

Construct an interaction network by co-expression or protein interaction, then screen key genes according to network topology in Cytoscape. According to some workers' analyses, important hub genes can be screened out by algorithmic analysis of the weighted reconnection of the network structure and nodes. Using 12 algorithms, hub genes can be found more accurately: if all 12 algorithms consider a gene important, this indicates that the gene is well worth researching. But the most critical genes identified by these twelve algorithms are sometimes not stable.

Most of the recent books on longitudinal data analysis that I have come across mention the issue of unbalanced data but do not actually present a solution for it. Take, for example:

- Hoffman, L. (2015). *Longitudinal analysis: modeling within-person fluctuation and change* (1st edition). New York, NY: Routledge.
- Liu, X. (2015). *Methods and applications of longitudinal data analysis*. Elsevier.

Unbalanced measurements in longitudinal data occur when participants of a study are **not** measured at exactly the same points in time. We gathered big, complex, and unbalanced data. The data come from arousal levels measured every minute (automatically) for a group of students while engaging in learning activities. Students were asked to report what they felt during the activities. Considering that not all students participated in similar activities at the same time, and not all of them were active in reporting their feelings, we ended up with unstructured and uncontrolled data that do not reflect systematic, regular longitudinal data. Add to this the complexity of the arousal level itself: most longitudinal data analyses assume linearity (the outcome variable changes positively/negatively with the predictors). Clearly that does not apply to our case, since the arousal level fluctuates over time.

**My questions:**

Can you please point me to a useful resource (e.g., book, article, forum of experts) for analysing unbalanced panel data?

Do you yourself have any idea how one can handle unbalanced data analysis?

Hey Colleagues,

I hope you are healthy and safe during this quarantine.

For any machine-learning model, we evaluate performance based on several criteria, and the loss is among them. We all know that an ML model:

1- Underfits when the training loss is much larger than the testing loss.

2- Overfits when the training loss is much smaller than the testing loss.

3- Performs well when the training loss and the testing loss are very close.

**My question is directed at the third point. I am running a DL model (a 1D CNN), and I have the following results (note that my initial loss was 2.5):**

**- Training loss = 0.55**

**- Testing Loss = 0.65**

Nevertheless, I am not quite sure whether the results are acceptable, since the training loss is a bit high (0.55). I tried to lower the training loss by giving the model more capacity (increasing the number of CNN and MLP layers); however, this is a very tricky process, as whenever I increase the complexity of the architecture the testing loss increases and the model easily overfits.

**Finally, to say that our model performed very well, should we get a low training loss (say, less than 0.1), or is my case still considered good?**

I look forward to hearing from you,

Thanks and regards,

I am fairly certain that convex optimization algorithms are faster/more efficient than non-convex ones.

But so far I have been unable to find a source for this claim. Can you help me find a trustworthy source?

Is there any such algorithm? Many citations still go unrecognized and at best are entered manually by the respective author. Why doesn't Google Scholar use a universal, simple method to update its citations? I am aware of the very large size of the Google Scholar database; I may be ignorant!

Why is it better to implement the same algorithm without loops?

Example:

If I have an array of values, I want to substitute them into a specific equation.

I can do that in 2 ways:

1) Loop over each value of the array.

2) Write out all the array values manually (if the array is small), one under another, then write the mentioned equation and run the program for the calculation.

So why is it better to avoid 1) and follow 2)?

Does the loop have a bad effect on controllers? Does it cause additional lag compared with the manual technique?

I kept thinking about this; I noticed that there is always a difference in performance, but I don't know why. 2) is always better than 1)!
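A small illustration of the usual explanation: in interpreted environments (MATLAB scripts, Python, R), an explicit loop pays interpreter overhead on every element, while a single vectorized expression evaluates the same equation in compiled code. A Python/NumPy sketch, with an arbitrary example equation:

```python
import time
import numpy as np

values = np.arange(200_000, dtype=np.float64)

# 1) explicit interpreted loop: per-element interpreter overhead
t0 = time.perf_counter()
loop_result = np.empty_like(values)
for i, v in enumerate(values):
    loop_result[i] = 3 * v * v + 2 * v + 1   # the "specific equation"
loop_time = time.perf_counter() - t0

# 2) vectorized: one array expression, evaluated in compiled code
t0 = time.perf_counter()
vec_result = 3 * values**2 + 2 * values + 1
vec_time = time.perf_counter() - t0

print(f"loop {loop_time:.3f}s vs vectorized {vec_time:.3f}s")
```

In compiled languages (C and the like) there is usually no such gap; the difference comes from the interpreter, not from loops as such, so a loop on a controller running compiled code is not inherently laggy.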

Dissipation is the reflection of irreversibility in the processes of nature. How does it reflect itself in various laws and principles?

I am a materials-science undergrad, interested in the algorithms for numerical integration of the equations of motion in computational materials science, such as molecular dynamics. It is said that time-reversal symmetry is essential for such simulations, while classic integration schemes like the trapezoidal, Simpson's, or Weddle's methods handle the previous and next time steps differently; so the Verlet algorithm is used instead.

Position Verlet indeed combines the previous and next time steps and maintains time-reversal symmetry, but velocity Verlet does not appear to. Why is time-reversal symmetry not important for velocity? Is it because time-reversal symmetry is meaningful only for position and its even derivatives, as in Newton's law of motion?

My knowledge of numerical analysis is only at an introductory level, and I have not yet deeply studied Lagrangians, chaos theory, group theory, or hyper-dimensional geometry.
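For what it is worth, velocity Verlet can be checked for reversibility numerically: integrating forward, negating the velocity, and integrating back recovers the initial state up to round-off. A minimal Python sketch for a 1-D harmonic oscillator (m = k = 1, chosen only for illustration):

```python
# Velocity Verlet for a 1-D harmonic oscillator (m = k = 1, so a = -x).
def velocity_verlet(x, v, dt, steps, accel=lambda x: -x):
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt      # position update
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt          # velocity update (averaged a)
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = velocity_verlet(x0, v0, dt=0.01, steps=1000)
xb, vb = velocity_verlet(x1, -v1, dt=0.01, steps=1000)  # reverse the motion

print(abs(xb - x0), abs(vb + v0))  # both ~0: the trajectory is retraced
```

Substituting the reversed step into the update formulas shows the cancellation is exact in exact arithmetic, which suggests the scheme is time-reversible in position and velocity alike.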

Hi,

We developed a subpixel image-registration algorithm for finding sub-pixel displacements, and I want to test it against existing methods. I have compared it with the subpixel image-registration algorithm by Guizar et al. and with the algorithm developed by Foroosh et al. Does anyone know any other accurate algorithm for subpixel image registration (preferably with open-source code)?

Thank you.

Dear all,

I am interested in the idea of using AI algorithms in analysing civil structures. Could I connect with any researchers with the same interest? I appreciate all advice!

Regards,

Hoan Nguyen

We have some research works related to algorithm design and analysis. Most computer-science journals focus on current trends such as machine learning, AI, robotics, blockchain technology, etc. Please suggest some journals that publish articles on core algorithmic research.

I'm trying to find an efficient algorithm to determine the linear separability of a subset X of {0, 1}^n and its complement {0, 1}^n \ X, ideally one that is also easy to implement. If you can also give some tips on how to implement the algorithm(s) you mention, that would be great.
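One simple approach is the perceptron algorithm: it converges if and only if the two sets are linearly separable, so a run capped at a fixed number of epochs can serve as a (heuristic) separability test; an exact alternative is to solve a linear-programming feasibility problem. A Python sketch, with the iteration cap being my assumption:

```python
from itertools import product

# Heuristic separability check via the perceptron algorithm. If an epoch
# passes with no updates, a separating hyperplane has been found; if the
# cap is reached, the sets are (very likely) not linearly separable.
def is_linearly_separable(X, n, max_epochs=1000):
    points = [(p, 1 if p in X else -1) for p in product((0, 1), repeat=n)]
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        updated = False
        for p, label in points:
            if label * (sum(wi * pi for wi, pi in zip(w, p)) + b) <= 0:
                w = [wi + label * pi for wi, pi in zip(w, p)]
                b += label
                updated = True
        if not updated:
            return True
    return False

# An AND-like set is separable; an XOR-like set is not.
print(is_linearly_separable({(1, 1)}, 2))          # True
print(is_linearly_separable({(0, 1), (1, 0)}, 2))  # False
```

Note the asymmetry: a "True" answer is a certificate (the hyperplane was found), while "False" is only as reliable as the epoch cap; the LP formulation removes that caveat at the cost of a solver dependency.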

According to Shannon's classical information theory, H(X) ≥ H(f(X)) for an entropy H(X) over some random variable X and an unknown deterministic function f. Is it possible that an observer who doesn't know the function (which produces statistically random data) takes the output of the function and considers it random (entropy)? Additionally, if one were to use entropy as a measure between two states, what would that "measure" be between the statistically random output and the original pool of randomness?

Here I am wondering how to analyze EEG signals in order to detect lies. I hope to study building a machine-learning model on pre-processed EEG signals for lie detection. Please help me with how to pre-process the EEG signal and which algorithm to use.

Is there any polynomial (reasonably efficient) reduction that makes it possible to solve the LCS problem for inputs over an arbitrary alphabet by solving LCS for bit-strings?

Even though the general DP algorithm for LCS does not care about the underlying alphabet, there are some properties that are easy to prove for the binary case, and a reduction as asked above could help generalize those properties to an arbitrary alphabet.
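For reference, the standard alphabet-agnostic DP mentioned above, as a short Python sketch:

```python
# Standard O(len(a) * len(b)) dynamic program for the LCS length.
# It treats symbols opaquely, which is why it works over any alphabet.
def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4 (e.g. "BCBA")
```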

Is there a standard formula for the fitness evaluation in the cuckoo search algorithm? Or can any formula be used to evaluate the fitness?

Hope you can help me.

Dear scientists,

Hi. I am working on some **dynamic network flow** problems with **flow-dependent transit times** in **system-optimal** flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to find out how well existing algorithms handle actual network flow problems. To this end, I am in search of **realistic benchmark problems**. Could you please guide me to such benchmark problems? Thank you very much in advance.

Consider m data points in an n-dimensional data space, and suppose we want to fit a minimal hypersphere that covers all these data points.

Finding the center and radius of such a minimal hypersphere is a nontrivial optimization task. I am interested in efficient algorithms, whether exact, approximation, or randomized.
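On the approximation side, one very simple scheme is the Bădoiu-Clarkson iteration: start at any data point and repeatedly move a 1/(k+1) fraction of the way toward the current farthest point; the center drifts toward the optimum as k grows. A Python sketch (the iteration count is an arbitrary choice):

```python
import math

# Badoiu-Clarkson style approximation for the minimum enclosing ball:
# repeatedly step a shrinking fraction of the way toward the farthest point.
def approx_min_ball(points, iters=1000):
    c = list(points[0])
    for k in range(1, iters + 1):
        far = max(points, key=lambda p: sum((pi - ci) ** 2 for pi, ci in zip(p, c)))
        c = [ci + (pi - ci) / (k + 1) for pi, ci in zip(far, c)]
    r = max(math.dist(p, c) for p in points)
    return c, r

# Four corners of the unit square: optimal center (0.5, 0.5), radius sqrt(2)/2.
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
c, r = approx_min_ball(points)
print(c, r)  # c near (0.5, 0.5), r near 0.707
```

For exact solutions, the problem can also be posed as a convex (LP-type) optimization, which dedicated solvers handle efficiently even in high dimensions.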

Vector autoregressive (VAR) models are widely used to study the response to shocks in independent variables. But what if you want to study more than one shock, each with a different effect? If you think the effects are temporary and want to estimate the duration of each shock (that is, how long it takes for the response variable to return to its normal state after each shock), and you also want to see whether the effect is instant or lagged, which models could be used?

Could you suggest models other than VARs that fit this case well?

Thanks in advance

I have read some scientific papers, and most of them use data-dependency tests to analyse their code for parallel-optimization purposes. One such dependency test is Banerjee's test. Are there other tests that give better results for testing data dependencies? Also, is it hard to test control-dependent code, and if we can, what are some of the techniques we can use?

Thank you

In multi-objective optimization, how do I calculate the average inverted generational distance (IGD) value of each generation over **N** independent runs? I need to get something similar to the attached picture.

I have tried CHOPCHOP and E-CRISPR-TEST, but they gave me different results in terms of genome off-targets for the same sequence! Which online tool is the most trustworthy?

I have two data sets (sample size 300). One contains different characteristics of a certain group of people, such as education, skills, age, income, and so forth. The other contains different job requirements, such as education, skills, job-related problems, and so on. I have classified these two data sets separately into different groups using k-means cluster analysis in SPSS. I want to match groups across the two data sets to find which group of people is eligible for which types of jobs. Would you please recommend a suitable matching algorithm for this analysis?

I am not an expert in programming languages; I am looking for software like SPSS or Excel to do this.

Thanks.

The brute-force algorithm takes O(n^2) time; is there a faster exact algorithm?

Can you direct me to recent research on this subject, or on the approximate farthest point (AFP) problem?

I'm implementing this algorithm using Excel to build my grid, with VBA programming. I am solving the problem in terms of mass flow (air) and junction (node) pressures. I am not able to get convergence, since the pressures in the nodes always retain a residual deltaP.

Dear experts,

Hi. I would appreciate any information (ideas, models, algorithms, references, etc.) you can provide to handle the following special problem, or the more general problem mentioned in the title.

Consider a directed network G including a source s, a terminal t, and two paths (from s to t) with a common link e^c. Each link has a capacity c_e and a transit time t_e. This transit time depends on the amount of flow f_e (inflow, load, or outflow) traversing e; that is, t_e = g_e(f_e), where the function g_e determines the relation between t_e and f_e. Moreover, g_e is a positive, non-decreasing function. Hence, the greater the amount of flow on a link, the longer the transit time for this flow (and thus the lower the speed of the flow). Notice that, since links may have different capacities, they may have dissimilar functions g_e.

The question is:

How could we send D units of flow from s to t through these paths in the quickest time?

Notice: a few relevant works on dynamic networks with flow-dependent transit times have been done [D. K. Merchant, et al.; M. Carey; J. E. Aronson; H. S. Mahmassani, et al.; W. B. Powell, et al.; B. Ran, et al.; E. Köhler, et al.]. Among them, the works by E. Köhler et al. are the most appealing (at least for me), as they introduce models and algorithms based on network flow theory. Although they have presented good models and algorithms ((2+epsilon)-approximation algorithms) for the associated problems, I am looking for better results.

for Photovoltaic based water pumping system

I am searching for an implementation of an algorithm that constructs three edge-independent trees from a 3-edge-connected graph. Any response will be appreciated. Thanks in advance.

What does the difference between the real time and the inertial time of a numerical computation of physical phenomena mean?

I have a data set that contains in-game player actions and interactions (stored as metrics). I want to create a recognition system that contains a set of predefined rules and takes the collected metrics as inputs. As output, the system will determine the player type. How can I categorize players based on the collected data and the predefined rules? What are the possible algorithms/approaches?

PS: The data set contains only 23 entries (not enough to train the recognition system).
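With only 23 entries, a hand-written rule set over the metrics is arguably more defensible than a trained model. A minimal Python sketch of a rule-based classifier in the spirit of Bartle-style player types; all metric names and thresholds below are hypothetical:

```python
# Rule-based player typing: explicit, auditable rules over collected
# metrics. Metric names and thresholds are invented for illustration.
def classify_player(m):
    if m["kills"] > 50 and m["chat_messages"] < 5:
        return "killer"
    if m["areas_explored"] > 20:
        return "explorer"
    if m["chat_messages"] > 30:
        return "socializer"
    return "achiever"

players = [
    {"kills": 80, "chat_messages": 2, "areas_explored": 5},
    {"kills": 3, "chat_messages": 40, "areas_explored": 8},
]
print([classify_player(p) for p in players])  # -> ['killer', 'socializer']
```

Rule order encodes priority, so it is worth writing the most specific rules first; once more data accumulates, the same labels can later bootstrap a learned classifier.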

Many thanks.

I have gone through definitions of the term "asymptotic", and in one place I found the following:

A "line" that continually approaches a given curve but does not meet it at any finite distance.

I want to ask whether the above definition is OK. If yes, then I have a question: in the representation of big-O notation, the two functions f(n) and g(n) are represented by curves and NOT by lines. So how can we use the above definition to understand the asymptotic notations?