Science topic

Graph Algorithms - Science topic

Explore the latest questions and answers in Graph Algorithms, and find Graph Algorithms experts.
Questions related to Graph Algorithms
  • asked a question related to Graph Algorithms
Question
2 answers
Hi, RG community! I am new to network analysis and I am currently facing a challenge with coding, processing, and quantifying networks in a hierarchical scheme. In this scheme, nodes pertain to differing hierarchical ranks, and ranks denote inclusion relationships. So, for example, if node “A” includes node “Z”, it is said that “A” is “Z”’s parent and “Z” is “A”’s daughter. However, a rather uncommon feature is that nodes at different ranks of the hierarchy can relate in a non-inclusive fashion. For example, node “A”, parent of “Z”, may have a directional link to “Y”, which is “B”’s daughter (if “A” were directionally linked to “B”, then it could be said that “A” is “Y”’s aunt). Here is a more concrete example to illustrate the plausibility of this scheme: “A” is a website in which person “Z” is signed up (inclusiveness; specifically, parentship); website “A” can advertise banners of website “B” (siblingship) or recommend following a link to person “Y”’s profile on website “B” (auntship).
OK. So, in the image below (top left panel) I present a graphical depiction of this rationale. For simplicity, a two-rank hierarchy is used, where gray and red denote the higher and lower ranks, respectively. The image displays siblingship, parentship, and auntship links. My first approach to coding this network scheme was to denote inclusiveness as one-directional relationships (green numbers) and simple links as symmetrical (two-way; brown numbers) relationships (see the table in the right panel). However, I soon realized that this does not yield what I expected in the network metrics. For example, I am mainly interested in quantifying cohesiveness, and the way I coded the network in the top left panel amounts to something like the non-hierarchical network depicted in the bottom left panel. In short, I am not interested in the directionality of the links but in actual inclusiveness. To my mind, the network in the top panel is more cohesive than that in the bottom panel, but my coding approach does not allow me to distinguish between them formally.
The solution I conceived to solve this problem was to stipulate that a relationship between any pair of nodes implies a relationship of each with all of the other's descendants. This indeed yields, for example, the top network being more cohesive than the bottom one, which is in line with my goals. However, this solution is not nearly as elegant as I would have hoped. Can anyone tell me if there is a better solution? Maybe another way to code the network, or an R package allowing for qualitatively distinct relationships (not just one-way or two-way). Thank you.
Relevant answer
Answer
Instead of listing A/B on the same level (in the matrix) as Y etc., what you describe seems (to me) to be a graph G = [X, Y, Z, W] with two subgraphs A, B.
A simple representation is then a 4x4 matrix plus a data structure to test whether a node is in a subgraph or not (dictionary/set).
If you want to model edges between A/B, you can define a new 2x2 matrix describing those.
Note that for large datasets adjacency matrices scale poorly, so an adjacency list or sparse matrices can be useful.
There is a fast data structure for this kind of problem (if A/B are disjoint): a disjoint-set (union-find) structure.
If you find you need multiple edges between the same pair of nodes, consider hypergraphs or multigraphs.
Hope this helps,
Ben
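A minimal sketch of this representation in Python with networkx (my choice of library; the question mentions R, where igraph offers the same primitives). It also applies the asker's "descendant closure" idea, using the node names A, B, Z, Y from the question:

```python
import networkx as nx

# Two-rank hierarchy: parents A, B; children Z, Y (names from the question).
G = nx.Graph()
G.add_edges_from([("A", "B"),   # siblingship between parents
                  ("A", "Y")])  # auntship: parent A linked to B's daughter Y
parent = {"Z": "A", "Y": "B"}   # membership: child -> parent (inclusiveness)

# Descendant closure: a link between two nodes also links each of them
# to the other's descendants.
closure = G.copy()
for u, v in G.edges():
    for child, p in parent.items():
        if p == v:
            closure.add_edge(u, child)
        if p == u:
            closure.add_edge(child, v)

# Cohesiveness metrics (e.g. density) can now be computed on `closure`.
print(sorted(closure.edges()))
```

This keeps the membership information out of the adjacency structure, as suggested above, and only materialises the extra edges when a metric needs them.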
  • asked a question related to Graph Algorithms
Question
1 answer
Fragmentation trees are a critical step towards elucidation of compounds from mass spectra by enabling high confidence de novo identification of molecular formulas of unknown compounds (doi:10.1186/s13321-016-0116-8). Unfortunately, those algorithms suffer from long computation times making analysis of large datasets intractable. Recently, however, Fertin et al. (doi:10.1016/j.tcs.2020.11.021) highlighted additional properties of fragmentation graphs which could reduce computational times. Since their work is purely theoretical and lacks an implementation, I'm looking to partner up with someone to investigate and implement faster fragmentation tree algorithms. Could end up being a nice paper. Anyone interested?
  • asked a question related to Graph Algorithms
Question
4 answers
I have more than 100 fully connected networks whose edges are undirected and weighted. How can I quantify and classify those networks based on their node and edge attributes? Each network represents one patient, and we hope to classify those networks in an unsupervised manner.
This question concerns graph isomorphism or graph classification, with the aim of binary-classifying healthy and diseased patients. Our main focus is on edge attributes. Any related survey or algorithm integrating edge attributes would be helpful! Thank you for your help.
Relevant answer
Answer
It may be worth calculating the eigenvectors for each network, then plotting them to see how similar they are.
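A hedged sketch of this spectral idea in Python (my choice of the Laplacian spectrum as the eigen-fingerprint; networkx uses the `weight` edge attribute by default, which suits the weighted networks in the question):

```python
import numpy as np
import networkx as nx

def spectral_fingerprint(G, k=10):
    """First k Laplacian eigenvalues (ascending), zero-padded; graphs
    with similar structure tend to have similar spectra."""
    eig = np.sort(nx.laplacian_spectrum(G))
    out = np.zeros(k)
    out[:min(k, len(eig))] = eig[:k]
    return out

G1 = nx.path_graph(6)
G2 = nx.path_graph(6)          # isomorphic copy
G3 = nx.complete_graph(6)

d_same = np.linalg.norm(spectral_fingerprint(G1) - spectral_fingerprint(G2))
d_diff = np.linalg.norm(spectral_fingerprint(G1) - spectral_fingerprint(G3))
assert d_same < d_diff          # identical graphs are spectrally closer
```

The fixed-length fingerprints can then be fed to any unsupervised clustering method (e.g. k-means) to group the patients.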
  • asked a question related to Graph Algorithms
Question
10 answers
Hi,
I have a graph as in the attached figure. I have to extract connected nodes from the graph based on edge weights: if an edge weight is less than a certain threshold, we consider that there is no connectivity between those nodes. I have attached the expected subgraphs. Are there any efficient algorithms available to extract these kinds of node groups? In the attached subgraphs, nodes are grouped whenever the edge weights are above 1.
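The threshold-then-group step described above can be sketched in Python with networkx (library choice and example weights are my assumptions): drop the weak edges, then take connected components, which runs in linear time in the graph size.

```python
import networkx as nx

def components_above_threshold(G, threshold=1.0):
    """Drop edges whose weight is <= threshold, then group the
    remaining connected nodes."""
    H = nx.Graph()
    H.add_nodes_from(G)          # keep isolated nodes visible
    H.add_edges_from((u, v)
                     for u, v, w in G.edges(data="weight", default=0)
                     if w > threshold)
    return list(nx.connected_components(H))

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2.5), ("b", "c", 0.4), ("c", "d", 3.0)])
print(components_above_threshold(G, 1.0))
```

Here the weak edge b-c is discarded, leaving two groups {a, b} and {c, d}.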
Relevant answer
Answer
I posted a presentation about Eulerian paths. Take a look at it.
(PDF) Topology and Structure of Directed Graphs - DNA Sequencing
Another presentation about "Seven Bridges of Königsberg" is in the making.
  • asked a question related to Graph Algorithms
Question
2 answers
In Linked Open Data there are many links between resources: some resources are linked directly and others indirectly through a third resource. Some similarity measures for resources depend on the number of direct and indirect links, and I am asking whether there is any evidence or justification for giving more importance (weight) to direct links than to indirect links when computing similarity.
Relevant answer
Answer
Andre Valdestilhas Thank you for the answer. The properties that you mentioned are not always available.
An example to my question, If I need to know the related or similar movies to the initial movie <http://dbpedia.org/resource/The_Terminator>
There are direct links via the WikiPageWikiLink property to other movies, and other indirect links via an intermediate resource. So maybe giving importance to the existence of such direct links over indirect links is better than depending on both kinds of links equally.
Thanks.
Regards.
  • asked a question related to Graph Algorithms
Question
11 answers
There are a lot of works that search for subgraphs of a given graph. What is an efficient method to check whether a given subgraph is connected?
Relevant answer
Answer
The most common technique is to use DFS or BFS.
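A minimal sketch of the BFS check in Python (the adjacency-list input format is my assumption): start from any node of the subgraph, traverse only edges whose endpoints both lie in the subgraph, and the subgraph is connected iff every node is reached. This runs in O(|V| + |E|).

```python
from collections import deque

def is_connected(nodes, adj):
    """BFS from an arbitrary node of `nodes`; the induced subgraph is
    connected iff every node in `nodes` is reached (edges leaving the
    node set are ignored)."""
    nodes = set(nodes)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

adj = {1: [2, 3], 2: [1], 3: [1], 4: [5], 5: [4]}
assert is_connected({1, 2, 3}, adj)
assert not is_connected({1, 4}, adj)
```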
  • asked a question related to Graph Algorithms
Question
9 answers
Data sets, when structured, can be put in vector form (v(1), ..., v(n)); adding time dependency, this becomes v(i, t) for i = 1...n and t = 1...T.
Then we have a matrix of terms v(i, j)...
Matrices are important, they can represent linear operators in finite dimension. Composing such operators f, g, as fog translates into matrix product FxG with obvious notations.
Now a classical matrix M is a table of lines and columns, containing numbers or variables. Precisely at line i and column j of such table, we store term m(i, j), usually belonging to real number set R, or complex number set C, or more generally to a group G.
What about generalising such a matrix of numbers into a matrix of sets? (In any field of science, this could mean storing all data collected for a particular parameter “m(i, j)” as a set M(i, j) of data.)
What can we observe, say, define on such matrices of sets?
If you are as curious as me, in your own field of science or engineering, please follow the link below, and more importantly, feedback here with comments, thoughts, advice on how to take this further.
Ref:
Relevant answer
Thank you for sharing this Question
  • asked a question related to Graph Algorithms
Question
3 answers
I designed a centrality metric to distinguish the most connected nodes. The model is based on the degree distribution of nodes. The first result, computed without edge weights, shows node I ranked above nodes B, D, E in figure 1, although they have the same number of connections.
The second result, computed with edge weights, shows that node I still ranks higher than nodes B, D, E in figure 2. I had expected that taking the weights into consideration would give more importance to nodes connected to edges of higher weight, but this is not the case.
My observation is that sometimes, when nodes have the same number of connections, the model seems to penalize nodes connected to edges of higher weight and reward nodes connected to edges of lower weight.
I would like to know if there is a natural occurrence of something like this in real world networks or is there something wrong with my model?
Relevant answer
Answer
Very good!
  • asked a question related to Graph Algorithms
Question
1 answer
I am working on limiting the electric assist on a hybrid SAV within its safe limits.
To set the maximum articulation-angle limit at a certain speed, I need to know the dependence between the two (maybe as a graphical plot). Could someone offer some assistance on this?
Thank you in advance.
Relevant answer
Answer
Good question, follow
  • asked a question related to Graph Algorithms
Question
9 answers
I am facing this problem in one way or another during my research (graph labeling > magic labeling). What would be the best logic for a computer program? I have tried this, but it doesn't work properly for large values of m, n, and k.
Relevant answer
Answer
Hi Siddhant Trivedi , I will try to answer your questions as comprehensively as I can. This might be a slightly longer answer, so please excuse me and read through it patiently.
In all the answers discussed here (both what has been suggested and what you have implemented), what we are essentially doing are the following steps.
a) Creating a list of all the possible ways in which a set of n numbers S can be combined in smaller sets of m numbers.
b) Evaluating the sum of all these smaller lists and see if these are equal to k.
c) Creating a separate list which satisfies the sum condition.
The most memory is consumed in storing the first list that is created. For any given set of numbers S of size n, the number of combinations of size m that can be generated from it without replacement, where order doesn't matter, is n!/((n-m)!*m!). (Please review combinations without repetition in permutations and combinations: https://www.mathsisfun.com/combinatorics/combinations-permutations.html)
To illustrate with an example, if S = {1,2,3,4,5} and m = 3, then,
list(combinations(S,m)) = [(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5), (2, 3, 4), (2, 3, 5), (2, 4, 5), (3, 4, 5)], ie, it has 10 components.
By computing it analytically from the formula, it is 5!/((5-3)!*3!) = 5!/(2!*3!) = 120/(2*6) = 10
The same way the number of components in list(combinations(S,m)) can be computed analytically for any given n and m. For instance if n is 1000 and m is 3, then the number of components in the list of combinations that will be generated will equal 1000!/(997!*3!) = 166167000 = 1.6E8
Now, I'll address your specific questions.
(1) Is it possible to develop this algorithm so that it can be run with low memory?
Definitely yes. For that the combinations function should be rewritten and you may not be able to use the inbuilt combinations function of python. When n is large, the problem that you have is that the program proceeds by first creating the list of all the combinations and then evaluates whether the sum of the components is equal to k. Instead what you should be doing is evaluate the sum while the different possible combinations are being created one by one. For that you will have to define an equivalent of the combinations function yourself where the additional condition of the sum is also included while the different combinations are being generated.
(2) If I am writing an algorithm, how do I know how much RAM it will consume?
It is not an exact calculation, since interpreter and compiler optimizations and other things play a factor. As a rough lower bound, treating each integer as 4 bytes, an integer list with 1E8 components occupies approx 4E8 bytes, or 4E5 KB, or 400MB. (In CPython the true figure is several times larger, since each small integer is a ~28-byte object and the list itself stores 8-byte references.)
(3) As you said if the value of n=1000 and m=3 then the system occupying approx 400MB RAM. As far as I have observed, it will also depend on the values of m and k. What happen if we increase m and k to a certain range?
The value of k should not in general have an appreciable difference in the memory usage, though it is possible. The major difference will be caused by the value of m and it can be computed as explained above.
(4) What happens if we take n=1000, m=500 and k=125,250 (the sum of the first 500 natural numbers), or we take the value of k in the range (125250, 375250)?
If n=1000 and m=500, then the number of combinations that are possible will be 1000!/(500!*500!) = 2.7E299. Storing such a huge list is practically not very feasible.
Instead what can be done is that you can create the combinations one by one and simultaneously evaluate the sum and store only the combinations that satisfy the sum condition.
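The generate-and-test-on-the-fly idea from the last paragraph can be sketched as a pruned recursive generator (Python, matching the thread's use of Python's `combinations`; the function name and pruning bounds are mine). It never materialises the full combination list, and the bounds skip whole branches that cannot reach the target sum:

```python
def combos_with_sum(numbers, m, k):
    """Yield the m-element combinations of `numbers` that sum to k,
    one at a time. `numbers` must be sorted ascending for the
    pruning bounds below to be valid."""
    n = len(numbers)

    def rec(start, chosen, total):
        left = m - len(chosen)          # picks still needed
        if left == 0:
            if total == k:
                yield tuple(chosen)
            return
        for i in range(start, n - left + 1):
            x = numbers[i]
            # Prune: even the smallest remaining picks overshoot k ...
            if total + x + sum(numbers[i + 1:i + left]) > k:
                break
            # ... or even the largest remaining picks fall short of k.
            if total + x + sum(numbers[-(left - 1):] if left > 1 else []) < k:
                continue
            chosen.append(x)
            yield from rec(i + 1, chosen, total + x)
            chosen.pop()

    yield from rec(0, [], 0)

print(list(combos_with_sum(list(range(1, 6)), 3, 9)))   # [(1, 3, 5), (2, 3, 4)]
```

Because results are yielded lazily, memory stays proportional to m rather than to the number of combinations, which is exactly what the large n=1000, m=500 case needs.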
  • asked a question related to Graph Algorithms
Question
3 answers
The problem that I am trying to solve is to place the vertices of an undirected, unweighted graph onto a straight line such that the arrangement has minimum bandwidth (the length of the longest edge of the arrangement).
Each vertex of the graph has a different (integer) length, which means that the solution to the minimum-bandwidth problem might not be the optimal solution to the problem I am trying to solve.
I was trying to see if I could model this problem as a quadratic assignment problem or a job-scheduling problem, but so far I have not been successful.
Does this problem fall into the category of any standard combinatorial optimization problem? Is there an approximation algorithm for it (the minimum graph bandwidth problem is NP-hard)?
  • asked a question related to Graph Algorithms
Question
8 answers
What are the ways to map a graph from a relational space to a Euclidean space with low time complexity? Although some solutions exist (such as signal-processing and spectral methods), they have high time complexity.
Relevant answer
Answer
Dear Kamal,
maybe node2vec can be useful for your application: https://snap.stanford.edu/node2vec/
Kind regards,
Djordje
  • asked a question related to Graph Algorithms
Question
4 answers
This is about differences in the model design.
It seems the difference is that GraphSAGE samples the data.
But what is the difference in model architecture?
Relevant answer
Answer
The main novelty of GraphSAGE is a neighborhood sampling step (but this is independent of whether these models are used inductively or transductively). You can think of GraphSAGE as GCN with subsampled neighbors.
In practice, both can be used inductively and transductively.
The title of the GraphSAGE paper ("Inductive representation learning") is unfortunately a bit misleading in that regard. The main benefit of the sampling step of GraphSAGE is scalability (but at the cost of higher variance gradients).
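The sampling-plus-mean-aggregation step described above can be sketched as a toy numpy layer (this is an illustrative stand-in, not either paper's implementation; the weights are random placeholders for trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph as adjacency lists: 4 nodes with 3-dimensional features.
neigh = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
X = rng.normal(size=(4, 3))

# Random stand-ins for trained weight matrices (self and neighbor).
W_self = rng.normal(size=(3, 4))
W_neigh = rng.normal(size=(3, 4))

def sage_layer(X, sample_size=2):
    """One GraphSAGE-style layer: sample up to `sample_size` neighbors
    per node, mean-aggregate their features, combine with the node's
    own features, apply ReLU."""
    H = np.empty((X.shape[0], 4))
    for v, nbrs in neigh.items():
        sampled = rng.choice(nbrs, size=min(sample_size, len(nbrs)),
                             replace=False)
        agg = X[sampled].mean(axis=0)                        # mean aggregator
        H[v] = np.maximum(X[v] @ W_self + agg @ W_neigh, 0)  # ReLU
    return H

H = sage_layer(X)
print(H.shape)
```

A GCN layer would instead aggregate over the full (normalized) neighborhood with a single shared weight matrix; the sampling step is what bounds the per-node cost and makes GraphSAGE scale, at the price of noisier gradients as noted above.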
  • asked a question related to Graph Algorithms
Question
2 answers
I want to know about an algorithm or formula to find an asymptote from coordinates obtained from machine learning. The ML model will always give points that are precise and ever closer to the asymptote, but it won't ever reach the asymptote value. The usual methods are designed for humans, like taking the limit as x tends to infinity, or graphing, but there is no algorithmic way for computers to compute an asymptote. If anybody knows about this, could you give me a direction on this topic?
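One computable approach, assuming the points approach a horizontal asymptote, is to fit a parametric model that has the asymptote as a parameter and read it off. The model form y = a + b/x below is my assumption; any decay model with an explicit limit parameter works the same way:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """Hypothetical model: points approach the horizontal asymptote `a`
    like a + b/x; fitting recovers `a` even though no sample reaches it."""
    return a + b / x

x = np.arange(1.0, 50.0)
y = 3.0 + 5.0 / x          # synthetic points converging to asymptote 3
(a_est, b_est), _ = curve_fit(model, x, y)
print(round(a_est, 3))
```

With noisy ML outputs the fit still estimates the asymptote in a least-squares sense, and the covariance returned by `curve_fit` indicates how trustworthy that estimate is.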
Relevant answer
Answer
The approach creates a hyper cube around test point effectively.
  • asked a question related to Graph Algorithms
Question
3 answers
Actually, I work in computer vision, specifically on a problem known as "scene graph modeling". This problem aims to convert an image I into a graph G=(V,E) where the nodes V represent the objects (and their features) in the scene and the edges E the relationships between objects. An interesting paper on this topic is Graph R-CNN for Scene Graph Generation [1]. (Note that unlike merely detecting the objects in an image, the scene graph aims to capture the contextual information of the image.) A graph is a mathematical structure rich in information, and it would be very interesting to integrate graphs into a machine learning approach. To achieve this, it is necessary to transform a graph into a vector representation. Some works that intend to solve this problem are the following:
- SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS [2]: The problem with this algorithm is that it assumes a fixed number of nodes. After training, this algorithm takes a graph G=(V,E) as input (with N nodes, that is, |V|=N) and outputs a fixed vector representation.
- graph2vec: Learning Distributed Representations of Graphs [3]: This algorithm is flexible because it permits building a vector representation from a graph G without restricting the number of nodes. However, it needs to know the whole graph space. That is, given a graph set G={g1,g2,...,gi,...,gm}, where gi is the i-th graph, this algorithm builds a vectorial representation V={v1,v2,...,vi,...,vm}, where vi is the i-th vector associated with the graph gi. This algorithm was originally proposed for text analysis, where the features in nodes are of low dimension; I do not know if it can work with high-dimension features in nodes.
I would like to know if there is another simple algorithm that allows me to convert any graph into a fixed vector representation.
Relevant answer
Answer
Before using a specific algorithm, I would suggest looking at the various mathematical transform methods, as different methods will provide different accuracy and information on the graph in question. I suggest you have a look at Fourier transformations (https://arxiv.org/pdf/1601.05972.pdf), where you will find various algorithms that will aid your purpose. Best
  • asked a question related to Graph Algorithms
Question
2 answers
Dijkstra's algorithm performs the best sequentially on a single CPU core. Bellman-Ford implementations and variants running on the GPU outperform this sequential Dijkstra case, as well as parallel Delta-Stepping implementations on multicores, by several orders of magnitude for most graphs. However, there exist graphs (such as road-networks) that perform well only when Dijkstra's algorithm is used. Therefore, which implementation and algorithm should be used for generic cases?
Relevant answer
Answer
Maleki et al. achieved improvements over delta-stepping:
Saeed Maleki, Donald Nguyen, Andrew Lenharth, María Garzarán, David Padua, and Keshav Pingali. 2016. DSMR: A Parallel Algorithm for Single-Source Shortest Path Problem. In Proc. 2016 International Conference on Supercomputing (ICS '16). ACM, New York, NY, USA, Article 32. DOI: https://doi.org/10.1145/2925426.2926287
At the end of the Abstract they write:
"Our results show that DSMR is faster than the best previous algorithm, parallel [Delta]-Stepping, by up-to 7.38x".
Page 9, col 1, line -3:
"Machines: Two experimental machines were used for the evaluation: a shared-memory machine with 40 cores (4 10-core Intel(R) Xeon^TM E7-4860) and 128GB of memory; the distributed[-]memory machine Mira, a supercomputer at Argonne National Lab. Mira has 49152 nodes and each node has 16 cores (PowerPC A2) with 16GB of memory."
Best wishes,
Frank
  • asked a question related to Graph Algorithms
Question
2 answers
Quantum computers are known to perform extremely well on a limited number of problems at this time. For real applications, such as path planning and search in graph analytics, are quantum computers expected to perform well? What are the advantages and disadvantages of using such machines for such applications?
Relevant answer
Answer
1- If your problem can be modeled and solved through Grover's algorithm, then it is a good fit for a quantum computer, as in the breaking of cryptography.
2- The simulation of chemical structures and the discovery of new organic compounds will be a major application of quantum computers.
3- Teleportation through quantum entanglement will be another application of QCs.
  • asked a question related to Graph Algorithms
Question
4 answers
Dijkstra's algorithm performs well sequentially. However, applications require even better parallel performance because of real-time constraints. Implementations such as SprayList and Relaxed Queues allow parallelism in the priority-queue operations of Dijkstra's algorithm, with various performance-vs-accuracy tradeoffs. Which of these algorithms is the best in terms of raw parallel performance?
  • asked a question related to Graph Algorithms
Question
8 answers
I am working on the construction of Barnette graphs of a given diameter. I would like to know why many cubic, 3-connected, planar (non-bipartite) graphs can be either Hamiltonian or non-Hamiltonian. I found a unique property of the Hamiltonian ones. I need the latest results related to my question.
Relevant answer
Answer
If any one property of a Barnette graph is dropped, it is non-Hamiltonian.
  • asked a question related to Graph Algorithms
Question
5 answers
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of realistic benchmark problems. Could you please guide me to access such benchmark problems?
Thank you very much in advance.
Relevant answer
Answer
Yes, I see. Transaction processing has also a constraint on response time. Optimization then takes more of its canonical form: Your goal "as fast as possible" (this refers to network traversal or RTT) becomes the objective, subject to constraints on benchmark performance which are typically transaction response time and an acceptable range of resource utilization, including link utilization. Actual benchmarks known to me that accomplish such optimization are company-proprietary (I have developed some but under non-disclosure contract). I do not know of very similar standard benchmarks but do have a look at TPC to see how close or how well a TPC standard benchmark would fit your application. I look forward to seeing other respondents who might know actual public-domain sample algorithms.
  • asked a question related to Graph Algorithms
Question
3 answers
Current parallel BFS algorithms are known to have reduced time complexity. However, such cases do not take into account synchronization costs which increase exponentially with the core count. Such synchronization costs stem from communication costs due to data movement between cores, and coherence traffic if using a cache coherent multicore. What is the best parallel BFS algorithm available in this case?
Relevant answer
Answer
Level-Synchronous Parallel Breadth-First Search Algorithms
  • asked a question related to Graph Algorithms
Question
4 answers
Graph algorithms such as BFS and SSSP (Bellman-Ford or Dijkstra's algorithm) generally exhibit a lack of locality. A vertex at the start of the graph may want to update an edge that exists in a farther part of the graph. This is a problem in graphs whose memory requirements far exceed those available in the machine's DRAM. How must the graph be streamed into the machine in this case? What are the consequences for a parallel multicore in such cases where access latency and core utilization are of utmost importance?
Relevant answer
Answer
You could combine clusters of vertices into super-vertices, find a route through them, and delete all vertices of the original graph that are not contained in a visited super-vertex. Then you proceed with a smaller graph and smaller super-vertices.
Regards,
Joachim
  • asked a question related to Graph Algorithms
Question
4 answers
Since 2013, the interest in multilayer networks has been growing quickly. Some studies proposed new algorithms to calculate modularity and detect modules in multilayer networks of different kinds, considering their original structure (intralayer and interlayer links). However, do you know if some of those algorithms have been implemented in R packages? Or stand-alone software? Thank you!
Relevant answer
Answer
NetworkX is also a good tool
  • asked a question related to Graph Algorithms
Question
8 answers
Given a graph, I need to find a vertex (or set of vertices) that needs to be removed from this graph in order to reduce its chromatic number.
Relevant answer
Answer
Finding "critical" nodes or edges is hard for both NP and co-NP. So any exact algorithm for your problem is going to take exponential time in the worst case. But there might exist an algorithm that works reasonably well in practice... depending on the structure of your instances.
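For small instances, the exact (exponential-time) check is still feasible; a brute-force sketch in Python with networkx (library and function names are my choices, consistent with the hardness noted above):

```python
import itertools
import networkx as nx

def chromatic_number(G):
    """Exact chromatic number by brute force over colourings --
    exponential, so only usable on small instances."""
    nodes = list(G)
    for k in range(1, len(nodes) + 1):
        for colours in itertools.product(range(k), repeat=len(nodes)):
            c = dict(zip(nodes, colours))
            if all(c[u] != c[v] for u, v in G.edges()):
                return k
    return 0

def critical_vertices(G):
    """Vertices whose removal lowers the chromatic number."""
    chi = chromatic_number(G)
    return [v for v in G
            if chromatic_number(nx.restricted_view(G, [v], [])) < chi]

G = nx.cycle_graph(5)                    # odd cycle: chi = 3
assert chromatic_number(G) == 3
assert critical_vertices(G) == list(G)   # any removal leaves a path: chi = 2
```

For realistic instance sizes this brute force must be replaced by a SAT or ILP formulation of k-colourability, but the removal-testing loop stays the same.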
  • asked a question related to Graph Algorithms
Question
5 answers
For example, in a large program, there are two variables, m and n. I want to know if the two variables will affect each other. Is there a classic study in this area?
Thanks!
Relevant answer
Answer
Are the values of the two program "variables" derived from input? In that case the question of their interdependence is a statistical matter involving the nature of the variables as measures of some kind. The previous suggestions are applicable.
If you are talking about dependencies of two variables in a computer program -- that is, dependencies in the computation itself, that's a different kind of thing. If there are inputs that add variation, you are back to a version of the first case. If not, I suppose you need to find something that develops dependency graphs for the programming language used.
There has been such analysis available in spreadsheet software, for example.
There are also some classic approaches from analysis of programming language grammars and other problems. These are not statistical in nature. This involves creation of dependency matrices around immediate dependencies and then derivation of the transitive closure. That works even for mutual dependency. This creates either a sort of Boolean dependency (yes or no) or a shortest-path dependency, and there are some "classic" solutions. I think the greatest effort may be deriving the data for such analysis if the dependency is not obvious from inspection.
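The dependency-matrix-plus-transitive-closure step described above can be sketched with Warshall's algorithm (variable names m, n, p are hypothetical stand-ins for program variables):

```python
def transitive_closure(dep):
    """Warshall's algorithm on a Boolean dependency matrix:
    closure[i][j] is True iff variable i depends on j, directly
    or transitively. Runs in O(n^3)."""
    n = len(dep)
    closure = [row[:] for row in dep]
    for k in range(n):
        for i in range(n):
            if closure[i][k]:
                for j in range(n):
                    if closure[k][j]:
                        closure[i][j] = True
    return closure

# Immediate dependencies only: m -> n -> p
dep = [[False, True, False],
       [False, False, True],
       [False, False, False]]
closure = transitive_closure(dep)
assert closure[0][2]        # m depends on p transitively
assert not closure[2][0]    # p does not depend on m
```

Mutual dependency simply shows up as closure[i][j] and closure[j][i] both being True, as the answer notes.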
  • asked a question related to Graph Algorithms
Question
2 answers
As you can see in the pics below, I have a network. I have just labeled the edges, but I want to associate them with values.
Then I want to run this:
Every edge has its own value (like this: /1st node to 2nd node/ edge = 90% or 0.90).
This value means a probability, so I want to find the best path from KH to VC. The highest-probability path is the best path, but maybe not the shortest, because of this:
for example:
START-A-B-C-FINISH = 0.70*0.90*0.90*0.80 = 0.45
START-A-E-FINISH = 0.70*0.40*0.95 = 0.27
So the first way is better than the second.
Any idea what program I should use or how to make this algorithm?
Relevant answer
Answer
Does negative probability exist at all? ;)
Gephi is dedicated to social network analysis and as far as I know, there are no plugins supporting this kind of computation.
We should find all paths from start to finish, calculate the product for each, and choose the maximum one. It is quite simple in terms of describing the algorithm but quite complex in terms of computation, especially for extensive graphs.
But as I mentioned, Gephi here is useless unless you write your own plugin. If you have only one case to solve, it would be easier to use e.g. Python.
But the answer to your very first question is that you can assign values to edges in Gephi, because Gephi also supports weighted graphs. Look at the "Data laboratory" tab, switch to edges, and you will see the weight column, which can store float values. This is exactly what you are looking for.
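Enumerating all paths is not actually necessary: maximising a product of probabilities equals minimising the sum of -log(p), so ordinary Dijkstra applies (all probabilities lie in (0, 1], so the -log weights are non-negative). A sketch in Python with networkx (my library choice), using the edge values from the question's example:

```python
import math
import networkx as nx

# Edge probabilities from the example paths in the question.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("START", "A", 0.70), ("A", "B", 0.90), ("B", "C", 0.90),
    ("C", "FINISH", 0.80), ("A", "E", 0.40), ("E", "FINISH", 0.95),
])

# Transform: max product of p  <=>  min sum of -log(p).
H = nx.DiGraph()
for u, v, d in G.edges(data=True):
    H.add_edge(u, v, cost=-math.log(d["weight"]))

path = nx.dijkstra_path(H, "START", "FINISH", weight="cost")
print(path)                 # ['START', 'A', 'B', 'C', 'FINISH']
```

This recovers START-A-B-C-FINISH (product 0.45) over START-A-E-FINISH (product 0.27), in polynomial time even for large graphs.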
  • asked a question related to Graph Algorithms
Question
1 answer
Are the different possible orderings of several independent transitions in transition systems the unique cause of the state-explosion problem?
Relevant answer
Answer
I think there is also the case when, for the same input/condition, multiple transitions can fire, so each case should be considered separately.
  • asked a question related to Graph Algorithms
Question
10 answers
Dear experts,
Hi. I appreciate any information (ideas, models, algorithms, references, etc.) you can provide to handle the following special problem or the more general problem mentioned in the title.
Consider a directed network G including a source s, a terminal t, and two paths (from s to t) with a common link e^c. Each link has a capacity c_e and a transit time t_e. This transit time depends on the amount of flow f_e (inflow, load, or outflow) traversing e, that is, t_e = g_e(f_e), where the function g_e determines the relation between t_e and f_e. Moreover, g_e is a positive, non-decreasing function. Hence, the greater the flow on a link, the longer the transit time for this flow (and thus the lower its speed). Notice that, since links may have different capacities, they may have dissimilar functions g_e.
The question is that:
How could we send D units of flow from s to t through these paths in the quickest time?
Notice: A few works have been done on dynamic networks with flow-dependent transit times [D. K. Merchant, et al.; M. Carey; J. E. Aronson; H. S. Mahmassani, et al.; W. B. Powell, et al.; B. Ran, et al.; E. Köhler, et al.]. Among them, the works by E. Köhler, et al. are the most appealing (at least for me), as they introduce models and algorithms based on network flow theory. Although they have presented good models and algorithms ((2+epsilon)-approximation algorithms) for the associated problems, I am looking for better results.
  • asked a question related to Graph Algorithms
Question
2 answers
I have a 2k-regular (multi)graph. I know that a 2-factorization exists. The question is not the existence but the complexity: can I find such a 2-factorization in polynomial time?
Relevant answer
Answer
If there are no restrictions on the number of cycles in the 2-factor, then for simple graphs you can draw an analogy with the assignment problem, which is polynomially solvable.
  • asked a question related to Graph Algorithms
Question
3 answers
Given a tree or a graph, are there automatic techniques or models that can assign weights to its nodes, other than neural networks?
Relevant answer
Answer
In the case of Euclidean graphs you can use the Euclidean distance between nodes. You can also use random weights. Depending on the application you can use appropriate weights...
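A sketch of the Euclidean-distance suggestion in Python with networkx (library choice and coordinates are my assumptions; here the distances are attached to edges, from which node weights can be derived, e.g. by summing incident edges):

```python
import math
import networkx as nx

# Hypothetical node coordinates; edge weights become Euclidean distances.
pos = {"a": (0, 0), "b": (3, 4), "c": (6, 0)}
G = nx.Graph([("a", "b"), ("b", "c")])
for u, v in G.edges():
    (x1, y1), (x2, y2) = pos[u], pos[v]
    G[u][v]["weight"] = math.hypot(x2 - x1, y2 - y1)

# A derived node weight: total length of incident edges.
node_weight = {v: sum(d["weight"] for _, _, d in G.edges(v, data=True))
               for v in G}
print(node_weight["b"])
```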
  • asked a question related to Graph Algorithms
Question
3 answers
I am interested in calculating the lowest common ancestors of several nodes in a directed graph. I tested the findlcas method of the NaiveLcaFinder class from the jGrapht project (https://github.com/jgrapht/jgrapht/blob/master/jgrapht-core/src/main/java/org/jgrapht/alg/NaiveLcaFinder.java), but I found it very difficult to apply to several nodes. Could you please help me on this point or give me some suggestions?
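Independently of jGrapht, one way to generalise the two-node LCA to several nodes in a DAG is to intersect ancestor sets and keep the "lowest" elements of the intersection. A sketch in Python with networkx (library choice and generalisation are my assumptions, not the jGrapht approach):

```python
import networkx as nx

def common_ancestors_lca(G, nodes):
    """Lowest common ancestors of several nodes in a DAG: intersect the
    ancestor sets (each node counts as its own ancestor), then keep the
    common ancestors that have no descendant among the others."""
    common = set.intersection(*({n} | nx.ancestors(G, n) for n in nodes))
    return {a for a in common
            if not any(b in nx.descendants(G, a) for b in common)}

# Hypothetical DAG: edges point from parent to child.
G = nx.DiGraph([("r", "x"), ("r", "y"), ("x", "u"), ("y", "u"),
                ("u", "v"), ("u", "w")])
assert common_ancestors_lca(G, ["v", "w"]) == {"u"}
assert common_ancestors_lca(G, ["x", "y"]) == {"r"}
```

Note that in a general DAG several nodes can share more than one lowest common ancestor, which is why a set is returned.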
Relevant answer
Answer
This is what I am trying to do; there are several cases to take into account...
Thank you very much for your answer.
  • asked a question related to Graph Algorithms
Question
2 answers
I am searching for an implementation of an algorithm that constructs three edge-independent trees from a 3-edge-connected graph. Any response will be appreciated. Thanks in advance.
Relevant answer
Answer
Dear Imran,
I suggest that you look at the links below:
-EXPLORING ALGORITHMS FOR EFFECTIVE ... - Semantic Scholar
-Graph-Theoretic Concepts in Computer Science: 24th International ...
-Expanders via Random Spanning Trees
Best regards
  • asked a question related to Graph Algorithms
Question
11 answers
Given the undirected (weighted) graph depicted in the attached diagram, which represents relationships between different kinds of drink, how best can I assign weights to the edges? Is there a technique I can use?
Relevant answer
Answer
Use Structural Equation Modeling (SEM).
  • asked a question related to Graph Algorithms
Question
4 answers
I'm currently working on a problem with natural text as input and a structured graph as output. For simplicity, suppose we have a natural text story and we want to produce a graph that shows the relationships between the characters in the story. This is one of the example instances of the problem, but in general, it can be any mapping from natural text to graphs.
Now, I can come up with heuristics and domain-specific rules for building such a graph for a given problem. I could, for example, apply NER and POS-TAG and craft some heuristics to extract patterns where a given entity performs an action (a verb) on another entity. Then, of course, comes all the known and unsolved issues of anaphoras and omitted context and so on, but we can build on top of this "classic" NLP framework.
However, I was wondering if there is an alternative "pure" machine learning approach (perhaps using neural networks) that attempts to automatically create a graph given some natural text. I don't presuppose that such an approach would be "better" in any sense, but I'm intrigued by the possibility. From my (short) previous experience in neural networks, the main issue I see here is how to design a network architecture that can output an arbitrary graph with arbitrary node labels and edge labels. I know of architectures for sequence to sequence modeling, and architectures that can take arbitrary graphs as input, but haven't found anything that outputs a more complex structure like a tree or an arbitrary graph.
Can someone point to a similar research where a neural network has been designed to output an arbitrary graph structure?
Relevant answer
Answer
This is a coarse solution, but maybe take the output as a sequence of the same shape as the input, with added probabilities. For an input of shape N, the output layer O could be of shape [N, N+1], where Oij is the probability that word i is a child of word j in the graph; the extra (N+1)-th column covers the case where a word is not part of the graph (a semantically useless word), and Oii means that i is a root. Then compute the argmax for each word and build the graph. This is a theoretical idea; the problem is binding it to the recurrent neural network. Taking the output graph from the previous layers plus the next word and making a correct graph prediction is a big issue.
  • asked a question related to Graph Algorithms
Question
3 answers
I am looking for existing or proposed compression algorithms/methods for storing handwritten document images.
I also need details about the compression formats of handwritten document images in the following scenarios:
1. Documents containing only text (binary colours).
2. Documents also containing tables (binary colours).
3. Documents also containing images (binary colours).
4. Documents also containing images and tables (multiple colours).
I will be grateful for any suggestions/comments.
Relevant answer
Answer
NOBE is an alternative algorithm for binary compression; however, it is very slow and computationally intensive.
  • asked a question related to Graph Algorithms
Question
5 answers
Hi everyone. I'm currently trying to assess the ability of human volunteers to cluster a set of images into a fixed number of clusters according to perceived visual similarity (the images are self-organized maps of breast tumours' gene expression). To do that, I build an averaged identity matrix (with binarized similarity: 1 means belonging to the same group, 0 means not), which I later transform into a graph and partition. I'm stuck in the part where I contrast human ability against an algorithm. I was wondering if there's an algorithm or software that transforms a set of images into an undirected weighted graph, where the weight of an edge represents the similarity between images (nodes). This would be my best scenario, because I would be able to apply the same partition method and compare. Otherwise, I'd still appreciate suggestions for image clustering according to visual similarity. Thanks in advance.
Relevant answer
Answer
Actually, it depends on the size of the similarity matrix. When you are trying to cluster the phenotypes inside the images, your similarity matrix does not follow the Euclidean distance; normally the distances are geodesic. If the number of clusters is small enough, you can go ahead and use SPECTRAL CLUSTERING with an RBF kernel. Otherwise, consider that in spectral clustering with a large dimension, the exhaustive O(n^2) pairwise comparison between images is computationally intractable; in that case you can test other methods like t-SNE or UMAP. But, considering your explanation, it seems that for your specific application SPECTRAL CLUSTERING works well.
  • asked a question related to Graph Algorithms
Question
11 answers
Which one is the best algorithm for finding the shortest path in a graph?
Relevant answer
Answer
Dear Edward,
Dijkstra's algorithm computes the shortest paths from a given source node in time O(n^2) in the basic implementation, and in time O(m log m) if you use priority queues, where n = |V| and m = |E| are the sizes of the vertex set V and edge set E respectively. For sparse graphs the priority-queue variant is much better (as m < n^2). It uses O(n) extra memory.
On the other hand, Floyd-Warshall computes the shortest path between every pair of nodes in time O(n^3) and uses O(n^2) extra memory.
If you need to compute the shortest path between a given pair of nodes, use Dijkstra's. In practice, Floyd-Warshall is only useful for small graphs, as it uses a lot of memory for the computation.
Best regards, and sorry for the delay in the answer.
rapa
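The priority-queue variant described above can be sketched in a few lines of Python (the weighted graph is a made-up example):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a binary heap: O(m log m)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u was already settled with a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical weighted digraph as an adjacency list.
adj = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
    "d": [],
}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Stale entries are skipped instead of decreasing keys in place, which keeps the heap logic simple at the cost of a few extra pushes.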
  • asked a question related to Graph Algorithms
Question
7 answers
I have some data and I want to plot a graph in R. I have installed the "sm" package, but when I try to draw the graph I receive the following error:
could not find function "vioplot"
Many thanks.
Relevant answer
Answer
Hello Radu,
You need the ggplot2 package; just install it and then try this tutorial for illustration:
  • asked a question related to Graph Algorithms
Question
2 answers
I was wondering what algorithms can be used to find the K minimum-cost paths between any two nodes of a directed graph in which every edge has a cost (e.g. K=1 would be the same as using Dijkstra).
Relevant answer
Answer
  • asked a question related to Graph Algorithms
Question
2 answers
I am trying to get my nomenclature correct.
I personally use backtracking searches that enumerate a finite (though large) set of multi-digraphs to prove that chains of a certain length either exist or do not exist. The multi-digraphs cannot contain certain subgraphs, and they have limits on edges and vertices that make the search set finite.
Relevant answer
Answer
Yes the graphs can have parallel edges that both start and end at the same vertices.
By 'chain' I mean an addition chain. That's just a list of integers with the typical properties of an addition chain. We try to prove an addition chain exists of a specific length and if that fails we try a larger length.
By 'certain subgraphs' I mean that certain portions of the graph can't contain certain things expressible as a subgraph of the original graph. For example you may not have 4 parallel edges but you can have 3 or 2 and of course 1 (which has no parallel edges). There are other excluded subgraphs as well. The subgraphs excluded are directed as well and edge direction must match.
Yes, by "limits" I mean limits on the number of edges that may share the same source vertex. There are more complicated limits too: for example, a source vertex cannot have more than one distinct set of parallel edges emanating from it. So source vertex A could have two or three parallel edges to vertex B, but that would rule out anything other than a single edge from A to vertex C.
  • asked a question related to Graph Algorithms
Question
4 answers
Edges are given in the form Xij, which denotes whether there is an edge between the i-th and j-th vertices. I am solving an integer optimization problem and want to add this constraint to it.
Relevant answer
Answer
Thank you, yes, it will work.
I recently got another solution and wanted to share it:
Constraint 1) ∀i: Σj Xij = K - 1
Constraint 2) ∀i, j, k: Xij + Xjk + Xik ≠ 2
  • asked a question related to Graph Algorithms
Question
11 answers
Nowadays complex network data sets are reaching enormous sizes, so that their analysis challenges algorithms, software and hardware. One possible approach to this is to reduce the amount of data while preserving important information. I'm specifically interested in methods that, given a complex network as input, filter out a significant fraction of the edges while preserving structural properties such as degree distribution, components, clustering coefficients, community structure and centrality. Another term used in this context is "backbone", a subset of important edges that represent the network structure.
There are methods to sparsify/filter/sample edges and preserve a specific property. But are there any methods that aim to preserve a large set of diverse properties?
Relevant answer
Answer
I think it really depends on whether the properties are local or global. In any case, it seems to be controllable at multiple scales of coarseness. We introduced a multilevel sparsification by algebraic distance, which partially addresses this question. Please take a look at the paper "Single and multi-level network sparsification by algebraic distance".
  • asked a question related to Graph Algorithms
Question
5 answers
Hello, I wrote a program that works on a graph containing 36692 nodes. The program should find all shortest paths between each pair of nodes. The graph is undirected, unweighted, and sparse. I cannot use the Floyd-Warshall algorithm because its complexity is O(V^3), so I used Dijkstra and repeated it |V| times, for a complexity of O(VE + V^2 log V). Since the graph is sparse (|E| << |V|^2), Dijkstra is better than Floyd-Warshall, but the running time is still long (more than 4 hours). I used parallel loops to reduce the time, but it was not useful. Is there an algorithm with a lower running time that can handle a big dataset?
Relevant answer
Answer
Since you say that "the graph is undirected and unweighted and sparse", you might simply run a BFS from every node. Not only is the asymptotic running time O(|E||V|), but it is also really simple. So I suppose that perhaps your graph is weighted: in that case there are (asymptotically) faster algorithms than the one you're using, but I am not sure how practical they are.
In any case, if you have access to it, you might want to look up "All Pairs Shortest Paths in Sparse Graphs" in Encyclopedia of Algorithms:
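The BFS-per-node suggestion can be sketched as follows (Python; the adjacency list is a hypothetical example):

```python
from collections import deque

def bfs_dist(adj, source):
    """Unweighted single-source shortest paths in O(|V| + |E|)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def all_pairs(adj):
    """One BFS per node: O(|V||E|) total, which is fast on a sparse graph."""
    return {u: bfs_dist(adj, u) for u in adj}

# Hypothetical undirected graph: the path 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dist = all_pairs(adj)
print(dist[0][3])  # 3
```

For 36692 nodes the per-source loops are independent, so this is also the form that parallelizes trivially across sources.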
  • asked a question related to Graph Algorithms
Question
5 answers
Is there a method to find the root node in a directed switching graph whose topology guarantees a spanning tree at any moment?
Relevant answer
Answer
You could just compute the difference between the set of nodes and the set of children.
I attached a little proof-of-concept script (python3, ts=8, noexpandtab) that is not optimized, but it would be easy to optimize (you can check all the relevant conditions while you walk the graph).
Note about the script:
The node ``n7`` is the root element and the graph is a symmetric binary tree, but the same algorithm would work with any number of children.
The only element in a tree that has no parent is the root; therefore all nodes that are children cannot be the root.
Aside: a graph with multiple "root-alike" nodes could be handled the same way.
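The set-difference idea can be written directly (a Python sketch rather than the attached script; the node names, with ``n7`` as the root, are made up to mirror the note above):

```python
# Hypothetical tree as a parent -> children mapping.
children_of = {
    "n7": ["n3", "n11"],
    "n3": ["n1", "n5"],
    "n11": ["n9", "n13"],
    "n1": [], "n5": [], "n9": [], "n13": [],
}

all_nodes = set(children_of)
# Flatten every child list into one set of nodes that have a parent.
all_children = {c for kids in children_of.values() for c in kids}
# A root is any node that never appears as somebody's child.
roots = all_nodes - all_children
print(roots)  # {'n7'}
```

If the switching topology truly guarantees a spanning tree, `roots` contains exactly one element at every moment; more than one element would signal the "root-alike" case mentioned in the aside.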
  • asked a question related to Graph Algorithms
Question
7 answers
Studying computer and communication networks in terms of graph structures is a field of current interest. In this direction, graph algorithms are investigated to dig deeper into the topic.
Relevant answer
Answer
Clustering techniques can also be applied to network decomposition.
  • asked a question related to Graph Algorithms
Question
4 answers
I want to compare two different graphs: one is the initial graph and the other is obtained after performing some perturbations on the first. That means I need a characteristic vector that tells me how similar or different these two graphs are.
Relevant answer
Answer
Thanks for your answer. I am adding and removing edges. Each edge has two characteristics: one is fixed and the other can change when an edge is removed or added (activated or deactivated). The second, variable characteristic defines which subset an edge belongs to.
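One concrete characteristic vector for comparing an initial graph against its perturbed version is the Laplacian spectrum: graphs on the same vertex set can be compared by the distance between their sorted eigenvalues. This is only one of several possible similarity measures; a sketch with numpy (the two small adjacency matrices are made-up examples):

```python
import numpy as np

def laplacian_spectrum(adj_matrix):
    """Sorted eigenvalues of the graph Laplacian L = D - A."""
    A = np.asarray(adj_matrix, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

def spectral_distance(A1, A2):
    """Euclidean distance between the two Laplacian spectra."""
    return float(np.linalg.norm(laplacian_spectrum(A1) - laplacian_spectrum(A2)))

# Initial graph: the triangle 0-1-2.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# Perturbed graph: the edge 1-2 removed.
B = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
print(spectral_distance(A, A))      # 0.0
print(spectral_distance(A, B) > 0)  # True
```

Edge attributes (the fixed/variable characteristics mentioned above) could be folded in by using a weighted adjacency matrix instead of a 0/1 one.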
  • asked a question related to Graph Algorithms
Question
4 answers
I need to draw a graph with three sets of data for two groups, but it is not working for me. I know how to do it when I need only one set of data. I would be thankful for some advice :-)
Relevant answer
Answer
I would do like this:
1)Windows -> New-> Category plot, and select X wave and Y wave from data series 1
2)Graph -> append to graph -> Category plot, and select only Y wave from data series 2 (X should be the same as data series 1)
I attach the graph I got from your data.
  • asked a question related to Graph Algorithms
Question
3 answers
I have completed a transcript abundance study comparing several genes between two groups, normalizing one group to '1'. I have calculated the standard deviation; however, since it is portrayed on a log scale, it appears disproportionate below '1' compared to above '1'.
Other groups have successfully published with this representation; however, I am stumped. Any help would be greatly appreciated.
The attached file is from:
CD24 tracks divergent pluripotent states in mouse and human cells.
Which is also found in the link.
Thank you!
Relevant answer
Answer
  • asked a question related to Graph Algorithms
Question
4 answers
I am working on a probabilistic graph algorithm where I need to select an edge with some probability. Similar to preferential attachment, I need to select an edge based on the degrees of its incident vertices, i.e. an edge connected to a high-degree vertex should be preferred.
I am not very good at probability. How do I arrive at a good value for the weight? Is there a systematic approach to calculating it?
Relevant answer
Answer
You assign the weights to edges as follows:
1. Compute the adjacency matrix of your graph.
2. Compute the absolute weight W of each edge by summing the degrees of its endpoints.
3. Set Z to the sum of all the W values.
4. Compute the relative weight P of each edge as P = W/Z, where W is the absolute weight of the edge.
5. Select edges according to their P.
This approach works. Proof:
0 <= P <= 1;
the sum of all P is 1;
and the more an edge is involved in the graph, the bigger its selection probability.
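The five steps above can be sketched in Python (the graph and its degrees are a made-up example; step 5 is realized by sampling a uniform value against the cumulative weights):

```python
import random

def pick_edge(edges, degree):
    """Select an edge with probability proportional to the summed endpoint degrees."""
    weights = [degree[u] + degree[v] for u, v in edges]  # absolute weights W
    total = sum(weights)                                 # Z = sum of all W
    r = random.uniform(0, total)                         # sample against cumulative P = W/Z
    acc = 0.0
    for edge, w in zip(edges, weights):
        acc += w
        if r <= acc:
            return edge
    return edges[-1]  # guard against floating-point round-off

# Hypothetical graph: a star at vertex 0 plus the pendant edge (3, 4).
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
degree = {0: 3, 1: 1, 2: 1, 3: 2, 4: 1}
print(pick_edge(edges, degree))  # one of the four edges; (0, 3) is the likeliest, with P = 5/16
```

Storing W alongside each edge avoids recomputing Z when only a few degrees change between selections.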
  • asked a question related to Graph Algorithms
Question
2 answers
Is there a subexponential algorithm for elliptic curves over F2^n?
The main question is whether or not the strategies described for the elliptic curve discrete logarithm can lead to a general subexponential discrete logarithm algorithm for elliptic curves in characteristic 2.
Relevant answer
Answer
For small-characteristic fields like F2^n there are better algorithms (quasi-polynomial and so on), much better than the otherwise subexponential ones, and they do not translate to the general case. For a good survey of the state of the art, see:
  • asked a question related to Graph Algorithms
Question
4 answers
I would like to enumerate all the 1-factors, or perfect matchings, of the complete directed graph Kn (the number of vertices is even).
Is there an algorithm or method to enumerate all the perfect matchings?
Relevant answer
Answer
Enumerating matchings in the complete undirected graph is easy, and in the directed version it is almost easier. Enumerating covers of the complete directed graph by directed cycles is not hard either, though the answer is not as simple as for matchings: it becomes a sum over Stirling numbers instead of essentially a binomial coefficient.
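For the undirected case, the standard recursion pairs the first unmatched vertex with each remaining vertex in turn; a Python sketch (for the directed reading, each chosen pair would additionally be oriented in both possible ways):

```python
def perfect_matchings(vertices):
    """Yield every perfect matching of the complete graph on `vertices`."""
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for i, partner in enumerate(rest):
        # Pair the first vertex with each remaining vertex, then recurse on the rest.
        remaining = rest[:i] + rest[i + 1:]
        for matching in perfect_matchings(remaining):
            yield [(first, partner)] + matching

matchings = list(perfect_matchings([1, 2, 3, 4]))
print(len(matchings))  # 3, i.e. (4-1)!! perfect matchings of K4
```

The count grows as the double factorial (2n-1)!!, so full enumeration is only feasible for small n.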
  • asked a question related to Graph Algorithms
Question
6 answers
I have a set of error data, and the error increases per inch (distance), but the data is not perfectly linear. I want to draw a number of straight lines on the plotted graph and then check which line has the most data points on or near it. How should I do it? The data is:
inch (distance)=[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14--------25 26 27];
error=[-5.5494 -6.7142 -4.2772 -4.4059 -3.7628 -4.2873 -3.2144 -2.058 -2.1906 -2.3534 -1.5007 -1.9778 -1.4085 0 0.6107 1.2219 2.0251 0.9636 1.98 2.1989 3.052 4.5974 3.4878 4.3893 4.8776 3.4843 2.5935];
The plotted graph is attached herewith.
Thanks in advance
Relevant answer
Answer
Exactly as suggested above, you have to approximate your straight line with a linear regression. The result is a straight line that does not automatically pass through a certain maximum number of points, but minimizes the deviation from the ideal line.
However, your data set may contain outliers: data points that are not compatible at all with the other points. Even with a linear regression, outliers may worsen your result. Therefore you need to plot your dataset, remove eventual outliers, and then find the linear regression.
For example, in this sequence of numbers (2, 2.03, 3, 1, 2, 1.8, 1000, 3.1, 1.5), 1000 is a potential outlier.
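The fit-inspect-refit procedure described above can be sketched with numpy (the data and the 2.5-standard-deviation cutoff are illustrative assumptions, not part of the original advice):

```python
import numpy as np

def robust_line_fit(x, y, z_thresh=2.5):
    """Fit a line, drop points whose residual is an outlier, then refit."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)                 # first pass
    residuals = y - (slope * x + intercept)
    keep = np.abs(residuals - residuals.mean()) <= z_thresh * residuals.std()
    return np.polyfit(x[keep], y[keep], 1)                 # refit without outliers

# Hypothetical data: the exact line y = 0.4x - 5 plus one gross outlier.
x = np.arange(10)
y = 0.4 * x - 5
y[4] += 50  # inject the outlier
slope, intercept = robust_line_fit(x, y)
print(slope, intercept)  # recovers roughly 0.4 and -5.0 once the outlier is dropped
```

For heavier contamination a robust loss (e.g. RANSAC or Huber regression) would replace the single reject-and-refit pass.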
  • asked a question related to Graph Algorithms
Question
2 answers
Graph bipartization by edge deletion: Given an edge weighted undirected graph G = (V,E), remove a minimum weight set of edges to leave a bipartite graph.
Relevant answer
Answer
Thank you so much
  • asked a question related to Graph Algorithms
Question
2 answers
Hi researchers, what is the best method for min-cost max-flow in terms of precision, time of execution, and memory capacity?
Thank you.
Relevant answer
Answer
Check out the site,
which has a rather nice description of how to do it efficiently.
  • asked a question related to Graph Algorithms
Question
12 answers
Dear all,
I am trying to fit sigmoidal curves to describe the relative growth rate (RGR) as a function of plant competition (PC). I am implementing three types of models: i) a competition-dependent model (IND), ii) a competition-dependent model with species-specific parameters (SP), and iii) a competition-dependent model based on life form (LF). The first can be considered the null model, because all individuals have the same growth rate regardless of species. In the second model the growth parameters are species-specific, meaning that each species has its own growth trajectory, and in the third they are life-form-specific, meaning that each functional group has its own growth trajectory. I am using non-linear fixed- and mixed-effects regression with nls and nlme in R, but I have problems with the model syntax and with how to choose the best model.
For example,
I have the three parameter sigmoid curve describe by this function sigmoid()
sigmoid <- function(x, a, b, c){
  a/(1+exp(-(x-c)/b))
}
in where x is PC, a is the asymptote of the RGR (the maximum RGR), b is the curve shape, and c is the inflection point.
From this initial parameters a=0.3, b=240, c=9, I can extract the three models
Model i (by nls):
model.ind <- nls(RGR ~ sigmoid(PC, a, b, c), data = data, start = c(a=0.3, b=240, c=9))
Model ii (by nlme):
model.sp <- nlme(RGR ~ sigmoid(PC, a, b, c), data = data, fixed = a + b + c ~ 1, random = a + b + c ~ 1|SP, start = c(a=0.3, b=240, c=9))
Model iii (by nlme):
model.lf <- nlme(RGR ~ sigmoid(PC, a, b, c), data = data, fixed = a + b + c ~ 1, random = a + b + c ~ 1|LF, start = c(a=0.3, b=240, c=9))
The question that I have is: Are the models syntax correct? Is correct use “a +b +c ~ 1|SP” or “a +b +c ~ 1|LF” as random values into the models to describe the goal ii and iii?
What variable allows me to choose the best model between these? AIC, BCI, Log-Likelihood, or RMSR? Why?
How I can perform the model in nlme without random effects?
Thank you very much for your help and time
Relevant answer
Answer
Pinheiro and Bates, "Mixed-Effects Models in S and S-PLUS", discuss this issue very well, so if you can get your hands on that book, do so. What I would suggest is that you use gnls for your non-random model and then set up your nlme model this way:
Model1 <- nlme(RGR ~ a/(1+exp(-(x-c)/b)),
               data = data,
               fixed = list(a~1, b~1, c~1),
               random = a+b+c ~ 1|Region/Site,
               method = "ML",
               start = c(0.3, 240, 9))
This forms your base nlme model. Notice that I used maximum likelihood (ML); this is so that you can use anova on your gnls model and your nlme model. What you can then do is see whether there are significant differences in model parameter estimates between species and life forms by assigning them to each parameter in turn. I suggest using the fixed effects for this rather than random, because I don't think Species or Lifeforms is random; you could assign species to the random effects, though, if there is a good reason for it. Assigning species to, say, the asymptote of the model would look like this:
Model1 <- nlme(RGR ~ a/(1+exp(-(x-c)/b)),
               data = data,
               fixed = list(a~SP, b~1, c~1),
               random = a+b+c ~ 1|Region/Site,
               method = "ML",
               start = c(c(value for species 1, value for species 2, ...), ## typically I use the same starting value from above and then put zeros in, one for each of the species ##
                         240,
                         9))
This approach will also allow you to determine if other covariates can be added to the fixed effects in order to improve the model. The advantage of using the formula you used is that there are biological interpretations to each parameter so if species have different asymptotes then you can determine that PC has different impacts on the maximum RGR for each species. 
Here is a link to the book - but it is behind a paywall.
Hope that helps!
  • asked a question related to Graph Algorithms
Question
9 answers
I want to know that are these problems equivalent?
Relevant answer
Answer
 Thanks all of you so much
  • asked a question related to Graph Algorithms
Question
9 answers
I need a tool to draw my thesis model.
Relevant answer
Answer
You asked for software-engineering modelling software, and the first two answers (AutoCAD) referred to 3D modelling.
I suggest Rational Software Architect.
  • asked a question related to Graph Algorithms
Question
1 answer
I want to:
1) implement a simple interface with QT that can load and display 2d polygons.
2) randomly sample some points inside the polygon.
Relevant answer
Answer
Hi, Dawar,
First, you should decide whether you want to draw the image using OpenGL in Qt or with a QPainter. Because the painter is easier, I will focus on it.
I assume you know how to use Qt Creator (the IDE for Qt). Create a new project, then put a QLabel component into the form (in design mode). This label serves as the space where the image will be displayed.
In the source code, you create an image and put it into the label with the following commands:
QImage im = QImage(width, height, format); // choose width, height and format
ui->label->setPixmap(QPixmap::fromImage(im.scaled(ui->label->width(), ui->label->height(), Qt::KeepAspectRatio, Qt::SmoothTransformation)));
Now you need to create a polygon. You can do it with the following lines, defining the points as you want:
QPolygon polygon;
polygon << QPoint(5, 30) << QPoint(2, 10) << QPoint(4, 20) << QPoint(5, 30);
Finally, you attach a painter to your image and paint the polygon into it. Of course, you need to check that the polygon lies inside the image. After that, you display the image with the line written above (by setting the image to the label):
QPainter painter(&im);
painter.drawPolygon(polygon);
painter.end();
You can play with the polygon: fill it, change the stroke, etc. That covers your first question; I hope it helps. If you want, I can write a working Qt project and send it to you. The second question is not clear to me; can you please describe it in more detail?
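For the second part of the question (randomly sampling points inside the polygon), a standard, toolkit-independent approach is rejection sampling against a ray-casting point-in-polygon test; a Python sketch (the square polygon is a made-up example):

```python
import random

def point_in_polygon(pt, poly):
    """Ray-casting test: count edge crossings of a horizontal ray from pt."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sample_in_polygon(poly, k, rng=random.Random(0)):
    """Rejection sampling: draw from the bounding box, keep interior points."""
    xs, ys = zip(*poly)
    pts = []
    while len(pts) < k:
        p = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if point_in_polygon(p, poly):
            pts.append(p)
    return pts

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
samples = sample_in_polygon(square, 5)
print(all(point_in_polygon(p, square) for p in samples))  # True
```

In Qt itself, `QPolygon::containsPoint` could replace the hand-written test; the rejection loop stays the same.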
  • asked a question related to Graph Algorithms
Question
2 answers
Relevant answer
Answer
@Stefan, GI has 50 years of history, but my context is a little different; for example, I am trying to sift permutations from sets which are not groups. I have checked. Please let me know if you have anything particular to share.
  • asked a question related to Graph Algorithms
Question
2 answers
I started with topological sorting, but the problem is that it needs start and stop points. I need to deal with a giant GTFS file; which algorithm can take any arbitrary stop as a starting point and sort all stops based on their location and time sequence? Please keep in mind that some routes have different trip patterns over the day or week.
Any hint will be appreciated,
Thanks
Relevant answer
Answer
Do you have any particular algorithm in mind, or any resources where I can read more?
  • asked a question related to Graph Algorithms
Question
3 answers
Hi everyone.
I have developed an R package named eemR (https://github.com/PMassicotte/eemR and https://cran.r-project.org/web/packages/eemR/index.html) which aims at providing an easy way to manipulate fluorescence matrices.
One of the functions in the package extracts peak values at different regions of the fluorescence matrix (Coble's peaks, for instance). I have noticed that the reported locations of these peaks are not consistent in the literature.
In Coble's 1996 paper, peaks are reported as follows (these are the values I am using in the R package):
Coble, P. G. (1996). Characterization of marine and terrestrial DOM in seawater using excitation-emission matrix spectroscopy. Mar. Chem. 51, 325–346. doi:10.1016/0304-4203(95)00062-3.
Peak B: ex = 275 nm, em = 310 nm
Peak T: ex = 275 nm, em = 340 nm
Peak A: ex = 260 nm, em = 380:460 nm
Peak M: ex = 312 nm, em = 380:420 nm
peak C: ex = 350 nm, em = 420:480 nm
In Coble's 2007 paper, peaks are reported as follows:
Coble, P. G. (2007). Marine optical biogeochemistry: The chemistry of ocean color. Chem. Rev. 107, 402–418. doi:10.1021/cr050350+.
Peak B: ex = 275 nm, em = 305 nm
Peak T: ex = 275 nm, em = 340 nm
Peak A: ex = 260 nm, em = 400-460 nm
Peak M: ex = 290-310 nm, em = 370-410 nm
peak C: ex = 320 nm, em = 420:460 nm
Peak B: ex = 270 nm, em = 306 nm
Peak T: ex = 270 nm, em = 340 nm
Peak A: ex = 260 nm, em = 450 nm
Peak M: ex = 300 nm, em = 390 nm
peak C: ex = 340 nm, em = 440 nm
At first, these differences seem to be minors but I was wondering what were your thoughts about that. Should I review my code to change or adjust peak positions?
Relevant answer
Answer
Hi Philippe,
I agree that it's better to find maximum (or average) in the specific region, not the specific position.
There are some related references which may be helpful.
One is from Coble's new book: Aquatic Organic Matter Fluorescence (2014).
Another is Leenheer's EST paper (2003).
And you can also refer to Chen&Westerhoff&Leenheer's EST paper (2003) .
Best Regards
Penghui
  • asked a question related to Graph Algorithms
Question
5 answers
A minimal spanning path in a graph is a path that contains all the vertices of the graph and whose weight is the least among all spanning paths.
Relevant answer
Answer
This question is closely related to finding a Hamiltonian path; a minimum-weight Hamiltonian cycle is what the famous travelling salesperson problem asks for. Deciding whether a Hamiltonian path exists at all is NP-complete, so no polynomial algorithm is known for finding a minimum-weight spanning path either.
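Because the problem is the path version of the TSP, only exponential exact methods are known in general; a brute-force sketch in Python over all vertex orderings (the weighted K4 is a made-up example):

```python
from itertools import permutations

def min_spanning_path(vertices, weight):
    """Brute force over all vertex orderings: O(n!), feasible only for tiny graphs,
    consistent with the problem being as hard as Hamiltonian path."""
    best_path, best_cost = None, float("inf")
    for order in permutations(vertices):
        # Sum the weights of consecutive pairs along this candidate path.
        cost = sum(weight[frozenset(e)] for e in zip(order, order[1:]))
        if cost < best_cost:
            best_path, best_cost = list(order), cost
    return best_path, best_cost

# Hypothetical complete weighted graph on 4 vertices.
w = {
    frozenset((0, 1)): 1, frozenset((0, 2)): 4, frozenset((0, 3)): 3,
    frozenset((1, 2)): 2, frozenset((1, 3)): 5, frozenset((2, 3)): 1,
}
path, cost = min_spanning_path(range(4), w)
print(cost)  # 4, achieved by the path 0-1-2-3
```

A Held-Karp style dynamic program would cut this to O(n^2 2^n), which is still exponential but far better than n! in practice.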
  • asked a question related to Graph Algorithms
Question
6 answers
I need an algorithm to check whether a DFS code is the minimum DFS code for a graph. Thanks.
Relevant answer
Answer
Hi,
The gSpan algorithm is based on the DFS code concept. I'm sharing the implementation details for this algorithm with you; I hope it will solve your problem.
Regards,
Waqas
  • asked a question related to Graph Algorithms
Question
10 answers
Hi,
I would like to know the best algorithm for finding dense subgraphs in a large graph.
Is there a linear and/or polynomial algorithm available for this problem?
Thanks
Relevant answer
Answer
This is a great question! It *is* possible to find the densest subgraph in polynomial time by applying network flow algorithms. The related work section of this paper gives some nice references, and they give experimental results: "Fast Algorithms for Pseudoarboricity", http://dx.doi.org/10.1137/1.9781611974317.10
You may also be interested in this paper: "Finding Dense Subgraphs with Size Bounds", http://dx.doi.org/10.1007/978-3-540-95995-3_3
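Besides the exact flow-based algorithms cited above, a popular alternative is Charikar's greedy peeling heuristic, which 2-approximates the maximum average-degree density |E(S)|/|S| in near-linear time; a Python sketch (the graph is a made-up example):

```python
import heapq

def densest_subgraph_greedy(adj):
    """Greedy peeling: repeatedly delete a minimum-degree vertex and remember
    the densest intermediate subgraph (a 2-approximation of the optimum)."""
    adj = {u: set(vs) for u, vs in adj.items()}
    degrees = {u: len(vs) for u, vs in adj.items()}
    heap = [(d, u) for u, d in degrees.items()]
    heapq.heapify(heap)
    m, n = sum(degrees.values()) // 2, len(adj)
    alive = set(adj)
    best_density, best_size = (m / n if n else 0.0), n
    removal_order = []
    while alive:
        d, u = heapq.heappop(heap)
        if u not in alive or d != degrees[u]:
            continue  # stale heap entry
        alive.discard(u)
        removal_order.append(u)
        m -= degrees[u]
        for v in adj[u]:
            if v in alive:
                degrees[v] -= 1
                heapq.heappush(heap, (degrees[v], v))
        if alive and m / len(alive) > best_density:
            best_density, best_size = m / len(alive), len(alive)
    # Vertices not peeled before the best snapshot form the dense subgraph.
    return best_density, set(adj) - set(removal_order[: n - best_size])

# Hypothetical graph: K4 on {0,1,2,3} plus the pendant vertex 4.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4], 4: [3]}
density, nodes = densest_subgraph_greedy(adj)
print(density, sorted(nodes))  # 1.5 [0, 1, 2, 3]
```

The whole graph has density 7/5 = 1.4, while peeling the pendant vertex exposes the denser K4 with density 6/4 = 1.5.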
  • asked a question related to Graph Algorithms
Question
5 answers
There should be an algorithm which takes a program as input and automatically converts it into a control flow graph; I need to access and modify the control flow graph using the algorithm in MATLAB.
Relevant answer
Answer
Most likely a pictorial or graph representation of the MATLAB script.
  • asked a question related to Graph Algorithms
Question
21 answers
I only know that K1 is a 0-connected graph and that every disconnected graph is 0-connected. So I was wondering: is the graph K1 connected or disconnected?
Relevant answer
Answer
Saikat Bisai: sorry, but v on its own from K1 is NOT a pair of nodes. The correct answer is that NO such pairs exist, so the set of these pairs is the empty set. And since any statement is true of every element of an empty set, each pair of these nodes (vacuously) has a path between them.
I think this should be quite obvious.
  • asked a question related to Graph Algorithms
Question
11 answers
I am trying to create an algorithm to calculate shortest paths in a dynamic graph. Dynamic means that edges of the graph can be removed or added at any time; the value of an edge can also change.
Relevant answer
Read the following literature 
Zavlanos, M.M.; Pappas, G.J., "Controlling Connectivity of Dynamic Graphs," in Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC '05. 44th IEEE Conference on , vol., no., pp.6388-6393, 12-15 Dec. 2005
doi: 10.1109/CDC.2005.1583186
  • asked a question related to Graph Algorithms
Question
3 answers
For a given (unweighted) graph G(V,E) and an integer d > 1, how can we find a connected subgraph H of G with maximum |V(H)| satisfying ∆(H) ≤ d (i.e. H is connected, has the maximum number of nodes, and every node of H has degree at most d)?
Could anyone please suggest an approximation algorithm for finding a subgraph H satisfying the above constraints?
Relevant answer
Answer
Dear Ma'am,
It would save me time if you could provide some useful references in which the concept of algebra has been used to solve the above-mentioned problem.
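One useful observation: since H need not be an induced subgraph, it suffices to grow a *tree* whose degrees stay ≤ d (with d = 2 this already contains the Hamiltonian-path problem, so the question is NP-hard in general). Below is a hedged greedy heuristic, with no approximation guarantee claimed: from each seed vertex, repeatedly attach any outside vertex to a tree vertex that still has spare degree (illustrative names, plain adjacency dicts):

```python
def bounded_degree_tree(adj, d):
    """adj: {v: set(neighbours)}. Greedily grow a tree with max degree <= d
    from every seed; return the largest vertex set found (a heuristic)."""
    best = set()
    for seed in adj:
        tree_deg = {seed: 0}               # vertex -> degree inside the tree
        changed = True
        while changed:
            changed = False
            for u in list(tree_deg):
                if tree_deg[u] >= d:
                    continue               # u has no spare degree left
                for v in adj[u]:
                    if v not in tree_deg and tree_deg[u] < d:
                        tree_deg[v] = 1    # v joins as a leaf
                        tree_deg[u] += 1
                        changed = True
        if len(tree_deg) > len(best):
            best = set(tree_deg)
    return best

# Star K_{1,4} plus an edge between two of its leaves. With d = 2 the
# centre may keep only two incident edges, so at most 4 vertices fit
# (vertices 3 and 4 both attach only through the centre 0).
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0}, 4: {0}}
H = bounded_degree_tree(adj, 2)            # |H| == 4 here
```

For guarantees one would look at the literature on degree-constrained spanning trees (e.g. Fürer-Raghavachari-style local search), which is a close relative of this problem.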
  • asked a question related to Graph Algorithms
Question
12 answers
Suppose there is a connected and undirected graph $G$ with n (n >= 4) vertices. Let f(G') be the number of connected components of a graph $G'$; then $f(G)=1$.
Now, under the condition that every vertex of $G$ has at least 3 adjacent vertices (no loops),
can we separate $G$ into two edge-disjoint spanning subgraphs $G1$ and $G2$ satisfying f(G1)+f(G2) <= [2n/3]?
By "separate" I mean splitting the edges into two parts, and thereby splitting the degrees of the vertices as well.
That is to say, if a subgraph contains the edge $e$, then I consider it to contain the vertices of $e$.
I think the answer should be yes, and the bound on f(G1)+f(G2) may be lower, but I have no idea how to prove it. Mathematical induction seems to be no help. Can anyone help me? Thank you very much!
Relevant answer
Answer
It is not hard to prove that every simple graph G (with no multiple edges), which is not necessarily connected, with minimum degree >= 3, can be decomposed to G1 and G2 such that f(G1)+f(G2) <= n/2. This bound is sharp since it is attained on 3-regular graphs.
Suppose G is a counterexample to this statement with the fewest edges. Then G is connected. Denote by D=D(G) the maximum degree of G.
If G has two adjacent vertices of degree >=4, then removing the edge between them yields a smaller counterexample, which contradicts the minimality of G. So we can assume that any two vertices of degree >=4 are non-adjacent.
If D<=4, then for every subset X of vertices in G the subgraph H induced by X has at most 2|X|-2 edges. Indeed, it is trivial if D(H)<=3, and if H has a 4-vertex, then all neighbours of this vertex have degree <=3, so the sum of degrees of all vertices of H is at most 4|X|-4, which implies that the number of edges is at most 2|X|-2. Therefore, by Nash-Williams Theorem, the set of edges of G can be partitioned into two forests F1 and F2. Since G has at least 3n/2 edges and a forest with m edges consists of precisely n-m connected components we have f(F1)+f(F2) <= 2n - 3n/2 = n/2.
Finally, assume that D>=5. Let V be a vertex of degree D in G. Since all neighbours of V have degree 3, we can find two of them, say A and B, that are not joined by an edge. Insert an edge AB and remove edges VA and VB from G, i.e. set G' = G-{VA,VB}U{AB}. Note that G' is a simple graph with minimum degree 3 having fewer edges than G. Thus, G' has a desired 2-partition (= 2-colouring) of its edges. W.l.o.g. assume that AB is coloured 1. We take the colouring of G'-{AB} and we colour VA and VB by 1. In the resulting edge-colouring of G the set of connected components of colour 2 is the same as in G' and the number of components of colour 1 is not greater than in G'. Indeed, the number of components of colour 1 in G'-{AB} can be only by 1 greater than in G' and only if A and B are in different components of G'-{AB}. However, in G = (G'-{AB})U{VA,VB} the vertices A and B are joined by the path (A,V,B) of colour 1, so they are in the same component of colour 1. This implies that if we denote the subgraphs of G induced by colours 1 and 2 by G1 and G2 respectively, then f(G1)+f(G2) <= n/2, as desired.
I believe that for graphs with minimum degree >=4 the bound can be improved to 2n/5, and I have an idea how to prove it.
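The sharpness claim above can be checked by brute force on the smallest 3-regular graph, K4 (n = 4, so the claimed sharp bound is n/2 = 2). The sketch below enumerates all 2-colourings of the 6 edges; f counts components of the spanning subgraph with isolated vertices included, matching the forest formula f = n - m used in the Nash-Williams step (all names are illustrative):

```python
from itertools import combinations, product

def components(n, edge_list):
    """Number of connected components of a spanning subgraph on 0..n-1,
    computed with a simple union-find (isolated vertices count)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edge_list:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)})

n = 4
edges = list(combinations(range(n), 2))     # the 6 edges of K4
best = min(
    components(n, [e for e, c in zip(edges, colours) if c == 0])
    + components(n, [e for e, c in zip(edges, colours) if c == 1])
    for colours in product((0, 1), repeat=len(edges))
)
# best == 2 == n/2: the bound f(G1)+f(G2) <= n/2 is attained on K4,
# e.g. by splitting K4 into two edge-disjoint spanning paths.
```

Each part is a spanning subgraph with at least one component, so 2 is a trivial lower bound; the enumeration confirms it is achieved, i.e. the n/2 bound is tight here.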