# Complexity Theory - Science topic

Explore the latest questions and answers in Complexity Theory, and find Complexity Theory experts.
Questions related to Complexity Theory
• asked a question related to Complexity Theory
Question
I have a new idea (combining a well-known SDP formulation with a randomized procedure) for an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $2 - \epsilon$.
You can see the abstract of the idea in the attached file and the latest version of the paper at https://vixra.org/abs/2107.0045
I would be grateful if anyone could give me informative suggestions.
Dear Majid Zohrehbandian,
I suggest you see the links on the topic.
Best regards
• asked a question related to Complexity Theory
Question
We see at least two very dangerous features in post-covid China:
1) As we show in the attachments JEE 2020 and ICC 2020, governments should be very careful in trying to control prices or to fix maximum/minimum thresholds. Price dynamics in complex economies condense a lot of scattered information, are an emergent property, and, under certain circumstances, help correct disequilibria (see the papers attached). Because of cumulative wrong centralized decisions, disequilibria are multiplying in the Chinese economy (industrial, construction, and energy sectors), and the authorities are not allowing prices to act as correcting re-adjustment signals. This is really dangerous, as we show in the papers ICC 2020 and JEvEcs 2020 attached (by Almudi et al.).
2) Secondly, as we show in Metroeconomica 2020 (also attached), in a context of increasing prices (shortages of energy and post-covid bottlenecks in global value chains), with high stocks of private debt, and everything developing within an otherwise innovative economy with low (but increasing) interest rates, the probable slight increase in inflation rates expected for the upcoming months will set off a domino effect, with emergent "big rips" in the Chinese socio-economic system.
1) and 2) may announce a long (decade-long) stagnation of the Chinese economy. It seems that the European Union is perhaps the last worldwide agent to notice this. China is no longer a clear option. Still, is China too big to fail?
• asked a question related to Complexity Theory
Question
Business management (and operations) has many intertwined aspects, which constantly interact with each other, raising its complexity as a 'system'. Modelling a complex system is difficult due to dependencies and adaptive behaviour. However, such complex 'systems' self-reorganise and become sustainable. A closely related concept, chaos, indicates that a change in initial conditions can bring out randomness, even under deterministic laws. Though chaos and complexity theories are interrelated and multi-disciplinary in nature, they have found very little application in business management research.
The onset of Covid-19 pandemic has presented a unique social context for chaos and complexity.
Fellow scholars of this RG are requested to highlight:
a) recent trends in research in this area (how is chaos measured and analysed?)
b) recent applications of chaos and complexity theories in the field of business management
c) modelling techniques, related to chaos and complexity theories
This answer focuses on only one aspect of your question, which will help to narrow your search domain: entropy measures. Measures based on the notion of entropy are being developed and increasingly applied within complex systems research.
Entropy measures were originally developed by Ludwig Boltzmann and Claude Shannon in two disciplines: statistical physics and information theory.
Over roughly the last forty years, a whole array of complexity measures has been developed. They measure the observed system, build some distribution of measurable properties, and insert that distribution into the well-known Boltzmann or Shannon entropy formulas.
This procedure helps to assess the state of the observed system even without being able to observe its every single detail. Exactly how this is done can be found in our paper on the prediction of Torsades de Pointes arrhythmia, where all details of the procedure are explained.
A very important note: very probably your measures will be impossible for humans to classify. This leads to the application of AI and machine-learning methods, which are described in the same paper.
The notion of entropy has proven a very valuable way of classifying the operational mode of systems. Definitely something you should think about in your case.
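To make the recipe concrete, here is a minimal Python sketch of the general procedure described above (observe a quantity, bin the observations into a distribution, insert that distribution into the Shannon formula H = -sum p_i log2 p_i). The binning scheme and bin count are my own illustrative choices, not the ones used in the Torsades de Pointes paper:

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=10):
    """Entropy (in bits) of a histogram built from the observed samples."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0   # degenerate case: all samples equal
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    # Shannon entropy of the empirical distribution over the bins
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant signal gives entropy 0, while a signal spread evenly over the bins gives the maximum log2(bins), so the number acts as a rough "disorder" score for the observed system without requiring every detail of it.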
• asked a question related to Complexity Theory
Question
During my PhD studies I wondered at what point a system becomes complex or even chaotic. I conducted a laboratory study under defined boundary conditions, on fairly homogeneous rock samples, so I could predict pretty well how the samples would behave. Then I read a publication about fractals in geomechanics, the complexity of systems and chaotic behaviour. Obviously complex or chaotic behaviour increases with the number of variables and uncontrollable factors.
But what is the point where a system becomes complex? Is it a matter of the size? Is it a matter of the number of variables? Is it a matter of the view point? Is there any quantification when a system becomes complex or even chaotic?
Examples are worth thousands of words.
1) Logistic map
X(n+1) = a * X(n) * (1 - X(n))
exhibits chaotic behavior and period doubling. Is this map simple enough?
2) John Conway's 'Game of Life' (google this)
It is a simple cellular automaton that creates emergent structures; gliders are one such emergent structure. They can live on a chessboard. Is this simple enough?
From this and many other biological examples we can see that the world is full of simple systems expressing chaos and complexity. We are just unable to observe them directly due to our insufficient instruments.
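Example 1) is easy to play with in a few lines of Python. A minimal sketch (the parameter a = 4, where the map is fully chaotic, is my own choice for illustration): two orbits starting 10^-9 apart become macroscopically different within a few dozen iterations, which is exactly the sensitive dependence on initial conditions discussed above.

```python
def logistic_orbit(a, x0, steps):
    """Iterate x_{n+1} = a * x_n * (1 - x_n) and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1] * (1 - xs[-1]))
    return xs

a = 4.0                                      # chaotic regime of the logistic map
orbit1 = logistic_orbit(a, 0.2, 50)
orbit2 = logistic_orbit(a, 0.2 + 1e-9, 50)   # tiny perturbation of the start
gap = max(abs(u - v) for u, v in zip(orbit1, orbit2))  # grows to order one
```

Lowering a instead (through 3.0, about 3.449, about 3.544, ...) shows the period-doubling cascade mentioned in the example.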
• asked a question related to Complexity Theory
Question
Can we apply theoretical computer science to prove theorems in mathematics?
• asked a question related to Complexity Theory
Question
As we know, the Cauchy integral formula in the theory of complex functions holds for a complex function which is analytic on a simply connected domain and continuous on its boundary. This formula appears in many textbooks on complex functions.
My question is: where can I find a generalization of the Cauchy integral formula for a complex function which is analytic on a multiply connected domain and continuous on its boundary?
In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function.
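For reference, the standard generalization to a multiply connected domain (stated here from memory, so worth checking against a complex analysis textbook; Gakhov's "Boundary Value Problems" treats this case): if $D$ is bounded by an outer contour $C_0$ and inner contours $C_1, \dots, C_n$, all oriented counterclockwise, and $f$ is analytic on $D$ and continuous up to the boundary, then for $z \in D$

```latex
f(z) \;=\; \frac{1}{2\pi i} \oint_{C_0} \frac{f(\zeta)}{\zeta - z}\, d\zeta
\;-\; \sum_{k=1}^{n} \frac{1}{2\pi i} \oint_{C_k} \frac{f(\zeta)}{\zeta - z}\, d\zeta .
```

Equivalently, one integrates over the whole boundary $\partial D$ oriented so that $D$ stays on the left.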
• asked a question related to Complexity Theory
Question
Can complexity theory solve problems in mathematics, completely or partially?
I read your notes, but I got nothing out of them! You described some open problems. Complexity theory is useful when there is an algorithm to tackle the problem.
First, you need to show a concrete theory and then build your algorithm, with a suitable time complexity, to support your proofs.
We have nothing to do with complexity in the absence of the theory.
Best regards
• asked a question related to Complexity Theory
Question
Hi,
I am currently doing a study about using and learning English in multilingual cities (e.g. Sydney, Australia; Auckland, NZ). I am particularly interested in how a big L1 community and frequent exposure to an L1-using environment can influence people's English language development when studying and living in a multilingual city.
Is there an existing theory or framework about this topic, or about learning and using English in a multilingual society?
A book that might help you with some of these issues is Rose, H & Galloway, N. (2019) Global Englishes for Language Teaching. Cambridge University Press.
• asked a question related to Complexity Theory
Question
I am preparing to write an article about the development of the basic assumptions or way of thinking toward organisations and people that underpin Strategic Management.
About the development of Strategic Management, I have read some articles and books relating to systems theory and complexity theory. Can anybody recommend a book or article that systematically describes the development of Strategic Management and the characteristics of each phase? Thank you very much.
A more general book summarising underlying management concepts and paradigms may be a good starter, too:
Morgen Witzel: A History of Management Thought, 2nd ed. 2016.
• asked a question related to Complexity Theory
Question
A) As evolutionary-NeoSchumpeterian (or complexity-oriented) economists, we conceive of the economy as a dynamic system in which scattered, heterogeneous, boundedly-rational agents interact. Local and global interactions, involving feedbacks and domain-specific connections, encompass producing, investing, consuming, distributing incomes, trading in general, learning, innovating, entry/exit, etc. And the ongoing development of the specific dynamics we propose in order to explore a problem generates "EMERGENT PROPERTIES".
B) The methodologies we use range from verbal logical arguments (which of course can be genuinely complex) to complex ABMs, passing through non-linear highly stylized models, replicator dynamics and evolving complex networks with the afore-mentioned components.
C) The specific methodology used is not innocuous. Thus, whereas verbal arguments involving real complexity are often almost inextricable, ABMs are a bit more enlightening (the less so the higher the scale), and, in my opinion, the subset of low-scale ABMs, enriched replicator dynamics, networks and non-linear stylized complex models is the best: they often even allow for closed-form, quasi-exhaustive analysis.
D) The problem is how should we pass from the results we obtain in our theory, to the posing of policy recommendations to be implemented within a reality which we perceive as emerging from a complex system?
Notice that there are two sources of complexity (2 complex realms involved):
1) The inherent complexity of the real system under scrutiny.
2) The often black-boxed complexity of the theory we propose.
We know that even small differences between two evolving complex systems can make a huge difference in their outcomes. If we assume (as we should) that we can never access the "real complex mechanism underlying reality" (we should merely aspire to approach it, at least in the social sciences), then we should be very prudent in our policy prescriptions.
E) The solution prescribed by those using simple models (mainstream economic models or simple statistical models) is not valid, since they begin by assuming that reality is SIMPLE (instead of complex), and they falsely avoid the problem. Why should social reality be simple in its functioning? The historical record of crises and social distortions, and the analogies with natural systems, point to a clear failure of the standard approach. Thus, if we accept complexity:
How do you address the issue of the double complexity 1) and 2)?
Surely this is an extremely interesting and important theme, but I do not have enough time today to discuss the points in detail. The following are just the impressions I had when reading Francisco Fatas-Villafranca's and Isabel Almudi's comments:
(1) It may not be very wise to consider policy advisers' misconduct and the epistemological questions of complex systems and complexity at the same time. I understand that you face actual problems on these points, but I feel that epistemology must precede the question of good policy recommendations.
(2) Francisco thinks in terms of two realms. I wonder if this is a good framework when we examine complex systems and behaviors within a complex system. I have been thinking that we should distinguish three "realms" of complexity:
(a) complexity of an objective system (e.g. national or world economy)
(b) behavior of humans in a complex situation (e.g. how we behave in a complex world; what does complexity mean for us, human beings?)
(c) theory or models of complex systems (e.g. economics and economic of complexity in particular)
(3) Simple rules or behaviors may engender complex processes.
See for example Ronald A. Heiner (1983) The Origin of Predictable Behavior. The American Economic Review 73(4): 560-595.
(4) My general impression vis-à-vis the Santa Fe Institute is that they lack a good theory of human or animal behavior. Francisco and Isabel may be right when they claim that it considers too simply the correspondence between the real economy and its models. ABM or ABS should be used mainly to understand the working of complex systems; at the current stage of our discipline, we should refrain from using them as a means of prediction. Direct application of their simulations is extremely misleading when we do not understand the real process and characteristics of the real system. ABS is a necessary and promising means of economics research, but we should also keep in mind that it is still at an embryonic, premature stage.
See my paper: A Guided Tour of the Backside of Agent-Based Simulation.
• asked a question related to Complexity Theory
Question
Synchronization and memory costs are becoming humongous bottlenecks in today's architectures. However, algorithm complexity analyses treat these operations as constant, done in O(1) time. What are your opinions in this regard? Are these good assumptions in today's world? Which algorithm complexity models assume higher costs for synchronization and memory operations?
Just adding to what was previously said: in special-purpose computers (embedded systems) it is possible to have hardware-based operations that are not necessarily O(1) (or any other complexity order, for that matter), unlike their general-purpose counterparts. Such is the case in certain devices designed for cryptology applications.
• asked a question related to Complexity Theory
Question
Is there any quantum entanglement based solution to simulate the dynamics of classically interacting three bodies?
@ Dr. Karmakar
….. however, thanks to your counter-question, I have realized that a classical problem which is not tractable with a classical algorithm does not have a quantum solution either. Thank you.
• asked a question related to Complexity Theory
Question
Stepwise multiple regression is used to assess the extent to which a dependent variable can be predicted by a combination of variables. It computes and identifies which variables contribute to explaining and predicting the dependent variable. It generates the R and R-squared values and so on to indicate how much variation the combination of variables, or an outstanding variable, accounts for. Can someone who holds a Chaos or Complexity theory view of how a foreign language is learnt use stepwise multiple regression? Does that make sense? For example, I am aware that there are many factors, such as the learners' cultural backgrounds and reasons for learning, which influence second language learners' attitudes towards their teachers' instruction. These factors, perhaps together with other unknown factors, interact with one another in complex systems, and the interactions and their results are unpredictable. The purpose of using statistical methods is to understand part of the story between some factors. Besides, stepwise multiple regression does not intend to measure a factor by isolating it from others. Does this make sense?
Or, at least, he/she can use Chaos or complexity theory to discuss the findings?
I would be very sceptical about this, because complexity perspectives and multiple regression seem to be underpinned by very different epistemological beliefs. Multiple regression is underpinned by a linear understanding of causality. Complexity perspectives challenge this view (in fact, some people use Complexity theory interchangeably with 'non-linear science'). Multiple regression attempts to predict, but complexity holds that prediction is impossible. Stepwise multiple regression is all about removing variables, one at a time, in order to find the ones that are most likely to predict outcomes. Complexity theory argues that every variable, no matter how small, can have a disproportionate and unpredictable effect, and that it is therefore more productive to view the system as a whole.
• asked a question related to Complexity Theory
Question
So far, min-max optimization has several methods to prove its complexity. Do you have any suggestions to prove min inside min optimization complexity?
Thank you. I think my problem is close to the Traveling Salesman Problem.
• asked a question related to Complexity Theory
Question
Let's say we have implemented an algorithm and recorded its execution time while changing the input value. How can we deduce the cost function of that algorithm?
The cost function is an input, not an output. By monitoring how the execution time varies with some property that characterizes the complexity of the input data, it is possible to learn about the complexity of the program that implements the algorithm.
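One hedged way to put this answer into practice, sketched in Python: instead of wall-clock time (which is noisy), count the elementary operations an implementation performs at several input sizes and estimate the exponent from a log-log slope. Bubble sort below is purely a stand-in for "the algorithm we implemented"; your own code would be instrumented or timed the same way.

```python
import math

def bubble_sort_ops(n):
    """Comparisons bubble sort performs on a worst-case (reversed) input."""
    data = list(range(n, 0, -1))
    ops = 0
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            ops += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return ops

sizes = [100, 200, 400, 800]
counts = [bubble_sort_ops(n) for n in sizes]
# Slope of log(cost) vs log(n) estimates k in a cost function ~ C * n^k;
# here it comes out close to 2, matching bubble sort's O(n^2) behaviour.
slope = math.log(counts[-1] / counts[0]) / math.log(sizes[-1] / sizes[0])
```

The same fit on measured wall-clock times works too, but only as an approximation: constants, caches and the choice of input-size measure all distort the estimate.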
• asked a question related to Complexity Theory
Question
Hello,
Does this project aim to address the theory of dynamic systems from the point of view of pedagogy, or does it intend to study the possibilities for reformulations and re-conceptualizations of pedagogy from the perspective of the theory of complex systems?
In either case, I think that this project is very interesting, and useful too for the knowledge society.
Sincerely,
Bogdan Nicolescu
Thank you, Bogdan!
I am very happy to have found even one person in the world who understands systems theory and systems thinking, and even the fact that systemic views are needed for understanding pedagogy and education - so many intensions and extensions at the same time. Otherwise only small details are seen - and that means a narrow way of thinking.
sincerely yours,
professor Ulla Härkönen
Finland
• asked a question related to Complexity Theory
Question
It seems that the paradigm of the Social Determinants of Health is no longer enough to explain health and the dynamics of disease. Is it time to propose new and better models of explanation?
The way the initial question is posed makes me wonder if you thought that social determinants of health are supposed to explain all vulnerability to disease or all the factors needed for being healthy. The first three responses to the question all indicate that developing disease or staying healthy has multiple types of determinants. And, indeed, these can interact. Take a very simple disease such as influenza which right now is occurring in many parts of the world. The immediate cause of the disease is infection by the influenza virus. The virus potentially can infect anyone who does not have sufficient antibody to the particular strain of the virus that is "in circulation." But, the likelihood of exposure to the virus depends on the likelihood of contact with someone who is infectious. That, in turn, can be affected by the local population density - such as the number of persons who share a household. The severity of the disease can be affected by other factors such as poor nutritional status. So, even in this simple example there are multiple types of determinants of health and disease and severity, or impact, of disease.
It is also worth remembering that disease and health are not just physical states but also emotional states. There are factors that affect mental health. Furthermore, mental health and physical health can interact.
All this is well-known; and yet, I do not believe that anyone can state that we know exhaustively all the factors or possibly even the types of factors that can affect health and disease.
• asked a question related to Complexity Theory
Question
Food is multidimensional. In order to understand what food is and what it means, only a holistic approach seems suitable, which means putting together knowledge and methodologies from disciplines like history, economics, sociology, anthropology, psychology, agronomy, nutrition, ecology and so on. But this also means being able to deal with all these approaches at the same time. So, is there any academic research that tries to link complexity and systems theory with food research?
Thank you, Maryann! You got the point. I will have a look at Thayer's book for sure. Thank you very much. And thanks very much to you, Rita.
Best regards
• asked a question related to Complexity Theory
Question
I would like to change the following linear programming model to restrict the decision variables to two integers, namely a and b (a<b):
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
where Y is an n-dimensional vector, Z is an n \times k matrix and x is a k-dimensional vector. e represents an n-dimensional vector of errors which needs to be minimized. In order to make sure that the x's can only take the values a or b, I have added the following constraints, keeping the original LP formulation:
-a/(b-a) - (1/2)' + I/(b-a) x > -(E/(b-a) +(1/2)')
-(-a/(b-a) - (1/2)' + I/(b-a) x ) > -(E/(b-a) +(1/2)')
where I stands for a k \times k identity matrix and E is a k-dimensional vector of deviations which needs to be minimized (subsequently, the objective would be to minimize (1,1,...,1)' (e; E)).
But there is still no guarantee that the resulting optimal vector consists only of a's and b's. Is there any way to fix this problem? Is there any way to give a higher level of importance to the two latter constraints than to the two former ones?
Dear Fatemeh,
Maybe I am getting too late into this discussion. What I would do is to solve the following problem P:
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
a <= x_i <= b for each vector variable component
and programming myself a simple branch and bound algorithm:
1. Solve P
2. check whether some variable x_i has a value that is different from a or b.
3. (branching) Say you find some x_t with a < x_t < b. Then solve the following two problems:
P1
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
x_t = a
P2
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
x_t = b
4. Perform step 3 on the solutions of all problems you have.
5. (bounding) Stop branching if:
- the solution of a problem is infeasible
- all variables in the solution have values either a or b ("integer" solution)
- the solution of the problem still contains variables with values different to a or b, but the objective is worse than a previously found "integer" solution.
I know it is brute force, but it will keep the structure of your problem and will guarantee what you want. And it is very easy to program.
Hope it helps.
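The five steps above can be sketched in pure Python. One deliberate simplification, so the sketch stays self-contained: instead of solving the LP relaxation at each node, it bounds a node with crude interval arithmetic over the still-free variables (each free x_i ranges over [a, b]). That is a weaker bound than the LP gives, but the branching/bounding skeleton is the same one described in the answer.

```python
# minimize sum_j |Y_j - (Z x)_j|  subject to  x_i in {a, b}
def node_lower_bound(Y, Z, fixed, a, b):
    """Lower bound on the objective when unfixed x_i range over [a, b]."""
    bound = 0.0
    for j, row in enumerate(Z):
        lo = hi = Y[j]                       # interval for residual Y_j - (Zx)_j
        for i, z in enumerate(row):
            if fixed[i] is not None:
                lo -= z * fixed[i]
                hi -= z * fixed[i]
            else:
                lo -= z * (b if z > 0 else a)
                hi -= z * (a if z > 0 else b)
        # |residual| is 0 if the interval straddles 0, else distance to 0
        bound += 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))
    return bound

def branch_and_bound(Y, Z, a, b):
    k = len(Z[0])
    best = [float("inf"), None]              # incumbent objective and solution

    def recurse(fixed, depth):
        lb = node_lower_bound(Y, Z, fixed, a, b)
        if lb >= best[0]:                    # step 5: bound, prune this node
            return
        if depth == k:                       # all variables in {a, b}: candidate
            best[0], best[1] = lb, list(fixed)
            return
        for v in (a, b):                     # step 3: branch on x_depth
            fixed[depth] = v
            recurse(fixed, depth + 1)
        fixed[depth] = None

    recurse([None] * k, 0)
    return best[0], best[1]
```

On a toy instance Y = (2, 5, 7), Z = ((1,0),(0,1),(1,1)), a = 2, b = 5 it recovers x = (2, 5) with objective 0. For real instances, replacing the interval bound with the LP relaxation of steps 1-2 (e.g. via an LP solver) prunes far more aggressively.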
• asked a question related to Complexity Theory
Question
Complexity Theory was developed in the 1970s (that is, almost 50 years ago) with the goal of classifying algorithms according to the degree of difficulty of their execution on computers. The degree of difficulty is understood as the number of elementary operations (NEO) (addition, subtraction, multiplication, division, exponentiation, and so on) which must be used when searching for the exact (optimal) solution of a given combinatorial model. Moreover, it should be emphasized that this NEO is evaluated for the worst case of the initial data; that is, the NEO is an upper limit on the complexity of the model, within which the problem is guaranteed to be solved. The classes P and NP were introduced. The class P contains all combinatorial algorithms for which the NEO is bounded by some polynomial in the parameter n, for example O(n^3), where n is the total amount of initial data in the problem. The class NP marks all combinatorial problems for which the best known algorithms have an NEO estimated by some exponential function a^n, for example 2^n. It is believed that, theoretically, algorithms from class P are good algorithms while exponential algorithms are bad ones. However, this theory has hardly developed for decades; we can say that it has practically stopped in its development, and we hardly notice anything new.

From the point of view of practice, there is another consideration: a good algorithm from class P is not always an effective algorithm, since there are restrictions imposed by time limits on execution. For example, an algorithm with complexity O(n^m) becomes impractical even at values m > 10, and an algorithm with complexity O(n^6) becomes impractical at large values n > 1000. In order to solve a problem within a given time limit, various approximate algorithms are proposed, which can be divided into two categories.
The first category is ε-algorithms, which solve the problem to a given ε-accuracy. Such algorithms produce approximate solutions A(D) = OPT(D) * (1 + ε), where 0 < ε <= 1, A(D) is the approximate solution produced by algorithm A for the initial data D, and OPT(D) is the exact (optimal) solution. For such algorithms, the complexity of execution can be expressed, for example, as O((n^3)/ε) for a given accuracy ε.

The second category is heuristic algorithms, for which the result is unpredictable in advance. Within this category there are remarkable theoretical results of the form A(D) <= a * OPT(D) + b, where a and b are real constants, a >= 1, b >= 0. Here the expression A(D) <= a * OPT(D) + b is valid for all possible input data D and is in fact the worst case over all possible D. It should also be noted that the number of known ε-algorithms is very small, and even fewer heuristic algorithms are known with theoretical guarantees of the form A(D) <= a * OPT(D) + b. Thus, in practice we are dealing with heuristic algorithms whose approximate solution A(D) is unpredictable.

We introduce the metric q = 100% * (A(D) - OPT(D)) / OPT(D) as the measure of closeness of the approximate solution A(D) to the optimal solution OPT(D). Without loss of generality, we can say that q = 100% * (a - 1); in particular, if q = 0 then A(D) = OPT(D). Now imagine that some heuristic algorithm A produces a sequence of solutions A(D, t_0), A(D, t_1), A(D, t_2), ..., A(D, t_k) at instants t_0, t_1, t_2, ..., t_k, where A(D, t_0) > A(D, t_1) > A(D, t_2) > ... > A(D, t_k) and t_0 < t_1 < t_2 < ... < t_k <= T within a specified time limit T. Here A(D, t_0) represents the initial solution at time t_0. The question arises: how close is the solution A(D) = A(D, t_k) to the optimum OPT(D), which is unknown?
Sometimes the initial solution A(D, t_0) is already the optimal solution, but the heuristic algorithm knows nothing about it, and the search for new solutions continues until the time limit has elapsed. That is, the heuristic algorithm works "in a blind mode". It is important to understand one thing here: finding a solution is not an end in itself, although finding a good solution is also important. More important is to understand the search process correctly, in order to stop the search at the moment when further improvement is impossible. And here we face the problem of estimating OPT(D) from below, by a lower bound LB(D) <= OPT(D). Imagine that some exact algorithm LB generates a sequence of lower bounds LB(D, t'_0), LB(D, t'_1), LB(D, t'_2), ..., LB(D, t'_k') at moments t'_0, t'_1, t'_2, ..., t'_k', where LB(D, t'_0) < LB(D, t'_1) < LB(D, t'_2) < ... < LB(D, t'_k') and t'_0 < t'_1 < t'_2 < ... < t'_k' <= T', where T' is the specified time limit. Here LB(D, t'_0) represents the initial lower bound at time t'_0.
As the exact algorithm LB one can take, for example, the widely known branch & bound method, which works very well in practice. As another exact LB algorithm, one can use linear relaxation and other special methods for finding lower bounds. A general method can consist in reducing the original model to another one with other initial data D', for which the condition OPT(D') < OPT(D) is proved, after which the lower bound LB(D') <= OPT(D') is found. We denote by LB(D) = LB(D, t'_k') the best lower bound found and define the metric p = 100% * (A(D) - LB(D)) / LB(D). Obviously p >= q, since LB(D) <= OPT(D). Thus, even without knowing the optimal (exact) solution OPT(D), we can estimate the approximate solution A(D) within the time limit T + T'.

We can draw two imaginary curves in a Cartesian coordinate system. The first curve, LB(D, t), is increasing and passes through the points {LB(D, t'_0), t'_0}, {LB(D, t'_1), t'_1}, {LB(D, t'_2), t'_2}, ..., {LB(D, t'_k'), t'_k'}. The second curve, A(D, t), is decreasing and passes through the points {A(D, t_k), t_k}, ..., {A(D, t_2), t_2}, {A(D, t_1), t_1}, {A(D, t_0), t_0}. The values of the two curves form the chain LB(D, t'_0) < LB(D, t'_1) < LB(D, t'_2) < ... < LB(D, t'_k') <= OPT(D) <= A(D, t_k) < ... < A(D, t_2) < A(D, t_1) < A(D, t_0).

Now let us formulate the problem. Suppose a time limit T is given. We need to find points {LB(D, t'_x'), t'_x'} and {A(D, t_x), t_x} on the curves LB(D, t) and A(D, t), with t_x + t'_x' <= T, such that the value p' = 100% * (A(D, t_x) - LB(D, t'_x')) / LB(D, t'_x') is minimal. In other words, for a given time limit T we must:
1. Find the approximate solution A(D, t_x) within the time limit t_x.
2. Find the lower bound LB(D, t'_x') within the time limit t'_x'.
These two quantities form the quality of the solution, p'. By changing the time limit T we can control the quality of the solution: the larger T is, the smaller p' will be (that is, the better the quality of the solution). If we use the entire time limit to find only an approximate solution, then nothing changes compared with current practice; that is, we still know nothing about the quality of the solution. If it turns out that A(D, t_x) = LB(D, t'_x'), it means that we have found the optimal solution, about which the heuristic algorithm previously knew nothing. We introduce the class E (Effective Algorithms) for algorithms that solve problems in the way described above. That is, it is necessary to develop not one algorithm, as in Complexity Theory, but two:
1. Heuristic Algorithm for finding for an Approximate Solution
2. Exact Algorithm for finding a Lower Bound
In class E, all algorithms share the same metric p', which is calculated for a given time limit T. By default, one can set t_x = 0.5 * T and t'_x' = 0.5 * T. Class E is much closer to practice and gives a clearer understanding of the quality criterion of an algorithm. In this class the difference between P and NP disappears: all algorithms have only one criterion, the minimum value of p' within a given time limit. If this is ensured, then an algorithm with small values of p' is effective; otherwise it is ineffective, regardless of the value of the approximate solution found. In a word, the decisive role here is played not so much by the search for a good approximate solution alone (although that is of no small importance) as by the search for the integrated result "Approximate Solution & Lower Bound", from which we can judge the quality of the approximate solution. So we can summarize with one short expression: class E vs. classes P & NP.
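The integrated "Approximate Solution & Lower Bound" criterion can be captured in a few lines. The sequences below are hypothetical anytime outputs of the two algorithms (incumbents A(D, t) decreasing, lower bounds LB(D, t) increasing); the rule stops as soon as p' = 100 * (A - LB) / LB reaches a target gap:

```python
def stop_time(A_seq, LB_seq, target_gap):
    """Return (step, gap) at the first step where p' <= target_gap."""
    p = None
    for t, (A, LB) in enumerate(zip(A_seq, LB_seq)):
        p = 100.0 * (A - LB) / LB        # the metric p' from the text
        if p <= target_gap:
            return t, p                  # quality proven: stop the search
    return None, p                       # time limit reached without proof

# Hypothetical run: by step 3 the incumbent meets the lower bound,
# i.e. LB(D) = A(D), proving optimality (p' = 0) without knowing OPT(D).
step, gap = stop_time([120, 110, 105, 100], [80, 90, 95, 100], target_gap=0)
```

This is precisely the stopping discipline that a blind heuristic, run alone, cannot provide.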
The final question is: should we review the traditional Complexity Theory and consider instead the Efficiency Theory?
In my opinion, the main problem facing today's Complexity Theory is the problem of scalability. What was quite acceptable yesterday is now unacceptable. The cryptocurrencies Bitcoin, Ethereum, and others, for example, have faced the problem of scalability: the time has come when, with a large number of incoming transactions per unit of time, the systems stop working reliably. This fact led to the so-called SegWit problem in Bitcoin https://en.wikipedia.org/wiki/SegWit, when it was necessary to reprogram the software for the miners, which led to the division of Bitcoin into two cryptocurrencies. The program for a cryptocurrency is the same kind of combinatorial problem. Thus, what was acceptable earlier for large n becomes unacceptable today for very large n. You shared a reference to the algorithm of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena. Here is a phrase from https://en.wikipedia.org/wiki/AKS_primality_test: "In 2005, Carl Pomerance and H. W. Lenstra, Jr. demonstrated a variant of AKS that runs in O((log n)^6) ...". However, for very large n, for example n = 2^m where m is itself a very large number, the complexity of the AKS algorithm becomes O(m^6); that is, the algorithm becomes unacceptable in practice for very large m, despite the fact that the complexity O((log n)^6) is in class P. Since the problem "Is a number prime or composite?" has only two possible answers, YES or NO, it cannot be solved approximately. That is, I conclude that this problem is not solvable for very large n.
Thus, the problem of scalability makes it essential to solve problems approximately within a given time limit that is acceptable for practice. However, it should be noted that not all combinatorial problems are scalable; that is, not all problems allow us to replace the original problem with input data D by another, scalable problem with new input data D' for which OPT(D) <= OPT(D'). For example, the problem "Find the sum of n numbers, OPT(D) = x_1 + x_2 + ... + x_n" is scalable if the numbers are ordered in descending order, since we can consider a new problem with input data D' = {n_1 ° k_1, n_2 ° k_2, ..., n_m ° k_m}, where k_1 + k_2 + ... + k_m = n and D' is a multiset in which x_i <= n_1 for 1 <= i <= k_1; x_i <= n_2 for k_1 + 1 <= i <= k_1 + k_2; x_i <= n_3 for k_1 + k_2 + 1 <= i <= k_1 + k_2 + k_3; and so on. The original problem "x_1 + x_2 + ... + x_n" is solved in O(n) operations, whereas the problem "A(D) = n_1 * k_1 + n_2 * k_2 + ... + n_m * k_m" is solved in O(m) operations. We can also consider the multiset D'' = {n'_1 ° k_1, n'_2 ° k_2, ..., n'_m ° k_m}, where k_1 + k_2 + ... + k_m = n, in which x_i >= n'_1 for 1 <= i <= k_1; x_i >= n'_2 for k_1 + 1 <= i <= k_1 + k_2; x_i >= n'_3 for k_1 + k_2 + 1 <= i <= k_1 + k_2 + k_3; and so on. Then we find the sum LB(D) = n'_1 * k_1 + n'_2 * k_2 + ... + n'_m * k_m in O(m) operations. We get LB(D) <= OPT(D) <= A(D). Thus, we can scale the original problem with the goal of finding an approximate solution A(D) and then evaluate this solution as a measure of closeness to the original solution OPT(D) by the metric
p' = 100% * (A(D) - LB(D)) / LB(D). The overall complexity of solving this problem is O(m). The higher m, the higher the quality of the approximate solution. We can manage this quality at our discretion.
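The bucketed upper/lower bounds described above can be sketched in a few lines; the bucket count m and the test data below are illustrative choices of mine:

```python
# Sketch of the bucketed bounds: sort the numbers in descending order,
# split them into m buckets, and use each bucket's max/min as the
# upper/lower representative (the n_j and n'_j of the text).

def bucket_bounds(xs, m):
    """Return (A, LB) with LB <= sum(xs) <= A, using m buckets."""
    xs = sorted(xs, reverse=True)
    n = len(xs)
    size = -(-n // m)  # ceil(n / m)
    A = LB = 0
    for i in range(0, n, size):
        bucket = xs[i:i + size]
        A += max(bucket) * len(bucket)   # n_j * k_j  (upper bound)
        LB += min(bucket) * len(bucket)  # n'_j * k_j (lower bound)
    return A, LB

xs = list(range(1, 101))       # true sum OPT = 5050
A, LB = bucket_bounds(xs, 10)
p = 100.0 * (A - LB) / LB      # the quality metric p' from the text
print(A, LB, round(p, 2))      # 5500 4600 19.57
```

With m = 10 buckets the bounds already bracket the true sum 5050, and increasing m tightens p' toward zero at O(m) cost.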
I tried to find a more understandable natural language for interpreting the approach of evaluating approximate solutions in a discussion https://www.researchgate.net/post/Who_will_achieve_success_in_life_more_a_clever_which_does_not_know_that_he_is_clever_or_a_fool_which_knows_that_he_is_fool
However, I feel that this interpretation is still far from perfect.
• asked a question related to Complexity Theory
Question
I’m currently involved in a research project that is related to Highly Integrative Questions (HIQ’s).
To define the landscape of those "next level client questions" we initiated a research:
How to define HIQ’s?
How to approach HIQ's?
What are cases that relate to HIQ’s?
How can we learn from those cases?
What kind of guidance and facilitation are needed in the process?
Some buzzwords: Complexity Theory, Integrative Thinking, Social Innovations
@everyone thanks for the answers. This is exactly what our research is looking for.
Understanding Highly Integrative Questions and complex systems is a prerequisite for innovating new services and products! I'm fascinated by the Cynefin framework, because it shows different levels for approaching innovations through a systemic lens...
@Brian I'm not really into deductive and inductive reasoning.
Integrative thinking, in simple words, is making a synthesis of A and B. It requires an ability to think through tension to reach the synthesis. A good example can be found in the emergence of the Linux company "Red Hat" and the thinking of its CEO Bob Young. It is also worth looking at the work of Roger Martin.
• asked a question related to Complexity Theory
Question
I am able to reproduce Pyragas' results from his 1992 paper for the Rössler system operating in the spiral regime (e.g. a=0.2, b=0.2 and c=5.7). However, it has been much harder to find situations where an UPO of that system operating in the funnel regime (e.g. a=0.28, b=0.1 and c=8.5) is stabilized using Pyragas' delay control law. I am able to stabilize period-one UPOs but have not found UPOs with longer periods yet. Does anyone have information (references) on this problem?
I have just posted scripts that implement Pyragas' methods. In case you are interested, you can find them at: https://www.researchgate.net/project/Scripts-on-Nonlinear-Dynamics
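For readers who want a starting point before the full scripts, here is a minimal sketch of Pyragas delayed feedback on the Rössler system; the gain K, the delay tau, the plain Euler integration, and the step count are my own illustrative choices, not tuned values from the paper or from the posted scripts.

```python
# Minimal sketch: Pyragas time-delayed feedback F = K*(y(t-tau) - y(t))
# on the Rossler system in the spiral regime (a=0.2, b=0.2, c=5.7).
# Plain Euler integration with a ring buffer for the delayed state.
# K, tau, dt, and the step count are illustrative, not tuned values.

a, b, c = 0.2, 0.2, 5.7
K, tau, dt = 0.2, 6.0, 0.01
delay_steps = round(tau / dt)

x, y, z = 1.0, 1.0, 1.0
history = [y] * delay_steps        # holds y(t - tau)

for i in range(20000):
    y_delayed = history[i % delay_steps]
    F = K * (y_delayed - y)        # Pyragas control force
    dx = -y - z
    dy = x + a * y + F             # control applied to the y equation
    dz = b + z * (x - c)
    history[i % delay_steps] = y   # overwrite the oldest sample
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz

print(round(x, 3), round(y, 3), round(z, 3))
```

Once the control force F has settled to (near) zero, the trajectory is following an orbit of period tau of the uncontrolled system, which is the signature of a stabilized UPO.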
• asked a question related to Complexity Theory
Question
Looking through the literature, I realized all the proofs for NP- hardness of QIP are based on the claim that Binary Quadratic Integer Programming is NP- hard. Is that true?
Unconstrained Quadratic Integer Programming is strongly NP-hard even when the objective function is convex. The proof is by reduction from the Closest Vector Problem.
• asked a question related to Complexity Theory
Question
Can Fractal Theory Explain the Urban Fabric? How Can It Be Applied in Traditional Muslim Settlements?
The box-counting method is too mechanistic. Instead, I would suggest some organic or natural ways of determining fractals; see the related papers on the new definition of fractal:
• asked a question related to Complexity Theory
Question
Can Chaos Theory Explain the Urban Fabric? How Can It Be Applied in Traditional Muslim Settlements?
A very interesting question first!
Chaos Theory, or complexity science in general, might be able to explain the complexity of urban fabric or traditional Muslim settlements, but it is limited to explanation or understanding; it hardly contributes to making or re-making. However, Christopher Alexander's complexity science is, from the very beginning, oriented toward making or re-making the complexity of urban fabric. This making or re-making is what makes Alexander's work unique:
Built on the legacy of Alexander and his life's work pursuing beauty, a new master program on architecture http://buildingbeauty.net/ has been established. The beauty is defined mathematically, exists in space and matter physically, and is reflected in our minds and cognition psychologically. The program aims for wholeness-oriented design, beginning with construction rather than paper-based design as most architecture schools do. It intends to create buildings or cities with a high degree of wholeness, instead of the slick buildings most modernist architects produce.
• asked a question related to Complexity Theory
Question
The structure of this problem is similar (not equal) to other problems that admits simple solutions. Maybe, the colleagues of this community could help me in identifying a solution to this problem.
Dear Sir
1. An Introduction to Computational Fluid Dynamics The Finite Volume Method, 2/e By Versteeg
• asked a question related to Complexity Theory
Question
Judged by demarcation criteria, does the geocentric theory turn out to be non-scientific versus the heliocentric cosmological theory? Is a science no more than a vision surpassed by another?
F., you said "not cheating." But there is some suspicion that Ptolemy, so as to keep following Aristotle, forged the data so that they fit his theory. I don't know if there is clear evidence, but in any case the legend of the honest scientist is as dubious as that of the religious bigot. I wonder why Galileo had so much trouble with them, and no other scholar, not even Copernicus, at a time when there was no real proof of the heliocentric system. Today, his ways would be frowned upon even by the more stringent rationalists.
• asked a question related to Complexity Theory
Question
Could a distributed gradient algorithm be considered a game-theoretic method?
Well, in my opinion the answer is no... and yes. It depends on the point of view. If you see game theory as an optimization method where you have to optimize two or more coupled functions together (like many authors do), and you are solving a cooperative game by using multiobjective optimization, and you are using a multiobjective gradient algorithm for this, the answer could be considered "yes". If you are using the classic gradient algorithm for the optimization of only one function, then the answer is "no". You could also argue that your game has only one player, but this case makes no sense from the game theory point of view. Now, strictly speaking, game theory is a "theory" and the gradient algorithm is a numerical procedure. You can use the second to solve an instance of the first, but in the end they are two different things.
• asked a question related to Complexity Theory
Question
It is discovered in the abstract intelligence theory [Wang, 2009; Wang et al. 2017] of intelligence science that AI may merely carry out imperative, reflexive, iterative and recursive intelligence. However, more sophisticated human intelligence, such as cognitive, causal and inductive intelligence, will hardly be implemented by traditional computational power, because none of the advanced forms of the aforementioned intelligence are iterative, and the sizes of the recursions are normally infinite.
Further information may be found in:
Wang, Y. (2009), On Abstract Intelligence: Toward a Unified Theory of Natural, Artificial, Machinable, and Computational Intelligence, International Journal of Software Science and Computational Intelligence, 1(1), 1-17.
Yingxu Wang, Lotfi A. Zadeh, Bernard Widrow, Newton Howard, Françoise Beaufays, George Baciu, D. Frank Hsu, Guiming Luo, Fumio Mizoguchi, Shushma Patel, Victor Raskin, Shusaku Tsumoto, Wei Wei, and Du Zhang (2017), Abstract Intelligence: Embodying and Enabling Cognitive Systems by Mathematical Engineering, International Journal of Cognitive Informatics and Natural Intelligence, 11(1), 1-15.
What is your definition of animal? Is the monkey family included? If so, "intelligence" should not be used! The difference would therefore be cloudy.
• asked a question related to Complexity Theory
Question
How to prove a optimization problem is NP-hard, especially when co-channel interference is considered. I will be greatly grateful that someone could give me an example. It will be better if the example is in non-orthogonal multiple access (NOMA) scenarios.
• asked a question related to Complexity Theory
Question
The experimental results that Leonard Adleman obtained in 1994 while using DNA to solve the directed Hamiltonian path problem with just 7 nodes do not seem encouraging. It took him several days of lab work to complete the experiment. Furthermore, although DNA computation offers potential in terms of data storage, there is also the issue of molecular volatility.
Can biological computation provide a better alternative to classical computation?
Maybe you have the Quantum Zeno Effect in mind, which can be exploited to prevent the decay of a radioactive particle (a watched pot never boils). But the states here are classical ones; I don't think that this effect applies here.
Nucleotide misincorporation could be combated with special error-correcting codes. But if you use DNA as storage material, you have an enormous amount of redundancy anyway.
Regards,
Joachim
• asked a question related to Complexity Theory
Question
I understand that chaos theory focuses on deterministic chaos, and evolutionary algorithms are stochastic. But are there techniques that apply to both? For instance, when my solution fails to converge, are there ways to visualize the system and look for things like attractors that might indicate problems?
Look into stochastic resonance. It may not provide a chaotic (infinite period) solution but it may identify periodicities that tie up a nonlinear algorithm with noise and feedback
• asked a question related to Complexity Theory
Question
Several studies have addressed the theory of complexity in a theoretical way, advancing in the discussions about the boundaries of this theory. However, when analyzing empirically the complexity in organizations, the difficulties are many. I am conducting a survey of the best quantitative methods for empirically studying complexity in organizations. What do you think about it? What empirical research would you indicate to me?
Novatorov, Edouard, Nonprofit Marketing: Theory Triangulation (October 27, 2014). Available at SSRN: https://ssrn.com/abstract=2515224 or http://dx.doi.org/10.2139/ssrn.2515224
• asked a question related to Complexity Theory
Question
In my view, the world is full of real systems, which represent one side of a coin. The other side of the same coin represents complex systems reflecting complex spectra. As we are human beings, we can see only the real nature of a system, reflected in stable real spectra.
Prof B.Rath
@Mirza
I saw your profile on RG. My simple suggestion to you is that my question is related to physics. If you find some time, discuss it with a professor of physics at your institute working on quantum-mechanical eigenvalue problems. I hope you do not misunderstand me.
Regards
• asked a question related to Complexity Theory
Question
Curriculum Integration has been one of the most complex theories for me, as an educationalist, to understand. Integrating the curriculum seems to entail multiple aspects of the educational context (e.g., historical, philosophical, economic, etc.). Understanding how these aspects interact with each other is of key importance to achieving a truly integrated curriculum. One of these interactions has caught my attention: the conflicts between curriculum integration and power relationships. I would like to comprehend the political aspects of curriculum integration, especially in highly hierarchical disciplines such as medical education.
I am not sure that I can help you on the power relationships related to curriculum integration, but attached is a paper that may be of interest concerning curriculum integration of research findings.
• asked a question related to Complexity Theory
Question
In biological systems, how can we say whether a given behaviour (for example, genome behaviour) is chaotic or random? Is entropy useful?
If the dynamics of a process is chaotic, then uncertainty in the initial conditions typically grows exponentially; but if we have deterministic chaos, the uncertainty can be reduced by measuring the initial state with high precision.
If external noise (randomness) is present the uncertainty cannot be reduced below a certain value by measuring the initial state.
For real world problems chaos and external noise are often present at the same time. Entropy is useful for measuring the uncertainty for both deterministic systems and systems with external randomness.
If the measurement precision is kept at a fixed level one cannot distinguish between deterministic chaos and external noise.
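One concrete way to see the deterministic side of this distinction is via forbidden ordinal patterns: the fully chaotic logistic map x -> 4x(1-x) never produces three successive strictly decreasing values (if x1 < x0 then x0 > 3/4, which forces x1 < 3/4, so x2 < x1 would require x1 > 3/4, a contradiction), while noise realises all orderings. The sketch below is my own illustration; parameters and sample sizes are arbitrary.

```python
# Distinguishing deterministic chaos from noise via a forbidden ordinal
# pattern: the logistic map at r=4 never yields three successive strictly
# decreasing values, whereas white noise produces them freely.

import random

def decreasing_triples(seq):
    """Count occurrences of the ordinal pattern x[i] > x[i+1] > x[i+2]."""
    return sum(1 for i in range(len(seq) - 2)
               if seq[i] > seq[i + 1] > seq[i + 2])

# Logistic map trajectory
x, logistic = 0.4, []
for _ in range(3000):
    x = 4.0 * x * (1.0 - x)
    logistic.append(x)

random.seed(0)
noise = [random.random() for _ in range(3000)]

print(decreasing_triples(logistic))  # 0: the pattern is forbidden
print(decreasing_triples(noise))     # roughly 3000/6: occurs freely
```

This kind of test works at finite precision, unlike estimating Lyapunov exponents, which is why ordinal methods are popular for the chaos-versus-noise question.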
• asked a question related to Complexity Theory
Question
The diagonal elements are non-zero, so the inverse of the diagonal matrix is easily computed by taking the reciprocal of each diagonal element. Is this complexity O(n)?
Yes, it must be, because your inversion algorithm performs one operation per diagonal entry → O(n).
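A minimal sketch of the idea, storing the diagonal matrix as its n diagonal entries (the function name is my own):

```python
# Inverting a diagonal matrix stored as its n diagonal entries is a
# single pass over those entries, i.e. O(n) time.

def invert_diagonal(d):
    """Inverse of diag(d); requires every entry to be non-zero."""
    return [1.0 / v for v in d]

d = [2.0, 4.0, 0.5]
print(invert_diagonal(d))  # [0.5, 0.25, 2.0]
```

Note this is O(n) only because the matrix is stored sparsely; if the full n x n array were scanned, reading the input alone would already be O(n^2).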
• asked a question related to Complexity Theory
Question
Can anyone refer me to research exploring the application of complexity theory to the criminal justice system? Or the application of complexity theory to a complex social phenomenon i.e. human trafficking?
• asked a question related to Complexity Theory
Question
Suppose I have to solve a problem which consists of 2 NP-complete problems.
Now I would like to know what the complexity class of the new problem will be.
Can anyone suggest a paper on this topic?
That depends on what "more" means. :-) If "more" is a constant (finite) number then it means that you solve a finite number of NP-complete problems - which is still NP-complete.
• asked a question related to Complexity Theory
Question
I have identified some 50 CAS concepts commonly used by authors in the paper.  They range from those derived thru chaos theory and agent-based modeling, to self-organization of agents as they interact and co-adapt, to emergence, etc. I have created one brief story that weaves together clusters of these concepts and then another brief story that weaves together the clusters.
How important is it to have a coherent story as one applies the concepts to new areas of inquiry?  Is a universal story across domains necessary?  What is it?
Well done Guibert ~  aha the benefit of discussing these matters freely!
We are in agreement on almost all you summarized... what you have described is that:
Complex Adaptive Systems  come in two sets ~ closed and open systems
Open systems can be adaptive or evolutionary ~ the second description matches open systems with evolutionary potential ~ eg watershed ecosystems
Closed systems can be adaptive but not evolutionary ~ the first description matches closed systems with adaptive potential... eg engines and motors (diesel and petrol are liquid energy ~modern engines can read and adapt to conditions)
Let's add the other important attributes we discussed earlier ~ how individual agents may be helpful in analyzing closed systems; however, when dealing with open systems, communities, not individuals, are the key players....
In future it may be necessary to avoid the term "adaptive" in system science because it has become such a buzzword in business and commerce ~ referring to adaptive management within the neo-liberal monetary agenda. This use of "adaptive" does not match what we are discussing... it means a business management strategy for dealing with change... not the performance of the system...
In which case, the 1970's terminology where the primary classification of systems is closed or open ~ is more appropriate ~ Whether the systems are adaptive or evolutionary is a matter to be resolved subsequently ~ if important?
What is most important is to ensure that the appropriate scientific methods are employed for closed systems (alphanumerics) or open systems (intractable mathematics with geospatial imagery/intelligence systems)...
• asked a question related to Complexity Theory
Question
I am interested in the processes of diffusion and sustainability of innovations, and in finding the connections between actions that enable and inhibit further adoption of proven e-learning innovations in universities, beyond the first wave of early adopters.
I've looked at the Fishbone, and adored it as highly comprehensive. The only thing I really saw as missing was an HR structure and process to recruit the right people for innovation/adoption. But I have a broad quibble as to whether the fishbone structure isn't a generally applicable one around technology adoption, rather than specific to Higher Education, an observation I feel is highly relevant. The point is, in HE I've seen ineffective technology as a barrier to adoption and I've seen resistance to change, but HE has a particular enabling factor for resistance to change that you won't find elsewhere: Academic Freedom.
• asked a question related to Complexity Theory
Question
Most people speak about, and work on, complexity science, systems science, cybernetics, and complex thinking (i.e. Morin) as though they were the same. Although a sort of demarcation criterion has been repeatedly worked out between normal science and complexity theory, very little (if any) work has been done on demarcation criteria regarding the above.
I think that complexity can exist in every science. In other words, complexity approach just changes the philosophy and methodology but not the object of study.
For example, what is complexity in economics and social sciences? It is a deviation from the standard economic assumptions, like a collection of fully rational identical agents interacting only via markets and living in the world without space.
Complexity in economics emerged in Santa Fe Institute (USA, New Mexico). Here are some features as given by its creators.
Brian Arthur, Steven N. Durlauf, and David A. Lane describe several features of complex systems that deserve greater attention in economics (see https://en.wikipedia.org/wiki/Complexity_economics ):
Dispersed interaction—The economy has interaction between many dispersed, heterogeneous, agents. The action of any given agent depends upon the anticipated actions of other agents and on the aggregate state of the economy.
No global controller—Controls are provided by mechanisms of competition and coordination between agents. Economic actions are mediated by legal institutions, assigned roles, and shifting associations. No global entity controls interactions.
Cross-cutting hierarchical organization—The economy has many levels of organization and interaction. Units at any given level (behaviors, actions, strategies, products) typically serve as "building blocks" for constructing units at the next higher level. The overall organization is more than hierarchical, with many sorts of tangling interactions (associations, channels of communication) across levels.
Ongoing adaptation—Behaviors, actions, strategies, and products are revised frequently as the individual agents accumulate experience.
Novelty niches—Such niches are associated with new markets, new technologies, new behaviors, and new institutions. The very act of filling a niche may provide new niches. The result is ongoing novelty.
Out-of-equilibrium dynamics—Because new niches, new potentials, new possibilities, are continually created, the economy functions without attaining any optimum or global equilibrium. Improvements occur regularly.
• asked a question related to Complexity Theory
Question
Say we have a complex network made of n sub-networks and m nodes. Some of the sub-networks share some of the m nodes. Say that such a complex network (aka an interdependent network) is under attack, and say that this attack is neither targeted (e.g., it does not look for high-degree nodes only) nor random, but spatial (both low-degree and high-degree nodes are being removed). Now say that the failure cause is external, in addition to being spatial, and that it can feature many levels of spatial extent. Hence, the higher the level, the higher the number of nodes involved, and the higher the disruption (theoretically). My problem relates to the failure threshold qc (the minimum size of the disrupting event capable of leaving 0 active nodes after the attack).
My question: does the failure threshold qc depend on how nodes are connected only (e.g., the qc is an intrinsic feature of the network)? Or is it a function of how vast the spatial attack is? Or does it depend on both?
Thank you very much to all of you.
Francesco
Yes it's likely to depend on the structure of the network, as it does for a simple network. The situation you describe is quite specific, so I'm not sure the answer is known. The best way is to try to find out!
You might find this review helpful, for some basic ideas http://arxiv.org/abs/0705.0010
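Indeed, a quick toy experiment can help find out. The sketch below is an illustration of mine, with arbitrary parameters, not a model of any specific interdependent network: it removes a spatial disc of nodes from an L x L grid network and measures the surviving giant component, which is how percolation-style thresholds are usually probed numerically.

```python
# Toy spatial attack: remove all nodes of an L x L grid network inside a
# disc of radius r (hitting low- and high-degree nodes alike), then
# measure the fraction of nodes in the largest surviving component.

from collections import deque

def giant_component_fraction(L, r):
    cx = cy = L / 2.0
    alive = {(i, j) for i in range(L) for j in range(L)
             if (i - cx) ** 2 + (j - cy) ** 2 > r ** 2}
    best, seen = 0, set()
    for start in alive:                  # BFS over each component
        if start in seen:
            continue
        size, q = 0, deque([start])
        seen.add(start)
        while q:
            i, j = q.popleft()
            size += 1
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in alive and nb not in seen:
                    seen.add(nb)
                    q.append(nb)
        best = max(best, size)
    return best / (L * L)

for r in (0, 5, 10, 20):
    print(r, round(giant_component_fraction(30, r), 3))
```

Sweeping the attack radius r and watching where the giant-component fraction collapses gives an empirical handle on how the threshold depends on both the network structure and the spatial extent of the attack.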
• asked a question related to Complexity Theory
Question
The 2013 complexity conference hosted by the Nanyang Institute of Technology contained the following excerpts :
"The 21st century," physicist Stephen Hawking has said, "will be the century of complexity." Likewise, the physicist Heinz Pagels has said that "the nations and people who master the new sciences of complexity will become the economic, cultural, and political superpowers of the 21st century."
General systems theory was thought to be the "skeleton of science" (Kenneth E.Boulding)
Are "multidisciplinary" and "interdisciplinary" subsumed under "transdisciplinarity"?
Does "transdisciplinarity" imply "universality"? Is it very different from the notion of "consilience" (coined by Edward O. Wilson)?
I would take these words at their common-sense meanings:
"multidisciplinary" means concerning more than one (=multi) disciplines,
"interdisciplinary" means concerning between disciplines without fixed amounts,
"transdisciplinarity" means going out of borders into an other discipline,
"universality" means valid all over the world and
"consilience" is the desired goal of every new defined terms in science.
Don't think complex - think simple to solve complexity!
• asked a question related to Complexity Theory
Question
From a design and operational systems perspective: Is complexity always a "bad thing"? and, should simplicity be always preferred over complexity? To what extent the complexity vs simplicity debate influences systems design? Support your answer/comments with examples.
Dear Francisco et Al:
I see other answers to the issue posed above, that in my view relate more to the system than to the model for assessing the system. So let me provide another alternative/complementary explanation to the one suggested above, which focus more on the complexity of the system, and not the complexity of the model.
From a physical POV, complexity is often understood as the quality that enables building systems away from thermal equilibrium. Since thermal equilibrium is the state of maximum entropy [or disorder], complexity is often considered to be equivalent to negentropy [Bertalanffy, Foerster, ...].
Negentropy is a concept that relates mainly to organization. Any organization implies an expenditure of energy to be built [Lovelock] and creates a difference in information [which helps explain why Shannon uses the entropy formula for measuring information].
But structures can be stable or unstable; they can tend to steady states or to critical points. In fact, some authors question whether there are any steady states at all, or whether all real systems tend to Self-Organized Critical states [SOC, Bak], where collapse happens as an endogenous feature [i.e., collapse is generated by the usual dynamics of the system].
SOC theory is interesting and the economic system has been put as an example of a SOC system.
And relating to this issues, I am finishing a review of the economy of UE-28 countries that may shed some light to the above question from an empirical basis, which we can relate to two issues:
When we review countries' evolution through classical economic indicators like GDP, employment, ..., SOC dynamics are clearly present. It seems the economic cycle always tends to critical states, and from time to time a collapse [crisis] is necessary [i.e., cannot be avoided] to adjust the system state [an easy explanation is that when too-fragile structures are built too large, they necessarily collapse]. But...
When we use different variables, SOC dynamics do not appear, and the countries that perform best under classical economic indicators show steady equilibrium states under these other variables...
This is highly interesting. It relates to something often discussed: the importance of the observer as the one who chooses the variables to be reviewed. It also allows us to understand that a sustainability indicator should be one whose value can remain stationary in an economic crisis, not one that shows SOC behaviour.
Of course, those two issues are very important, but they are not related to the issue discussed here, So.. which is the interest? Ok, let me explain it now...
Any economic activity implies creating structures [i.e., a departure from thermal equilibrium]; in other words, it implies negentropy, or entropy reduction.
And any collapse [as a SOC critical episode] implies a destruction of negentropy, a reduction in activity [hence an approach to thermal equilibrium].
From a physical point of view, the first is equivalent to what most scientists would consider an increase in the complexity of the system, and the second to a reduction in the complexity of the system.
So, if we assess countries using classical economic indicators, we are indirectly measuring a sort of complexity [of course, not all their complexity]. But some of that complexity may be essentially fragile; in the next economic crisis it will collapse with high probability [this could be considered an example of what many people designate as 'bad complexity'].
Yet, if we assess countries using sustainability indicators, we are indirectly measuring 'the part of their complexity' that can be sustained over time, i.e., the part that can withstand a crisis without collapsing [it would be somehow equivalent to what many people consider 'good complexity'].
Hope this example provides some 'empirical content' to the issue asked above, and I am sure that Francisco [a Spaniard like me] is convinced of the importance of the above, nowadays that Spain is struggling to overcome a devastating economic crisis largely due to having built too-unstable structures [a sort of 'bad complexity'...].
Regards.
• asked a question related to Complexity Theory
Question
I am looking for a series of datasets (raw-data) in brain (EEG or MEG) or cardiac (HRV) activity of patients under homeopathic treatment. We want to research the phase transition of the human holistic complex system with modern tools of nonlinear analysis and complexity theory.
• asked a question related to Complexity Theory
Question
There are plenty of other reference materials posted in my questions and publications on ResearchGate. I think it is not enough for someone to claim that the sequences I've found are pseudo-random and offer that as a satisfying answer to the question posed here.
If indeed complexity of a sequence is reflected within the algorithm that generates the sequence, then we should not find that high-entropy sequences are easily described by algorithmic means.
I have a very well defined counter example.
Indeed, so strong is the example that the sound challenge is for another researcher to show how one of my maximally disordered sequences can be altered such that the corresponding change to measured entropy is a positive, non-zero value.
Recall that with arbitrarily large hardware registers and an arbitrarily large memory, our algorithm will generate arbitrarily large, maximally disordered digit sequences; no algorithm changes are required.  It is a truly simple algorithm that generates sequences that always yield a measure of maximal entropy.
In what way are these results at odds, or in keeping, with Kolmogorov?
Anthony, there are other posts that I've made where it is determined that the sequence is a de Bruijn sequence. What is most important about my work is the speed with which such a sequence can be generated; for a 100-million-digit de Bruijn sequence, my software produces it in less than 30 minutes. Also, while randomness is not computable, it is also true that randomness is biased away from maximal disorder. Thanks for the reply.
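For readers following along, here is a standard de Bruijn construction (via Lyndon words; this is textbook code, not the poster's algorithm) that illustrates the point under discussion: a short program, hence a short algorithmic description, whose output nevertheless has the maximum possible n-block Shannon entropy.

```python
# A binary de Bruijn sequence B(2, n) is generated by a short algorithm
# (low description length), yet every n-bit block occurs exactly once
# cyclically, so its n-block Shannon entropy is the maximal n bits.

from math import log2

def de_bruijn(k, n):
    """Standard de Bruijn sequence construction via Lyndon words."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def block_entropy(bits, n):
    """Shannon entropy (in bits) of the cyclic n-bit blocks of bits."""
    ext = bits + bits[:n - 1]
    counts = {}
    for i in range(len(bits)):
        w = tuple(ext[i:i + n])
        counts[w] = counts.get(w, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

s = de_bruijn(2, 8)                 # 256 bits; each 8-bit block once
print(len(s), block_entropy(s, 8))  # maximal 8-block entropy
```

This is exactly the tension with Kolmogorov complexity raised above: block-entropy measures see maximal disorder, while the generating program itself is tiny.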
• asked a question related to Complexity Theory
Question
Hello dear fellow researchers.
This might sound like a stupid question :-)
Suppose for a given sequence of $n$ numbers and a zero-nonzero pattern, we know of the existence of a (real) matrix admitting the sequence as its eigenvalues and obeying the pattern. My question is: what is the complexity of building such a matrix (i.e., the second part of the inverse eigenvalue problem, IEP)? I hope the answer is NP-hard!
Any help will be appreciated,
Many thanks,
Bahamn
What is a zero-nonzero pattern? Without that constraint, wouldn't it suffice to take a zero matrix and put the eigenvalues on the diagonal? Let me try to guess a possible approach. If some non-diagonal elements are nonzero, I guess there might be some workaround (like approximating a zero with some epsilon, plus some sensitivity analysis to get the exact eigenvalues). To deal with zeros forced in some places on the diagonal, a more difficult workaround might be needed.
• asked a question related to Complexity Theory
Question
I am looking for datasets of complex networks with ground-truth communities. SNAP has a few datasets, but they are very large. I am looking for datasets of moderate size, i.e., a few thousand nodes.
Ground truth is one way of verifying community detection algorithms. Using the scaling pattern of far more small communities than large ones is another, radically different, way. This is because complex networks are like human brains, which are hard to decompose, whereas simple networks are like mechanical watches, which are decomposable.
Jiang B. and Ma D. (2015), Defining least community as a homogeneous group in complex networks, Physica A: Statistical Mechanics and its Applications, 428, 154-160, Preprint: http://arxiv.org/ftp/arxiv/papers/1502/1502.00284.pdf
Jiang B., Duan Y., Lu F., Yang T. and Zhao J. (2014), Topological structure of urban street networks from the perspective of degree correlations, Environment and Planning B: Planning and Design, 41(5), 813-828, Preprint: http://arxiv.org/abs/1308.1533
• asked a question related to Complexity Theory
Question
We are looking for Chinese, Indian and Russian mathematicians with a passion for fractals and the M set, who would be interested in being interviewed for the film we are making called 'Beyond the Colours of Infinity'. We have attached our pitch doc herewith.
The original film 'The Colours of Infinity' (1995) can be viewed on Vimeo via this link:
You may need to enter the password: fractalfun to view it.
Japanese mathematicians: Hiroshi Fujii and Hiroshi Kawakami, both also on RG.
• asked a question related to Complexity Theory
Question
There are many nested (partially or fully) communities in a complex network. Taking a country as an example, cities are communities within the country, neighborhoods are communities within cities, and families are communities within neighborhoods. Eventually, there are far more small communities than large ones; that is, the community sizes follow power-law or heavy-tailed distributions, which we have verified empirically.
Jiang B. and Ma D. (2015), Defining least community as a homogeneous group in complex networks, Physica A, 428, 154-160
Jiang B., Duan Y., Lu F., Yang T. and Zhao J. (2014), Topological structure of urban street networks from the perspective of degree correlations, Environment and Planning B: Planning and Design, 41(5), 813-828.
However, the communities detected by our algorithm hardly match those found by previous community detection methods. My question is: should communities be nested?
Dear Bin Jiang,
Nested communities provide benefits akin to entanglement in quantum theory, greatly augmenting the utility of spaces and resources, because of the possibility of duality and multiplicity that a decoupled, non-nested structure lacks. Nevertheless, nested structures may also increase the fragility of the system exponentially. The crucial issue is whether the nested structure has the right kind of diversity and interconnection/intertwining structures.
I tend to think in metaphorical terms of bloatware versus "goodware" in software. Software by definition consists of nested algorithms, algorithms that run recursively. Bloatware multiplies these nested structures needlessly (approximating 1/f noise?), while "goodware" maintains just the right level of complexity and diversity. That is why thinking about cities and social networks from an information-theoretic perspective can make typical problems more tractable. Piecemeal, incremental improvements added to a complex system without understanding their implications (through simulations, to consider the different possible trajectories) can lead to self-organized criticality (SOC); the Bak et al. sandpile model comes to mind.
Cheers
• asked a question related to Complexity Theory
Question
As we strive to explain real-world complex systems, more parameters, variables and processes are needed in our models, so we become less able to manage and understand the system. To overcome this "vicious circle", some authors suggest starting by defining and mapping (measuring) the complexity of the system, so as to determine a manageable degree of complexity. This involves answering the question "how much complexity is enough?". But is this the right approach to studying complexity? And if so, how may we practically accomplish these tasks?
I would like to suggest a distinction before trying to answer the question in a simple way.
In a scientific study, you can try to manage as much data as you can put in play and arrange it in a complex model of system description.
In operational practice, if you have to decide on something important, you may find it difficult to make proper reference to more than 3 main key factors at a time. This is the limit of the human brain, and if those 3 key factors are not sufficient, you change the model, then try another, and so on, to deal with complexity.
Complexity is connected with life, and simplicity is a desirable objective in a model: I deem it necessary, also in complex analysis, to utilize simple models arranged or combined in a complex way.
That is: complexity remains, in my opinion and in most cases, at the end of the game, a complex arrangement of individually simple elements and descriptive models. When you feel you have fully dominated the true complexity, you may often find that you are in the process of falling into a complex descriptive mistake. That is the intriguing part of our scientific knowledge development: it never ends, and complexity is to be considered the inexhaustible feeding factor for new goals!
Kind regards
Alberto
• asked a question related to Complexity Theory
Question
I am investigating the use of mathematical category theory to explore a deductive model of the emergence and evolution of cooperative structures in human organizing. I am aware of work by Ehresmann & Vanbremeersch. Is there other related work, or are there alternative formulations?
This is a good question without an obvious answer.
An almost mathematical approach to answering this question is the whimsical proposal of a category Reality introduced in
R.E. Lauro, Beyond the colonization of human imagining and everyday life: Crafting mythopoeic lifeworlds as a theological response to hyperreality, Ph.D. thesis, University of St. Andrews, Scotland, 2012:
See page 76, based on Baudrillard 1972 book.
On RG, see the works of S.A. de Groot:
In particular, see
S.A. de Groot, In search of beauty: Developing beautiful organizations, Ph.D. thesis, Technische Universiteit Eindhoven, 2014.
• asked a question related to Complexity Theory
Question
It is often said in the field of complex systems that such systems achieve self-organisation through simple order-generating rules. Under what conditions is such self-organisation achieved?
This is a good question.    In addition to what @Kamal and @Nizar have incisively observed, there are a few more things to consider.
An overview of different types of self-organizing systems is given in
J.A.C. Gomez, Self-organization in biology: From quasispecies to ecosystems, Ph.D. thesis, Universidad Carlos III de Madrid, 2010:
See Section 1.3 (Self-organization, universality and scaling), starting on page 6.  The challenge is for any entity to survive (and flourish) when it finds itself in situations in between order and disorder.  A very detailed introduction to self-organization in biological systems is given in Section 1.5, starting on page 8.
A very detailed introduction to the conditions (and rules) required for self-organization is given in
C. Gershenson, Design and control of self-organizing systems, Ph.D. thesis, Vrije Universiteit Brussel, 2007:
For example, under typical swarming rules, agents move with varying speeds toward the centre of a swarm while mutually adjusting their velocities to avoid collisions (see Section 3.2, starting on page 24, especially page 27). This view of swarming behaviour carries over to artificial self-organizing systems such as swarms of robots (see Section 3.5.1, starting on page 33) and self-organizing traffic lights (see Section 5.1, starting on page 62).
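The cohesion-plus-separation rules described above can be sketched in a few lines; the parameter values here are illustrative assumptions, not taken from the cited theses:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(30, 2))   # 30 agents in the plane
vel = rng.normal(0.0, 1.0, size=(30, 2))

def step(pos, vel, cohesion=0.05, separation=0.5, damping=0.9, dt=0.1):
    # Two simple order-generating rules: steer toward the swarm's
    # centre of mass (cohesion) and away from neighbours closer
    # than one unit (separation); damping keeps speeds bounded.
    new_vel = damping * vel + cohesion * (pos.mean(axis=0) - pos)
    for i in range(len(pos)):
        d = pos[i] - pos                       # vectors from others to agent i
        dist = np.linalg.norm(d, axis=1)
        near = (dist > 0) & (dist < 1.0)
        if near.any():
            new_vel[i] += separation * d[near].mean(axis=0)
    return pos + dt * new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("mean distance to centre:",
      np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())
```

Global order (a compact, collision-avoiding swarm) emerges even though each agent only applies the two local rules.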
• asked a question related to Complexity Theory
Question
If we ever have to move something, could we carry it and amplify the motion using laser force?
The answer is yes. Read about optical tweezers (http://en.wikipedia.org/wiki/Optical_tweezers). Steve Chu won the Nobel Prize in Physics for his work in this area.
• asked a question related to Complexity Theory
Question
Take a look at recent work by our group, led by Dr. Kaushik Sinha, on quantifying the level of structural complexity of man-made complex systems. The research suggests the existence of a "P-Point" where the topology changes from being tree-like or hierarchical to more distributed or networked. Any thoughts would be welcome. dWo Prof. Olivier de Weck
attached paper from ASME 2013
Nice work, especially the link with modularity is interesting. The "tipping point" transition you are referring to could possibly be a structural phase transition similar to that which we observed in real-life 'self-organising' systems:
• asked a question related to Complexity Theory
Question
Complex systems consist of multiple interacting components. Two components are not enough to make a system complex. But would three or four components be enough for a system to become complex? What would be an example of such a system?
I thought I answered yesterday. My point is that "complexity" is one way the Observer observes the System; it is not a property of Systems (to be complex or not).
For instance, a gas watched as a thermodynamical System is a complex System, while watched as an ensemble of, say, 10^24 molecules it is at most a complicated System, described with the tools of Statistical Mechanics. From the thermodynamical point of view, a gas (or whatever else) could even be thought of as a continuum; it does not have to be viewed as composed of discrete particles.
Any object can be complex if described as a whole by its overall properties, or be considered composed of n sub-systems if we aim at describing its properties as some result of the properties of the sub-systems.
• asked a question related to Complexity Theory
Question
In the literature, I found that the relation between refractive index and temperature is said to be quasi-linear with a negative slope, and yet the values are fitted linearly with r2 = 0.9998. My concern: on the one hand the values are said to be related quasi-linearly, and on the other hand a linear fit is applied to them. How can that be?
Fitting a linear model and something being truly linear are two different things. A purely linear process will be best described by mean coefficients in a linear equation and corresponding noise terms, with all residuals equally and independently distributed over all levels of measurement. There are many ways for data to fail, or to come close to failing, these conditions.
r2 is often called the "coefficient of determination" and/or "proportion explained," but both of these phrases leave out the important premise that the meaning of this proportion is always conditional on the expectation that your data is best construed as a homogeneously distributed sum of squared differences from a population mean. Fitting a linear model basically means tiling your empirical distribution so that all data points are best approximated by the corners of a series of squares, the opposite corner of which is exactly the mean.
If you have nonlinear data, it is entirely possible and even likely to get a strong correlation coefficient for reasons more to do with chance than with the underlying process.
If you are working with quasilinear data you have a few popular options:
1) shrug and just fit the linear model, because that's the easiest for everyone to think about
2) fit a higher-order polynomial to pick up the nonlinear curvature; this is especially good to do if you have theoretical interest in the interpretation of the linear term. Higher-order polynomials are linear-modeling methods of decomposing nonlinear trajectories, so it is important to distinguish between the empirical trajectory and the model you are projecting onto it.
3) try a spline fit if you have a good algorithm or theoretical motivation for where to split the trajectory for separate linear regressions.
Best of luck,
Damian
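A small numeric illustration of Damian's point: the data below are fabricated (a gentle quadratic bend plus noise, loosely mimicking an index-vs-temperature curve), yet the straight-line fit still yields an excellent r2, while adding a quadratic term (option 2) exposes the curvature:

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.linspace(20.0, 60.0, 50)                      # "temperature"
n = 1.50 - 1e-4 * T - 2e-7 * (T - 40.0) ** 2         # quasi-linear "index"
n = n + rng.normal(0.0, 1e-6, T.size)                # measurement noise

# Option 1: a straight-line fit still gives a near-perfect r2 ...
slope, intercept = np.polyfit(T, n, 1)
resid = n - (slope * T + intercept)
r2 = 1.0 - resid.var() / n.var()
print(f"linear r2 = {r2:.5f}")

# Option 2: ... yet a quadratic term shrinks the residuals markedly,
# revealing the curvature the linear model glossed over.
coeffs2 = np.polyfit(T, n, 2)
resid2 = n - np.polyval(coeffs2, T)
print(f"residual SD: linear {resid.std():.2e} vs quadratic {resid2.std():.2e}")
```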
• asked a question related to Complexity Theory
Question
In the past, it was believed that the nature of languages, like that of any other system, is constant and static; however, after several years of study, and with reference to Chaos/Complexity Theory (C/CT), researchers came to the conclusion that languages are complex, nonlinear and unpredictable at their underlying levels. I have read Larsen-Freeman's (1997) article about C/CT, and it is believed that there are issues in SLA that can be illuminated by the theory. Any ideas and elaborations?
Complex Systems Theory can be brought to bear on several aspects of instructed language learning/language acquisition. Some approaches include attempts to reconceptualise language as a complex system, while others have approached this by applying complexity on the psychological aspects of language acquisition, or by thinking of educational settings where language learning takes place as complex systems. It has also been suggested that Complexity can be used to 'bridge' different, yet compatible, theoretical perspectives that have informed the discipline.
While complexity is a promising new paradigm for Applied Linguistics, there are some challenges to be faced. For instance, it seems that transferring Complex Systems Theory without modification from the natural sciences to SLA/Applied Linguistics is not the best way to go. Rather, it appears that we need to 'nativise' it, and this begs the question of how far we can stray from the original theory to suit our purposes, while still claiming to work in the domain of Complexity.
If you are interested in exploring the literature, an obvious starting point would be Larsen-Freeman & Cameron's (2008) Complex Systems and Applied Linguistics, and the special issues of Applied Linguistics (2006) and the Revista Brasileira de Linguistica Aplicada (2013) [it's in English]. Some names to look out for are Diane Larsen-Freeman, Lynne Cameron, Sarah Mercer, and some of the more recent work by Zoltan Dornyei. If you come across anything I have written on the topic, that, too, may be of some use.
• asked a question related to Complexity Theory
Question
I have been using ideas from complexity theory (basins of attraction, butterfly effect, emergence, non-predictability, etc.) as metaphors to help clarify issues of grief and mourning. I would like to move away from using these ideas only as metaphors and begin to use them to model grief and mourning, but I don't have a clue how to go about doing this.
I could list variables that we suspect impact how people grieve (expected or unexpected loss, quality of the relationship, past losses, definitions of the relationship, etc.) but beyond that I haven't a clue.
Any suggestions -- including those saying why this is a fool's errand -- would be appreciated.
mgs 7/6/2014: I changed the title because, when we look at all the issues a person who has experienced a serious loss (say a woman whose husband has died) must face, those issues include not just things we think of as grief but also more practical matters, such as how to live without the deceased (cooking for one, paying taxes, fixing the car...). Since we are attempting to look at all the factors impacting a person's reaction to a loss, we need to include those "restoration-oriented" issues as well.
Michael,
On your last statement: complexity and definitive causality are surely not the best of buddies. And that's precisely why holism rather than reductionism thrives in such cases.
I have very limited expertise in the field, but I have tried to use agent-based models (ABM) for systems that have a human element but can be codified in terms of rules and actions of agents. ABM has been applied in economics; I've done some work applying it to financial risk and, more recently, to cyber security (IT systems). In all these cases the common thread is the human element.

Grief and mourning are similar, with the exception that the number of governing factors will be many times greater. From what the weather is like to what the neighbours think of the situation, the list will go on. But as long as a definitive list can be made, the factors can be modelled, in my opinion.
The bigger challenge in my mind, as you rightly said, is how to measure grief. The one big difference in other systems where I've seen ABM used is that the measure of success (or measure of the quantity of interest) was simple: commercial growth, number of cyber attacks, financial losses, number of car accidents, etc. That, unfortunately, is not the case in your situation.
• asked a question related to Complexity Theory
Question
There is a controversy concerning whether an electronic circuit modeling a dynamical system constitutes a physical experiment or not. Some people argue that it does not, since the electronic circuit is merely an analog computer with some small fluctuations, while the numerical simulations are carried out on a digital computer. From this point of view, the results obtained with the electronic circuit are no different from the ones obtained with the computer simulation. Do you share this point of view? Could you justify, on the contrary, that the electronic circuit indeed constitutes a physical experiment?
Let me give my opinion based on realization theory. Suppose one wants to "realize" a certain dynamical system, possibly nonlinear. A modern way of doing it is to write the corresponding equations (which will be structurally and parametrically uncertain) and simulate them on a digital computer. Another option, which dates back (at least) to the 1950s, is to build a physical system that "realizes" the given equations. The physical system can be mechanical, electrical, electronic, etc. Because of ease of construction and, perhaps, accuracy, realization is best achieved using electrical/electronic components. That is the main reason why analog computers are (or used to be) electrical/electronic.

Now, once a circuit OR a mechanical OR any other physical system has been built to realize a set of equations, it is clear that such a realization will also have structural and parametric uncertainties. So my opinion is that any physical system can be considered a realization of a theoretical set of equations, and in each case there are uncertainties. The point is that usually with electronics the uncertainties are smaller, that's all.

Of course, in practice we move in the other direction: we start from a system (electronic or not), we postulate a set of equations for it (in the case of white-box modeling) and then estimate parameters. If the system is electronic, we stand a better chance of postulating equations that are close to the true dynamics; it is easier to figure out what equations describe an electronic circuit. If the modeling is black-box (where the equations, and not only the parameters, must be estimated), then the differences are even more subtle. In short, an electronic circuit is just as physical an experiment as any other, BUT with fewer uncertainties involved.
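The point about parametric uncertainty in any realization can be illustrated numerically: two "realizations" of the same nominal equations whose parameter differs by 0.1% (a hypothetical component tolerance, chosen for illustration) diverge completely on a chaotic system such as the Lorenz equations:

```python
import numpy as np

def lorenz_trajectory(rho, sigma=10.0, beta=8.0 / 3.0,
                      dt=1e-3, steps=20000, x0=(1.0, 1.0, 1.0)):
    # Fixed-step Euler integration of the Lorenz equations; this
    # stands in for a "realization" whose parameter rho carries a
    # tolerance, whether it is built in silicon or simulated.
    x, y, z = x0
    out = np.empty((steps, 3))
    for k in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[k] = (x, y, z)
    return out

nominal = lorenz_trajectory(rho=28.0)
realized = lorenz_trajectory(rho=28.0 * 1.001)   # 0.1% parameter tolerance
gap = np.linalg.norm(nominal - realized, axis=1)
print(f"max separation over the run: {gap.max():.1f}")
```

Both runs stay on the same attractor, so in a statistical sense the "circuit" and the "simulation" agree, yet individual trajectories diverge to the scale of the attractor itself.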
• asked a question related to Complexity Theory
Question
In the literature, to prove the complexity of scheduling problems people use the 2-PARTITION and 3-PARTITION problems. Can anyone please help me distinguish these problems for the purpose of complexity proofs?
A common mistake is to believe that the polynomial reduction used in complexity theory to define NP-completeness preserves the strong sense. This is absolutely false:
- By definition of NP-completeness, ANY NP-complete problem can be reduced to ANY other NP-complete problem.
- A reduction from an NP-complete problem in the strong sense, say 3-PARTITION, does not prove that your problem is NP-complete in the strong sense. It simply proves that your problem is NP-complete. In particular, there does exist a reduction from 3-PARTITION to PARTITION.
- Conversely, a reduction from an NP-complete problem in the ordinary sense does not prevent your problem from being NP-complete in the strong sense. In particular, there does exist a reduction from PARTITION to 3-PARTITION.
To preserve the strong sense, you need more than the usual reduction. In their book, Garey & Johnson define (around page 100) the notion of pseudo-polynomial reduction for this purpose. Basically, it ensures that in the transformation the maximum value does not grow exponentially and the size of the encoding is not shrunk exponentially.
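The practical difference between the two senses is visible in code. PARTITION (NP-complete only in the ordinary sense) admits a pseudo-polynomial dynamic program, whereas no such algorithm is known for 3-PARTITION, which is strongly NP-complete; a minimal sketch:

```python
def partition(nums):
    # Pseudo-polynomial dynamic program for PARTITION: O(n * S) time
    # where S = sum(nums).  This is polynomial in the *value* S but
    # exponential in its encoding length, which is exactly why
    # PARTITION is NP-complete only in the ordinary sense.
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                    # subset sums reachable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(partition([3, 1, 1, 2, 2, 1]))  # True: e.g. {3, 2} vs {1, 1, 2, 1}
print(partition([1, 2, 5]))           # False: no equal split exists
```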
• asked a question related to Complexity Theory
Question
What is the difference between NP, NP-hard and NP-complete, and how does P relate to NP in the complexity of computational problems?
Thank you.
• asked a question related to Complexity Theory
Question
What are the general characteristics of a problem in PSPACE?
I wouldn't say "why is PSPACE required". If you choose not to study it, that's entirely up to you; one wants to stand on the shoulders of giants when doing research. Theoreticians have figured out that problems sitting in PSPACE have certain properties. For example, problems in P and NP are all in PSPACE. It is of interest to those working on intractable problems that may be exhaustive in nature (may show EXP-like behaviour). Since we know EXPSPACE is not contained in PSPACE, showing that a problem is in PSPACE is very useful. Keep in mind that PSPACE membership means there exists an algorithm by which the problem (its decision variant) can be solved in polynomial space with respect to the input size.
• asked a question related to Complexity Theory
Question
Edgar Morin is a sociologist and philosopher, and Emeritus Director of Research at the Centre National de la Recherche Scientifique (CNRS) in France. He has concentrated on developing a method that can meet the challenge of the complexity of our world and reform thinking, preconditions for confronting all fundamental, global problems.
Very recently he published a new article, "Complex Thinking for a Complex World – About Reductionism, Disjunction and Systemism". I would really like to share this latest publication with you all, as many of the questions here on RG can find a useful and comforting answer in his own words and vision. I attach the paper for your benefit. I resubmit this question since apparently the first submission cannot be seen by anyone but me.
Hi everyone, just to say that the "method" advanced by Edgar Morin is not a "method" in the traditional meaning of the term; it is based on the relativity of different methods and therefore deals with uncertainty. That is what we have to face nowadays. In this sense, if you go through what Morin says, you will find much to think about, even though it does not represent a summary of Morin's thought. See, for example, "Method: The Nature of Nature", the first of several volumes exposing Edgar Morin's general-systems view of life and society. The volume maintains that the organization of all life and society necessitates the simultaneous interplay of order and disorder. All systems, physical, biological, social, political and informational, incessantly reshape part and whole through feedback, thereby generating increasingly complex systems. For continued evolution, these simultaneously complementary, concurrent, and antagonistic systems require a priority of love over truth, of subject over object, of Sy-bernetics over cybernetics.
I personally think that the human perspective of Morin's thought could be the key to our survival on Earth. See, for comparison, the article by Donella Meadows*, which is not so far from Morin's thought. There is a kind of convergence that should not surprise us, because whoever is able to have visions of great scope is also able to produce syntheses that have many common features.
• asked a question related to Complexity Theory
Question
How can we write the recurrence for the Euclidean algorithm to calculate the GCD of two numbers?
gcd(a, b) = gcd(b, a mod b) for a >= 0 and b > 0, with the base case gcd(a, 0) = a.
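That recurrence transcribes directly into code; the logarithmic depth bound noted in the comment is a standard fact about Euclid's algorithm, not something the answer above states:

```python
def gcd(a, b):
    # Euclid's recurrence: gcd(a, b) = gcd(b, a mod b), gcd(a, 0) = a.
    # The second argument at least halves every two steps, so the
    # recursion depth is O(log min(a, b)), tight for Fibonacci inputs.
    return a if b == 0 else gcd(b, a % b)

print(gcd(252, 105))  # 21
```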
• asked a question related to Complexity Theory
Question
Does complexity theory equal chaos theory?
EDIT: after this note, the author removed the tag "Computational Complexity Theory" and added the more appropriate tag "Complexity Theory".
Just a quick note (I saw that this question had been, perhaps wrongly, tagged with "Computational Complexity Theory"): "complexity theory" studies complex systems; the relations with chaos theory are briefly explained in the Wikipedia entry (http://en.wikipedia.org/wiki/Complex_systems#Complexity_and_chaos_theory). "Computational complexity theory" is a very different field; it is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and on relating those classes to each other. For a quick introduction see the Wikipedia entry http://en.wikipedia.org/wiki/Computational_complexity_theory or a good book such as M. Sipser, "Introduction to the Theory of Computation".
• asked a question related to Complexity Theory
Question
The concept of complexity increasing over uninterrupted evolutionary time is an area of dispute in the literature. The evidence for it seems to be mixed. Since theory can help reduce uncertainty, I was wondering about the range of theoretical reasons that have been proposed to explain why complexity appears to increase over evolutionary time.
There is an answer in Steve Gould's (1996) book "Full House", and it goes like this:
There is a threshold of minimum complexity for full-functioning organisms which was crossed (once?) when life originated but never during the latter course of biological evolution (otherwise a living organism would have degraded to non-living matter).
Starting from a set of organisms with the lowest possible complexity, steps of evolution lead equally often to (a) a more complex organism, (b) an organism of similar complexity, or (c) a less complex organism (if possible).
If you now run a simulation of evolution, e.g. a Monte Carlo experiment, and implement the boundary conditions mentioned above, you will find that over time an extremely right-skewed complexity distribution arises with the bulk of organisms (e.g. bacteria) close to the threshold of lowest complexity and a few organisms becoming increasingly complex (maximum complexity increases passively over time).
Gould says that the human view of evolution is biased and that we (wrongly) perceive an overall complexity increase over time, because we tend to neglect the bulk of organisms that have stayed close to the threshold of lowest complexity over billions of years.
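The Monte Carlo experiment described above is easy to sketch; population size, step count, and the unit step are arbitrary choices for illustration, not values from Gould's book:

```python
import random
import statistics

random.seed(42)
FLOOR, POP, STEPS = 1, 5000, 200   # minimum viable complexity, lineages, steps

# Every lineage starts at the floor of minimum complexity.  Each step
# moves complexity up, down, or nowhere with equal probability, but the
# floor reflects: life never drops below minimal complexity.
lineages = [FLOOR] * POP
for _ in range(STEPS):
    lineages = [max(FLOOR, c + random.choice((-1, 0, 1))) for c in lineages]

print("lineages still at the floor:", lineages.count(FLOOR), "of", POP)
print("right-skewed (mean > median):",
      statistics.mean(lineages) > statistics.median(lineages))
print("maximum complexity reached:", max(lineages))
```

With only the reflecting boundary and unbiased steps, the distribution comes out right-skewed with its mode at the floor, while the maximum drifts passively upward, exactly the pattern Gould describes.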
• asked a question related to Complexity Theory
Question