Science topic

# Complexity Theory - Science topic

Explore the latest questions and answers in Complexity Theory, and find Complexity Theory experts.

Questions related to Complexity Theory

I have a new idea (by a combination of a well-known SDP formulation and a randomized procedure) to introduce an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $2 - \epsilon$.

You can see the abstract of the idea in the attached file and the latest version of the paper at https://vixra.org/abs/2107.0045

I would be grateful if anyone could give me informative suggestions.
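For context on the target ratio: the classic maximal-matching algorithm achieves a factor of 2, which is the baseline a $2 - \epsilon$ result would improve on. A minimal sketch of that textbook algorithm (not the SDP-based idea of the question):

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation: greedily pick an uncovered edge and
    take BOTH of its endpoints into the cover. The chosen edges form a
    matching, so any optimal cover must contain at least one endpoint
    per chosen edge; hence |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

Beating factor 2 by a constant for all graphs is a long-standing open question (and is ruled out under the Unique Games Conjecture), which is why a verified $2 - \epsilon$ algorithm would be a major result.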

We see at least two very dangerous features in post-covid China:

1) As we show in the attachments JEE 2020 and ICC 2020, governments should be very careful in trying to control prices or fix maximum/minimum thresholds. The price dynamics in complex economies condense a lot of scattered information, are an emergent property, and, under certain circumstances, help correct disequilibria (see the papers attached). Because of cumulative wrong centralized decisions, disequilibria are multiplying in the Chinese economy (industrial, construction and energy sectors), and the authorities are not allowing prices to act as correcting re-adjustment signals. This is really dangerous, as we show in papers ICC 2020 and JEvEcs 2020 attached (by Almudi et al.).

2) Secondly, as we show in Metroeconomica 2020 (also attached), in a context of increasing prices (shortages of energy and post-covid bottlenecks in global value chains), with high stocks of private debt, and everything developing within an otherwise innovative economy with low (but increasing) interest rates, the probable slight increase in inflation rates expected for the upcoming months will unchain a domino effect, with emergent "big rips" in the socio-economic Chinese system.

1) and 2) may announce a long (a decade) stagnation of the Chinese economy. It seems that the European Union is perhaps the last worldwide agent to notice this. China is no longer a clear option. Still, is China too big to fail?

Business management (and operations) has many intertwined aspects, which constantly interact with each other, raising the complexity of it, as a 'system'. Modelling a complex system is difficult due to dependencies and adaptive behaviour.

However, such complex 'systems' self-re-organise and become sustainable. A closely related concept, **chaos**, indicates that a change in the initial conditions can bring out randomness, even under deterministic laws. Though **chaos and complexity theories** are interrelated and multi-disciplinary in nature, very few applications are found in business management research. The onset of the Covid-19 pandemic has presented a unique social context for **chaos and complexity**.

Fellow scholars of this RG are requested to highlight:

a) recent trends in research in this area (how **chaos** is measured and analysed);

b) recent applications of **chaos and complexity theories** in the field of business management;

c) modelling techniques related to **chaos and complexity theories**.

During my PhD studies I wondered at what point a system becomes complex or even chaotic. I conducted a laboratory study under defined boundary conditions, on fairly homogeneous rock samples, so I could predict pretty well how the samples would behave. Then I read a publication about fractals in geomechanics, the complexity of systems and chaotic behaviour. Obviously, complex or chaotic behaviour increases with the number of variables and uncontrollable factors.

But at what point does a system become complex? Is it a matter of size? Is it a matter of the number of variables? Is it a matter of the viewpoint? Is there any quantification of when a system becomes complex or even chaotic?
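One common quantification of chaos (as opposed to mere complexity) is the largest Lyapunov exponent: positive means nearby trajectories diverge exponentially. A minimal sketch on the logistic map, a standard textbook system rather than the rock-mechanics setting of the question:

```python
import math

def lyapunov_logistic(r, n=100_000, x0=0.4):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1-2x)| along the orbit."""
    x, s = x0, 0.0
    for _ in range(n):
        s += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return s / n
```

For r = 4 the map is chaotic and the exponent converges to ln 2 ≈ 0.693; for r = 3.2 the orbit settles on a period-2 cycle and the exponent is negative, i.e. not chaotic however "complex" the trajectory may look.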

Can we apply theoretical computer science to proofs of theorems in mathematics?

As we know, the Cauchy integral formula in the theory of complex functions holds for a complex function which is analytic on a simply connected domain and continuous on its boundary. This formula appears in many textbooks on complex functions.

My question is: where can I find a generalization of the Cauchy integral formula for a complex function which is analytic on a multiply connected domain and continuous on its boundary?
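For reference, the standard form the generalization takes (stated here from standard complex analysis, not from a specific source): if $D$ is a bounded multiply connected domain whose boundary consists of an outer contour $\Gamma_0$ and inner contours $\Gamma_1,\dots,\Gamma_k$, with $f$ analytic on $D$ and continuous on $\overline{D}$, then for $z \in D$

```latex
f(z) = \frac{1}{2\pi i}\oint_{\Gamma_0}\frac{f(\zeta)}{\zeta-z}\,d\zeta
     \;-\; \sum_{j=1}^{k}\frac{1}{2\pi i}\oint_{\Gamma_j}\frac{f(\zeta)}{\zeta-z}\,d\zeta,
```

where every $\Gamma_j$ is traversed counterclockwise; equivalently, a single integral over $\partial D$ taken with the orientation that keeps $D$ on the left. Statements of this form appear in texts covering Cauchy's theorem for multiply connected domains.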

Can complexity theory solve, completely or partially, problems in mathematics?

Hi,

I am currently doing a study about using and learning English in multilingual cities (e.g. Sydney, Australia; Auckland, NZ). I am particularly interested in how a big L1 community and frequent exposure to an L1-using environment could influence people's English language development when studying and living in a multilingual city.

Is there an existing theory or framework about this topic, or about learning and using English in a multilingual society?

I am preparing to write an article about the development of the basic assumptions or way of thinking toward organisations and people that underpin Strategic Management.

About the development of Strategic Management, I have read some articles and books relating to systems theory and complexity theory. Can anybody recommend a book or article that systematically describes the development of Strategic Management and the characteristics of each phase? Thank you very much.

**A)** As evolutionary-NeoSchumpeterian (or complexity-oriented) economists, we conceive of the economy as a dynamic system in which scattered, heterogeneous and boundedly-rational agents interact. Local and global interactions involving feedbacks and domain-specific connections involve producing, investing, consuming, distributing incomes, trading in general, learning, innovating, entry/exit, etc. And the ongoing development of the specific dynamics we propose for exploring a problem generates "EMERGENT PROPERTIES".

**B)** The methodologies we use range from verbal logical arguments (which of course can be genuinely complex) to complex ABMs, passing through non-linear highly stylized models, replicator dynamics and evolving complex networks with the aforementioned components.

**C)** The specific methodology used is not innocuous. Thus, whereas verbal arguments involving real complexity are often almost inextricable, ABMs are somewhat more enlightening (the less so the higher the scale), and, in my opinion, the subset of low-scale ABMs, enriched replicator dynamics, networks and non-linear stylized complex models is the best. They often even allow for closed-form, quasi-exhaustive analysis.

**D)** The problem is: how should we pass from the results we obtain in our theory to the posing of policy recommendations to be implemented within a reality which we perceive as emerging from a complex system?

Notice that there are two sources of complexity (2 complex realms involved):

**1) The inherent complexity of the real system** under scrutiny.

**2) The often black-boxed complexity of the theory we propose.**

We know that even small differences between two evolving complex systems can make a huge difference in their outcomes. If we assume (as we should) that we can never access the "real complex mechanism underlying reality" (we should merely aspire to approach it, at least in the social sciences), we should be very prudent in our policy prescriptions.

**E)** The **solution** prescribed by those using simple models (mainstream economic models or simple statistical models) is not valid, since they begin by assuming that **reality is SIMPLE** (instead of complex), and they thereby falsely avoid the problem. Why should social reality be simple in its functioning? The historical record of crises and social distortions, and the analogies with natural systems, point to a clear failure of the standard approach. Thus, if we accept complexity:

How do you address **the issue of double complexity, 1) and 2)?**

Synchronization and memory costs are becoming humongous bottlenecks in today's architectures. However, algorithm complexity analyses assume these operations are constant, done in O(1) time. What are your opinions in this regard? Are these good assumptions in today's world? Which algorithm complexity models assume higher costs for synchronization and memory operations?

Is there any quantum entanglement based solution to simulate the dynamics of classically interacting three bodies?

Stepwise multiple regression is used to assess the extent to which a dependent variable can be predicted by a combination of variables. It identifies which variables contribute to explaining and predicting the dependent variable, and generates R, R-squared and related values to indicate how much variation the combination of variables, or an outstanding variable, accounts for. Can someone who holds a Chaos or complexity theory view of how a foreign language is learnt use stepwise multiple regression? Does that make sense? For example, I am aware that there are too many factors, such as the learners' cultural backgrounds and reasons for learning, which influence second language learners' attitudes towards their teachers' instruction. These factors, perhaps along with other unknown factors, interact with one another in complex systems, and the interactions and their results are unpredictable. The purpose of using statistical methods is to understand part of the story between some factors. Besides, stepwise multiple regression does not intend to measure a factor by isolating it from others. Does this make sense?

Or, at least, can he/she use Chaos or complexity theory to discuss the findings?
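For what it is worth, a minimal forward-stepwise sketch (synthetic data; all names are mine) shows mechanically what the procedure does; whether its add-one-variable-at-a-time logic is compatible with a complex-systems view is exactly the question being asked:

```python
import numpy as np

def forward_stepwise(X, y, tol=1e-4):
    """Greedy forward selection: repeatedly add the predictor that most
    increases R^2, stopping when the improvement falls below `tol`."""
    n, k = X.shape
    selected, remaining = [], list(range(k))

    def r_squared(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        tss = (y - y.mean()) @ (y - y.mean())
        return 1.0 - (resid @ resid) / tss

    best = r_squared([])  # intercept-only model: R^2 = 0
    while remaining:
        score, c = max((r_squared(selected + [c]), c) for c in remaining)
        if score - best < tol:
            break
        selected.append(c)
        remaining.remove(c)
        best = score
    return selected, best
```

Note that the greedy order can change entirely under small data perturbations when predictors interact, which is one concrete way the complexity-theoretic objection shows up.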

So far, min-max optimization has several methods for proving its complexity. Do you have any suggestions for proving the complexity of min-inside-min optimization?

Let's say we have implemented an algorithm and recorded its execution time while varying the input size. How can we infer the cost function of that algorithm?
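One practical approach, assuming the cost is dominated by a polynomial term: measure the cost at several input sizes and fit the slope on a log-log scale, since cost ≈ C·n^p gives log(cost) ≈ p·log(n) + log(C). In this sketch the "time" is a deterministic operation count rather than wall-clock time, to avoid measurement noise:

```python
import numpy as np

def bubble_comparisons(n):
    """Count comparisons made by bubble sort on a reversed list:
    exactly n*(n-1)/2, i.e. Theta(n^2)."""
    a, count = list(range(n, 0, -1)), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return count

sizes = np.array([100, 200, 400, 800])
costs = np.array([bubble_comparisons(int(n)) for n in sizes])
# Fit log(cost) = p*log(n) + c; p estimates the polynomial exponent.
slope, _ = np.polyfit(np.log(sizes), np.log(costs), 1)
```

With real timings, average many runs per size and be aware that a fitted slope cannot distinguish, say, n^2 from n^2·log n over a narrow range of sizes; only an asymptotic analysis of the code can do that.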

Hello,

Does this project aim to address the theory of dynamic systems from the point of view of pedagogy, or does it intend to study the possibilities for reformulations and re-conceptualizations of pedagogy from the perspective of the theory of complex systems?

In any case, I think that this project is very interesting, and useful for the knowledge society too.

If I misunderstood, please give me some details about the objectives of your proposed project.

Sincerely,

Bogdan Nicolescu

It seems that the paradigm of the Social Determinants of Health is no longer enough to explain health and the dynamics of disease. Is it time to propose new and better explanatory models?

Food is multidimensional. In order to understand what food is and what it means, only a holistic approach seems suitable, which means putting together knowledge and methodologies from disciplines like history, economics, sociology, anthropology, psychology, agronomy, nutrition, ecology and so on. But this also means being able to deal with all these approaches at the same time. So, is there any academic research that tries to link complexity and systems theory with food research?

I would like to change the following linear programming model to restrict the decision variables to two integers, namely a and b (a<b):

minimize (1,1,...,1)' e

(Y-Zx) > -e

-(Y-Zx) > -e

where Y is an n-dimensional vector, Z is an n \times k matrix and x is a k-dimensional vector. e represents an n-dimensional vector of errors which needs to be minimized. In order to make sure that the x's can only take values equal to "a" or "b", I have added the following constraints, keeping the original LP formulation:

-a/(b-a) - (1/2)' + I/(b-a) x > -(E/(b-a) +(1/2)')

-(-a/(b-a) - (1/2)' + I/(b-a) x ) > -(E/(b-a) +(1/2)')

where I stands for a k \times k identity matrix and E is a k-dimensional vector of deviations which needs to be minimized (subsequently, the objective would be to minimize (1,1,...,1)' (e; E)).

But there is still no guarantee that the resulting optimal vector consists only of a's and b's. Is there any way to fix this problem? Is there any way to give a higher level of importance to the two latter constraints than to the two former ones?
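One standard fix (a sketch, not a claim about the original formulation): no set of added linear constraints can force integrality inside a plain LP, because the feasible region stays convex. Instead, substitute x = a·1 + (b − a)·z with z binary, which turns the L1 fitting problem into a small mixed-integer LP. A sketch using `scipy.optimize.milp` (function and variable names are mine):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def l1_fit_two_values(Y, Z, a, b):
    """L1 regression with every coefficient restricted to {a, b}.
    Substituting x = a + (b - a) * z, z binary, gives the MILP:
        min sum(e)   s.t.  -e <= Y - Z @ x <= e,  z in {0,1}^k.
    Decision vector: [z (k binary), e (n continuous)]."""
    n, k = Z.shape
    c = np.concatenate([np.zeros(k), np.ones(n)])  # minimize sum of errors
    base = Y - a * Z.sum(axis=1)                   # Y - Z @ (a * ones)
    M = (b - a) * Z
    I = np.eye(n)
    #   base - M z <= e   ->  -M z - e <= -base
    # -(base - M z) <= e  ->   M z - e <=  base
    A = np.block([[-M, -I], [M, -I]])
    ub = np.concatenate([-base, base])
    integrality = np.concatenate([np.ones(k), np.zeros(n)])
    bounds = Bounds(np.zeros(k + n),
                    np.concatenate([np.ones(k), np.full(n, np.inf)]))
    res = milp(c=c, constraints=LinearConstraint(A, -np.inf, ub),
               integrality=integrality, bounds=bounds)
    z = np.round(res.x[:k])
    return a + (b - a) * z
```

As for weighting: in an LP you can scale penalty terms in the objective (e.g. minimize (1,...,1)'e + w·(1,...,1)'E with large w), but that only encourages, and never guarantees, integrality; the binary reformulation is the reliable route.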

Complexity Theory was developed in the 1970s, almost 50 years ago, with the goal of classifying algorithms according to the degree of difficulty of their execution on computers. The degree of difficulty is understood as the number of elementary operations (NEO) (addition, subtraction, multiplication, division, exponentiation, and so on) which must be used when searching for the exact (optimal) solution of a given combinatorial model. Moreover, it should be emphasized that this NEO is evaluated for the worst case of the initial data. That is, this NEO is an upper limit on the complexity of the model, which guarantees solving the problem within this bound.

The classes "P" and "NP" were introduced. The class "P" covers all combinatorial algorithms for which the NEO is estimated by some polynomial in the parameter "n", for example O(n^3), where n is the total number of different initial data in the problem. The class "NP" covers all combinatorial algorithms for which the NEO is estimated by some exponential function a^n, for example 2^n. It is believed that, theoretically, algorithms from class P are good algorithms while algorithms from the NP class are bad ones. However, this theory has not been developing for decades; we can say that it has practically stopped in its development, and we hardly notice anything new.

From the point of view of practice, there is another perspective: a good algorithm from class P is not always an effective algorithm, since there are restrictions imposed by the time limit for execution. For example, an algorithm with complexity O(n^m) becomes impractical at values m > 10, and an algorithm with complexity O(n^6) becomes impractical at large values n > 1000. In order to solve a problem within a given time limit, various approximate algorithms are proposed, which can be divided into two categories.
The first category is ε-algorithms, which can solve the problem with a given ε-accuracy. Such algorithms produce approximate solutions A(D) = OPT(D) * (1 + ε), where 0 < ε <= 1, A(D) is the approximate solution produced by an approximate algorithm "A" for the initial data D, and OPT(D) is the exact (optimal) solution. For such algorithms the complexity of execution can be expressed, for example, as O((n^3)/ε) for a given accuracy ε.

The second category is heuristic algorithms, for which the result is unpredictable in advance. Among this category there are remarkable theoretical results of the form A(D) <= a * OPT(D) + b, where "a" and "b" are real constants, a >= 1, b >= 0. Here the expression A(D) <= a * OPT(D) + b is valid for all kinds of input data D and is in fact the worst case over all possible cases of D. It should also be noted that the number of ε-algorithms is very small, and even fewer heuristic algorithms with theoretical guarantees A(D) <= a * OPT(D) + b are known. Thus, in practice we are dealing with heuristic algorithms whose approximate solution A(D) has unpredictable quality.

We introduce the metric q = 100% * (A(D) - OPT(D)) / OPT(D) as the measure of closeness of the approximate solution A(D) to the optimal solution OPT(D). Without loss of generality, we can say that q = 100% * (a - 1); in particular, if q = 0, then A(D) = OPT(D). Let's imagine that some heuristic algorithm "A" produces a sequence of solutions A(D, t_0), A(D, t_1), A(D, t_2), ..., A(D, t_k) at instants t_0, t_1, t_2, ..., t_k, where A(D, t_0) > A(D, t_1) > A(D, t_2) > ... > A(D, t_k) and t_0 < t_1 < t_2 < ... < t_k <= T within the specified time limit. Here A(D, t_0) represents the initial solution at time t_0. The question arises: how close is the solution A(D) = A(D, t_k) to the optimum OPT(D), which is unknown?
Sometimes there are cases when the initial solution A(D, t_0) is already optimal, but the heuristic algorithm does not know anything about it, and the process of searching for new solutions continues until the time limit has elapsed. That is, we see a situation where the heuristic algorithm works "in a blind mode". It is important to understand one thing here: finding a solution is not an end in itself, although finding a good solution certainly matters. More important is to understand the search process correctly, in order to stop the search at the moment when further improvement is impossible. And here we face the problem of estimating OPT(D) "from below", as a Lower Bound LB(D) <= OPT(D). Let's imagine that some exact algorithm LB generates a sequence of lower bounds LB(D, t'_0), LB(D, t'_1), LB(D, t'_2), ..., LB(D, t'_k') at the moments t'_0, t'_1, t'_2, ..., t'_k', where LB(D, t'_0) < LB(D, t'_1) < LB(D, t'_2) < ... < LB(D, t'_k') and t'_0 < t'_1 < t'_2 < ... < t'_k' <= T', with T' the specified time limit. Here LB(D, t'_0) represents the initial lower bound at time t'_0.

As an exact algorithm LB one can consider, for example, the widely known branch & bound method, which works very well in practice. As another exact LB algorithm one can use Linear Relaxation and other special methods for finding lower bounds. A general method can consist in reducing the original model to another one with other initial data D', for which the condition OPT(D') < OPT(D) is proved, after which the lower bound LB(D') <= OPT(D') is found. We denote the best lower bound found by LB(D) = LB(D, t'_k') and define the metric p = 100% * (A(D) - LB(D)) / LB(D). Obviously p >= q, since LB(D) <= OPT(D). Thus, even if we do not know the optimal (exact) solution OPT(D), we can estimate the approximate solution A(D) within the time limit T + T'.

We can draw two imaginary curves in a Cartesian coordinate system. The first curve, LB(D, t), is increasing and passes through the points {LB(D, t'_0), t'_0}, {LB(D, t'_1), t'_1}, {LB(D, t'_2), t'_2}, ..., {LB(D, t'_k'), t'_k'}. The second curve, A(D, t), is decreasing and passes through the points {A(D, t_k), t_k}, ..., {A(D, t_2), t_2}, {A(D, t_1), t_1}, {A(D, t_0), t_0}. The values on the two curves form the sequence LB(D, t'_0) < LB(D, t'_1) < LB(D, t'_2) < ... < LB(D, t'_k') <= OPT(D) <= A(D, t_k) < ... < A(D, t_2) < A(D, t_1) < A(D, t_0).

Now let's formulate the problem. Suppose a time limit T is given. We need to find points {LB(D, t'_x'), t'_x'} and {A(D, t_x), t_x} on the curves LB(D, t) and A(D, t) with t_x + t'_x' <= T such that the value p' = 100% * (A(D, t_x) - LB(D, t'_x')) / LB(D, t'_x') is minimal. In other words, for a given time limit T we must:

1. Find the approximate solution A (D, t_x) within the time limit t_x

2. Find the lower bound LB (D, t'_x ') within the time limit t'_x'

These two quantities form the quality of the solution, p'. By changing the time limit T we can control the quality of the solution: the larger T is, the smaller p' will be (that is, the better the solution quality). If we use the entire time limit to find only an approximate solution, then nothing changes compared to current practice: we still know nothing about the quality of the solution. If it turns out that A(D, t_x) = LB(D, t'_x'), it means that we have found the optimal solution, about which the heuristic algorithm previously knew nothing. We introduce the class E (Effective Algorithms) for algorithms with which problems can be solved in the way described above. That is, it is necessary to develop not one algorithm, as in Complexity Theory, but two:

1. Heuristic Algorithm for finding for an Approximate Solution

2. Exact Algorithm for finding a Lower Bound
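As a concrete illustration of this pairing (my example, not from the text): on bin packing, first-fit decreasing can serve as the heuristic producing A(D), and the trivial volume bound ceil(total size / capacity) as a lower bound LB(D), with the quality metric p computed from the pair:

```python
import math

def first_fit_decreasing(items, capacity):
    """Heuristic: approximate solution A(D) = number of bins used."""
    bins = []  # current load of each open bin
    for size in sorted(items, reverse=True):
        for i, load in enumerate(bins):
            if load + size <= capacity:
                bins[i] = load + size  # item fits in an existing bin
                break
        else:
            bins.append(size)          # open a new bin
    return len(bins)

def lower_bound(items, capacity):
    """Exact lower bound LB(D): total volume can never fit in fewer bins."""
    return math.ceil(sum(items) / capacity)

items, capacity = [7, 6, 5, 4, 3, 2, 1], 10
A = first_fit_decreasing(items, capacity)  # A(D)
LB = lower_bound(items, capacity)          # LB(D) <= OPT(D)
p = 100.0 * (A - LB) / LB                  # quality metric p
```

On this instance A = LB = 3, so p = 0 and the heuristic solution is certified optimal without ever computing OPT(D), which is exactly the mechanism the text advocates.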

In class E, all algorithms share the same metric p', which is calculated for a given time limit T. By default, one can set t_x = 0.5 * T and t'_x' = 0.5 * T. Class E is much closer to practice and gives a clearer quality criterion for an algorithm. In this class the difference between classes P and NP disappears: all algorithms of this class have only one criterion, the minimum value of p' within a given time limit. If this is ensured, then an algorithm with small values of p' is effective; otherwise it is ineffective, regardless of the value of the approximate solution found. In a word, the decisive role here belongs not so much to the search for a good approximate solution alone (although this is of no small importance) as to the search for an integrated solution, "Approximate Solution & Lower Bound", from which we can judge the quality of the approximate solution. So we can summarize with one short expression:

**class E vs class P & NP**.

The final question is: **should we review the traditional Complexity Theory and consider instead the Efficiency Theory?**

I'm currently involved in a research project that is related to Highly Integrative Questions (HIQ's).

To define the landscape of those "next level client questions" we initiated a research project:

How to define HIQ’s?

How to approach HIQ's?

What are cases that relate to HIQ’s?

How can we learn from those cases?

What kind of guidance and facilitation are needed in the process?

Some buzzwords: Complexity Theory, Integrative Thinking, Social Innovations

I am able to reproduce Pyragas' results from his 1992 paper for the Rössler system operating in the spiral regime (e.g. a=0.2, b=0.2 and c=5.7). However, it has been much harder to find situations where a UPO of that system operating in the funnel regime (e.g. a=0.28, b=0.1 and c=8.5) is stabilized using Pyragas' delay control law. I am able to stabilize period-one UPOs but have not yet found UPOs with longer periods. Does anyone have information (references) on this problem?

Thank you in advance.

Looking through the literature, I realized that all the proofs of NP-hardness of QIP are based on the claim that Binary Quadratic Integer Programming is NP-hard. Is that true?

Can Fractal Theory explain the urban fabric? What could its applications be in traditional Muslim settlements?

Can Chaos Theory explain the urban fabric? What could its applications be in traditional Muslim settlements?

The structure of this problem is similar (not identical) to that of other problems that admit simple solutions. Maybe the colleagues of this community could help me in identifying a solution to this problem.

In light of demarcation criteria, does the geocentric theory turn out to be non-scientific compared with the heliocentric cosmological theory? Is science not simply one vision surpassed by another?

Could a distributed gradient algorithm be considered a game-theoretic method?

It is discovered in the abstract intelligence theory [Wang, 2009; Wang et al. 2017] of intelligence science that AI may merely carry out imperative, reflexive, iterative and recursive intelligence. However, more sophisticated human intelligence, such as cognitive, causal and inductive intelligence, will hardly be implemented by traditional computational power, because none of the advanced forms of the aforementioned intelligence are iterative, and the sizes of the recursions are normally infinite.

Further information may be found in:

Wang, Y. (2009), On Abstract Intelligence: Toward a Unified Theory of Natural, Artificial, Machinable, and Computational Intelligence, International Journal of Software Science and Computational Intelligence, 1(1), 1-17.

Yingxu Wang, Lotfi A. Zadeh, Bernard Widrow, Newton Howard, Françoise Beaufays, George Baciu, D. Frank Hsu, Guiming Luo, Fumio Mizoguchi, Shushma Patel, Victor Raskin, Shusaku Tsumoto, Wei Wei, and Du Zhang (2017), Abstract Intelligence: Embodying and Enabling Cognitive Systems by Mathematical Engineering, International Journal of Cognitive Informatics and Natural Intelligence, 11(1), 1-15.

How does one prove that an optimization problem is NP-hard, especially when co-channel interference is considered? I would be greatly grateful if someone could give me an example. It would be better if the example were in a non-orthogonal multiple access (NOMA) scenario.

The experimental results that Leonard Adleman obtained in 1994 while using DNA to compute the directed Hamiltonian path problem with just 7 nodes do not seem encouraging. It took him several days of lab work to complete the experiment. Furthermore, although DNA computation offers potential in terms of data storage, there is also the issue of molecular volatility.

Can biological computation provide a better alternative to classical computation?

I understand that chaos theory focuses on deterministic chaos, while evolutionary algorithms are stochastic. But are there techniques that apply to both? For instance, when my solution fails to converge, are there ways to visualize the system and look for things like attractors that might indicate problems?

Several studies have addressed complexity theory theoretically, advancing the discussion about the boundaries of this theory. However, when analyzing complexity in organizations empirically, the difficulties are many. I am conducting a survey of the best **quantitative methods** for empirically studying complexity in organizations. What do you think about it? What empirical research would you point me to?

In my view the world is full of real systems which represent one side of a coin. The other side of the same coin represents complex systems while reflecting real spectra. As we are human beings, we can see only the real nature of a system, reflecting stable real spectra.

Prof B. Rath

Curriculum Integration has been one of the most complex theories for me, as an educationalist, to understand. Integrating the curriculum seems to entail multiple aspects of the educational context (e.g., historical, philosophical, economic, etc.). Understanding how these aspects interact with each other is of key importance to achieving a truly integrated curriculum. One of these interactions has caught my attention: the conflicts between curriculum integration and power relationships. I would like to comprehend the political aspects of curriculum integration, especially in highly hierarchical disciplines such as medical education.

In biological systems, how can we say whether a given behaviour (for example, genome behaviour) is chaotic or random? Is entropy useful?

The diagonal elements are non-zero, so the inverse of the (diagonal) matrix is easily computed by taking the reciprocal of each diagonal element. Is the complexity of this O(n)?
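Yes: inverting a diagonal n × n matrix takes n reciprocals, hence O(n) arithmetic operations (the off-diagonal zeros need no work), versus O(n^3) for general Gaussian elimination. A minimal numpy check:

```python
import numpy as np

# A diagonal matrix with non-zero diagonal entries.
D = np.diag([2.0, 4.0, 5.0])

# O(n) inverse: reciprocal of each of the n diagonal elements.
D_inv = np.diag(1.0 / np.diag(D))
```

Note the O(n) count assumes the matrix is stored as its diagonal vector; if a dense n × n array must be allocated and written, that step alone costs O(n^2).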

Can anyone refer me to research exploring the application of complexity theory to the criminal justice system? Or the application of complexity theory to a complex social phenomenon i.e. human trafficking?

Suppose I have to solve **a problem** which **consists of 2 NP-complete problems**. Now I would like to know: what will be the **complexity class** of the new **problem**? Can anyone suggest a paper regarding this topic?

Thank you in advance.

I have identified some 50 CAS concepts commonly used by authors in the paper. They range from those derived through chaos theory and agent-based modeling, to self-organization of agents as they interact and co-adapt, to emergence, etc. I have created one brief story that weaves together clusters of these concepts, and then another brief story that weaves together the clusters.

How important is it to have a coherent story as one applies the concepts to new areas of inquiry? Is a universal story across domains necessary? What is it?

I am interested in the processes of diffusion and sustainability of innovations, and in finding the connections between actions that enable and inhibit further adoption beyond the first wave of early adopters of proven elearning innovations in universities.

Most people speak about, and work on, complexity science, systems science, cybernetics, and complex thinking (i.e. Morin) as though they were the same. Although a sort of demarcation criterion has been repeatedly worked out between normal science and complexity theory, very little (if any) work has been done on demarcation criteria among the above.

Say we have a complex network made of *n* sub-networks and *m* nodes. Some of the sub-networks share some of the *m* nodes. Say that such a complex network (aka an **Interdependent Network**) is under attack, and say that this attack is neither targeted (e.g., it does not look for high-degree nodes only) nor random, but spatial (both low-degree and high-degree nodes are being removed). Now, say that the failure cause is external, in addition to being spatial, and that it can feature many levels of spatial extent. Hence, the higher the level, the higher the number of nodes involved, and the higher the disruption (theoretically).

**My problem relates to the failure threshold qc** (the minimum size of the disrupting event that is capable of producing 0 active nodes after the attack).

**My question**: does the **failure threshold qc** depend only on how nodes are connected (i.e., is qc an intrinsic feature of the network)? Or is it a function of how vast the spatial attack is? Or does it depend on both?

Thank you very much to all of you.

Francesco
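Numerically, qc-style questions are usually probed by simulation: sweep the spatial extent of the attack and track the giant (largest connected) component. A minimal sketch on a grid graph, a stand-in topology rather than the interdependent network of the question, removing all nodes inside a disk:

```python
from collections import Counter

def giant_component_after_disk_attack(L, cx, cy, r):
    """Remove every node of an L x L grid lying inside a disk of radius r
    centred at (cx, cy) (a 'spatial' attack), then return the size of the
    largest surviving connected component (4-neighbour connectivity)."""
    alive = {(x, y) for x in range(L) for y in range(L)
             if (x - cx) ** 2 + (y - cy) ** 2 > r ** 2}
    parent = {v: v for v in alive}

    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for (x, y) in alive:  # union each survivor with right/up neighbours
        for nb in ((x + 1, y), (x, y + 1)):
            if nb in alive:
                ra, rb = find((x, y)), find(nb)
                if ra != rb:
                    parent[ra] = rb
    sizes = Counter(find(v) for v in alive)
    return max(sizes.values()) if sizes else 0
```

Sweeping r and plotting the giant component against the removed fraction locates a percolation-style threshold; comparing such curves across topologies (fixed attack) versus across attack geometries (fixed topology) is one practical way to separate the intrinsic and attack-dependent contributions to qc.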

The 2013 complexity conference hosted by the Nanyang Institute of Technology contained the following excerpts:

"The 21st century," physicist Stephen Hawking has said, "will be the century of complexity." Likewise, the physicist Heinz Pagels has said that "the nations and people who master the new sciences of complexity will become the economic, cultural, and political superpowers of the 21st century."

General systems theory was thought to be the "skeleton of science" (Kenneth E. Boulding).

Are "multidisciplinary" and "interdisciplinary" subsumed under "transdisciplinarity"?

Does "transdisciplinarity" imply "universality"? Is it very different from the notion of "consilience" (coined by Edward O. Wilson)?

From a design and operational systems perspective: is complexity always a "bad thing"? Should simplicity always be preferred over complexity? To what extent does the complexity vs simplicity debate influence systems design? Support your answer/comments with examples.

I am looking for a series of datasets (raw data) of brain (EEG or MEG) or cardiac (HRV) activity of patients under homeopathic treatment. We want to research the phase transitions of the human holistic complex system with modern tools of nonlinear analysis and complexity theory.

Thank you for your attention.

There are plenty of other reference materials posted in my questions and publications on ResearchGate. I think it is not enough for someone to claim that the sequences I've found are pseudo-random and to offer that as a satisfying answer to the question posed here.

If indeed complexity of a sequence is reflected within the algorithm that generates the sequence, then we should not find that high-entropy sequences are easily described by algorithmic means.

I have a very well defined counter example.

Indeed, so strong is the example that the sound challenge is for another researcher to show how one of my maximally disordered sequences can be altered such that the corresponding change to measured entropy is a positive, non-zero value.

Recall that with arbitrarily large hardware registers and an arbitrarily large memory, our algorithm will generate arbitrarily large, maximally disordered digit sequences; no algorithm changes are required. It is a truly simple algorithm that generates sequences that always yield a measure of maximal entropy.

In what way are these results at odds, or in keeping, with Kolmogorov?
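The tension with Kolmogorov can be made concrete with a classic example (mine, not the questioner's sequence): concatenating the decimal integers yields a sequence whose measured Shannon entropy is near the maximum of log2(10) bits per digit, yet the generating program is trivially short. Measured entropy of the output says little about the algorithmic (Kolmogorov) complexity of the generator, which is one way to read the question's counterexample claim:

```python
import math
from collections import Counter

# Champernowne-style sequence: concatenate the decimal digits of 1, 2, 3, ...
digits = "".join(str(i) for i in range(1, 10000))

# Empirical Shannon entropy of the digit distribution, in bits per digit.
counts = Counter(digits)
total = len(digits)
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
# A trivially short program, yet the entropy is close to log2(10) ~ 3.32 bits.
```

In Kolmogorov's terms this is no contradiction: Shannon entropy here measures single-digit frequencies, a weak statistical test, while Kolmogorov complexity measures description length; a low-complexity sequence can pass weak statistical tests while failing stronger ones (e.g. compressibility by a model that predicts the counting pattern).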

Hello dear fellow researchers.

This might sound like a stupid question :-)

Suppose that for a given sequence of $n$ numbers and a zero-nonzero pattern, we know there exists a (real) matrix admitting the sequence as its eigenvalues and obeying the pattern. My question is: what is the complexity of building such a matrix (i.e. the second part of the IEP)? I hope the answer is NP-hard!

Any help will be appreciated,

Many thanks,

Bahamn

I am looking for datasets of complex networks with ground-truth communities. SNAP has a few such datasets, but they are very large. I am looking for datasets of moderate size, i.e., a few thousand nodes.

We are looking for Chinese, Indian and Russian mathematicians with a passion for fractals and the M set, who would be interested in being interviewed for the film we are making called 'Beyond the Colours of Infinity'. We have attached our pitch doc herewith.

The original film 'The Colours of Infinity' (1995) can be viewed on Vimeo via this link:

You may need to enter the password:

**fractalfun** to view it.

There are many nested (partially or fully) communities in a complex network. Taking a country as an example: cities are communities within the country, neighborhoods are communities within the cities, and families are communities within the neighborhoods. Consequently, there are far more small communities than large ones; that is, community sizes follow power laws or heavy-tailed distributions, which we have verified empirically.

Jiang B. and Ma D. (2015), Defining least community as a homogeneous group in complex networks, Physica A, 428, 154-160

Jiang B., Duan Y., Lu F., Yang T. and Zhao J. (2014), Topological structure of urban street networks from the perspective of degree correlations, Environment and Planning B: Planning and Design, 41(5), 813-828.

However, communities detected by our algorithm hardly match those found by previous community detection methods. My question is: should communities be nested?

As we strive to explain real-world complex systems, more parameters, variables and processes are needed in our models, and thus we become less able to manage and understand the system. To overcome this "vicious circle", some authors suggest starting by defining and mapping (measuring) the complexity of the system, so as to determine a manageable degree of complexity. This involves answering the question "how much complexity is enough?". But is this the right approach to studying complexity? And if so, how may we practically accomplish these tasks?

I am investigating the use of a mathematical Category Theory to explore a deductive model of the emergence and evolution of cooperative structures in human organizing. I am aware of work by Ehresmann & Vanbremeersch. Is there other related work or some alternative formulations?

It is often said in the field of complex systems that such systems achieve self-organisation through simple order-generating rules. Under what conditions is such self-organisation achieved?
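
A canonical illustration of a simple, local, order-generating rule is an elementary cellular automaton. The sketch below (plain Python, Wolfram's Rule 110, which is known to be Turing-complete) shows how each cell's next state depends only on its three-cell neighborhood, yet global structure emerges:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton
    (periodic boundary). The neighborhood (left, self, right) is read
    as a 3-bit number v, and the next state is bit v of `rule`."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single seed cell; structure grows from purely local interactions.
row = [0] * 15 + [1] + [0] * 15
for _ in range(5):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

Under what conditions such local rules yield global self-organisation (rather than uniformity or noise) is exactly the open question the post raises.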

If we ever have to move something, could we carry it, and amplify the applied force, using laser (radiation-pressure) force?

Take a look at recent work by our group, led by Dr. Kaushik Sinha, on quantifying the level of structural complexity of man-made complex systems. The research suggests the existence of a "P-Point" at which the topology changes from being tree-like or hierarchical to more distributed or networked. Any thoughts would be welcome. Prof. Olivier de Weck

Attached: paper from ASME 2013.

Complex systems consist of multiple interacting components. Two components are not enough to make a system complex. But would three or four components be enough for a system to become complex? What would be an example of such a system?

In the literature, I found that the relation between refractive index and temperature is said to be quasi-linear with a negative slope, and yet the same values are fitted linearly with r² = 0.9998. My concern is that, on the one hand, the values are said to be related only quasi-linearly, while on the other hand a linear fit is applied to them. How can both be true?
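
For what it's worth, "quasi-linear" and an excellent linear fit are compatible: a small curvature term barely degrades r². A toy illustration with hypothetical refractive-index values (the numbers below are made up for demonstration, not taken from the literature in question):

```python
def linfit(xs, ys):
    """Ordinary least-squares line y = a*x + b, plus r^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical data: dominant linear term plus a tiny quadratic term,
# so the relation is "quasi-linear" yet fits a line almost perfectly.
temps = [20.0, 25.0, 30.0, 35.0, 40.0]
n_idx = [1.3330 - 1.0e-4 * t - 1.0e-7 * t * t for t in temps]
a, b, r2 = linfit(temps, n_idx)
print(a < 0, r2 > 0.999)  # negative slope, r^2 still ~1
```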

In the past, it was believed that the nature of languages, like that of any other system, is constant and static; however, after several years of study, and with reference to Chaos/Complexity Theory (C/CT), researchers came to the conclusion that languages are complex, nonlinear and unpredictable at their underlying levels. I have read Larsen-Freeman's (1997) article about C/CT, and it is believed that there are issues in SLA that can be illuminated by the chaos/complexity theory. Any ideas and elaborations about it?

I have been using ideas from complexity theory (basins of attraction, butterfly effect, emergence, non-predictability, etc.) as metaphors to help clarify issues of grief and mourning. I would like to move away from only using these ideas as metaphors and begin to use them to model grief and mourning, but I don't have a clue as to how to go about doing this.

I could list variables that we suspect impact how people grieve (expected or unexpected loss, quality of the relationship, past losses, definitions of the relationship, etc.) but beyond that I haven't a clue.

Any suggestions -- including those saying why this is a fool's errand -- would be appreciated.

mgs 7/6/2014: I changed the title because when we look at all the issues a person who has experienced a serious loss (say a woman whose husband has died) must face, those issues include not just things we think of as grief, but more practical issues such as how to live without the deceased (cooking for one, paying taxes, fixing the car....) So since we are attempting to look at all the factors impacting a person's reaction to a loss, we need to include those "restoration-oriented" issues as well.

There is a controversy concerning whether an electronic circuit modeling a dynamical system constitutes a physical experiment or not. Some people argue that it does not, since the electronic circuit is merely an analog computer with some small fluctuations, while the numerical simulations are carried out on a digital computer. From this point of view, the results obtained with the electronic circuit are no different from the ones obtained with the computer simulation. Do you share this point of view? Could you justify, on the contrary, that the electronic circuit indeed constitutes a physical experiment?

In the literature, people use the 2-partition and 3-partition problems to prove the complexity of scheduling problems. Can anyone please help me distinguish between these two problems in the context of complexity proofs?
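
The key distinction: 2-partition is NP-complete only in the weak sense, because it admits a pseudo-polynomial dynamic program, whereas 3-partition is strongly NP-complete (hard even when numbers are written in unary). That is why 3-partition is the preferred source problem when a scheduling reduction must produce polynomially bounded numbers. A sketch of the pseudo-polynomial DP for 2-partition:

```python
def two_partition(nums):
    """Pseudo-polynomial DP: can `nums` be split into two equal-sum halves?
    Runs in O(n * sum) time -- polynomial only when the numbers are small,
    which is why 2-partition is only *weakly* NP-complete.
    3-partition admits no such DP unless P = NP."""
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                      # subset sums reachable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(two_partition([3, 1, 1, 2, 2, 1]))  # True: {3, 2} vs {1, 1, 2, 1}
```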

What is the difference between NP, NP-hard and NP-complete, and what is the relationship between P and NP in the complexity of computational problems?

Thank you.
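
One concrete way to see the NP definition: a problem is in NP when a claimed solution (a certificate) can be *verified* in polynomial time, even if *finding* one may be hard. A minimal verifier sketch for CNF-SAT, the canonical NP-complete problem:

```python
def verify_sat(clauses, assignment):
    """Polynomial-time certificate check for CNF-SAT.
    `clauses`: list of clauses; each clause is a list of ints,
    where +i means variable i and -i means its negation.
    `assignment`: dict var -> bool (the certificate)."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(verify_sat(cnf, {1: True, 2: False, 3: True}))   # True
print(verify_sat(cnf, {1: False, 2: False, 3: False})) # False
```

NP-hard means "at least as hard as everything in NP" (via polynomial-time reductions, membership in NP not required); NP-complete means both NP-hard and in NP.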

What are the general characteristics of a problem in PSPACE?
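
The textbook characterization: a PSPACE problem is decidable using memory polynomial in the input size, with no bound on running time. TQBF (deciding the truth of fully quantified Boolean formulas) is the canonical PSPACE-complete example, and a direct recursive evaluator makes the "polynomial space, possibly exponential time" trade-off visible:

```python
def eval_qbf(quantifiers, clauses, assignment=None):
    """Evaluate a fully quantified Boolean formula over a CNF matrix.
    `quantifiers`: list of ('A'|'E', var) pairs, outermost first;
    `clauses`: CNF as lists of ints (+i = var i, -i = its negation).
    Recursion depth equals the number of variables, so the space used
    is polynomial even though the time can be exponential."""
    assignment = assignment or {}
    if not quantifiers:
        return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
    (q, v), rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, v: b}) for b in (False, True))
    return all(branches) if q == 'A' else any(branches)

# Forall x1 Exists x2 : (x1 or x2) and (not x1 or not x2) -> True (x2 = not x1)
print(eval_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))  # True
```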

Edgar Morin is a sociologist and philosopher, and Emeritus Director of Research at the Centre National de la Recherche Scientifique (CNRS) in France. He has concentrated on developing a method that can meet the challenge of the complexity of our world and reform thinking, preconditions for confronting all fundamental, global problems.

Very recently he published a new article on "Complex Thinking for a Complex World – About Reductionism, Disjunction and Systemism". I would really like to share his latest publication with you all, as many of the questions here on RG can find a useful and comforting answer in his words and vision. I attach the paper for your reference. I am resubmitting this question since apparently the first submission could be seen by no one but me.

How can we write the recurrence for the Euclidean algorithm to calculate the GCD of two numbers?
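
One standard answer, offered as a sketch: the recurrence is gcd(a, b) = gcd(b, a mod b) with base case gcd(a, 0) = a, and for running time T(a, b) = T(b, a mod b) + O(1), which resolves to O(log min(a, b)) steps (Lamé's theorem; the worst case occurs at consecutive Fibonacci numbers). A direct transcription:

```python
def gcd(a, b):
    """Euclid's recurrence: gcd(a, b) = gcd(b, a mod b); gcd(a, 0) = a.
    The number of recursive calls is O(log min(a, b)); the worst case
    occurs for consecutive Fibonacci numbers (Lame's theorem)."""
    return a if b == 0 else gcd(b, a % b)

print(gcd(252, 198))  # 18
```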

Does complexity theory equal chaos theory?

The concept of complexity increasing over uninterrupted evolutionary time is an area of dispute in the literature. The evidence for it seems to be mixed. Since theory can help reduce uncertainty, I was wondering about the range of theoretical reasons that have been proposed to explain why complexity appears to increase over evolutionary time.

Why do conservative chaotic flows receive less attention than strange attractors?

Are there measures for complexity of organizational structures, processes, networks or supply chains? And does it even make sense?