Science topic

Complexity - Science topic

Explore the latest questions and answers in Complexity, and find Complexity experts.
Questions related to Complexity
  • asked a question related to Complexity
Question
5 answers
I have reviewed many articles about neurodegeneration of the human brain. Many researchers point out that complexity is a key feature for detecting alterations in the brain. However, the characteristic changes observed in the electrophysiological signals may depend on different factors shared by the subjects within the same group (patient or control). Researchers apply statistical tests to show that the obtained results are significant, but I'm not sure whether the results are really related to the proposed research hypothesis (complexity changes with disease). We do not know whether, say, the subjects in one group drink coffee regularly while members of the other group do not. There are many possibilities like this among the people who participated in these experiments.
Now, here is my question:
What methodology can be utilized to ensure our hypothesis is REALLY true or not true at the end of the study? Do you have any suggestions to overcome this specific problem?
Thanks in advance.
Relevant answer
Answer
You have touched on a very tough subject that runs through the whole of biology and medicine: hypothesis creation and testing. Simply put, a combination of complex-systems measures and AI together with machine learning is a very useful toolset that can help address problems like yours.
If you are really interested in complexity measures that employ entropy, their application to ECG (EEG is not much different), and the subsequent creation and testing of hypotheses using AI and ML, you can read the methodical paper on prediction of heart arrhythmia. The paper explains everything from first principles and has a rich list of related literature.
After reading it, if you find it useful, we could continue our discussion, as the subject is really hard to grasp without thinking it all through.
Just one remark: any habits, comorbidities, and other health-related features can easily be incorporated into AI & ML methods. I recommend reading the papers on gut microbiota health/dysbiosis that I shared in the project "Complexity in Medicine…" That research links gut microbiota dysbiosis to neurodegenerative diseases!
  • asked a question related to Complexity
Question
12 answers
Recently, I have seen reviewers of many papers asking authors to provide the computational complexity of the proposed algorithms. I was wondering what the formal way to do that would be, especially in short papers where pages are limited. Please share your expertise on presenting the computational complexity of algorithms in short papers.
Thanks in advance.
Relevant answer
Answer
You have to explain both the time and space complexity of your algorithm.
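In a short paper, a compact convention is to state both bounds in a line or two next to the algorithm, annotating only the dominant steps. A minimal sketch in Python (the function and the example numbers are purely illustrative, not from any particular paper):

```python
def pairwise_max_gap(values):
    """Toy algorithm used to show how a complexity statement is derived.

    Time:  O(n log n) -- dominated by the sort.
    Space: O(n)       -- sorted() allocates a copy of the input.
    """
    s = sorted(values)  # O(n log n) time, O(n) extra space
    # single pass over adjacent pairs: O(n) time, O(1) extra space
    return max((b - a for a, b in zip(s, s[1:])), default=0)

print(pairwise_max_gap([7, 2, 11, 5]))  # -> 4
```

The per-line annotations justify the stated bounds, and the two-line summary is all a page-limited paper usually needs.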
  • asked a question related to Complexity
Question
7 answers
I have two types of resources, A and B. The resources are to be distributed (discretely) over k nodes.
The number of resources of type A is a.
The number of resources of type B is b.
Resources B should be completely distributed (the sum of the resources taken by the nodes should be b).
Resources A may not be completely distributed over the nodes; in fact, we want to reduce the usage of resources A.
Giving resources (A or B) to a node enhances the quality of that node, where the relation is non-linear.
All nodes should achieve a minimum quality.
What type of problem is this, and how can I find the optimal value?
Relevant answer
Answer
Genetic algorithms never find a guaranteed optimum. I hope you will use better tools than those. You should know that this forum has basically been hijacked by meta-heuristics enthusiasts; an inflated array of posts at RG are based on the fact that RG "scholars" do not know that there are better tools. So beware!
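For intuition, the problem above (minimise the A used, distribute all of B, every node reaching a minimum quality under a non-linear quality function) can be solved exactly by exhaustive search on tiny instances. A sketch assuming a hypothetical quality function q(a_i, b_i) = sqrt(a_i + 2·b_i); the real curve would replace it:

```python
from itertools import product
from math import sqrt, inf

def quality(a_i, b_i):
    # hypothetical non-linear quality of one node; substitute the real relation
    return sqrt(a_i + 2 * b_i)

def best_allocation(k, a_total, b_total, q_min):
    """Exhaustive search: minimise total A used, subject to all of B being
    distributed and every node reaching quality q_min. Only viable for
    tiny k/a/b -- the real problem is a mixed-integer non-linear program."""
    best, best_a = None, inf
    for a_alloc in product(range(a_total + 1), repeat=k):
        if sum(a_alloc) > a_total:
            continue
        for b_alloc in product(range(b_total + 1), repeat=k):
            if sum(b_alloc) != b_total:  # B must be completely distributed
                continue
            ok = all(quality(a, b) >= q_min for a, b in zip(a_alloc, b_alloc))
            if ok and sum(a_alloc) < best_a:
                best, best_a = (a_alloc, b_alloc), sum(a_alloc)
    return best, best_a

alloc, used_a = best_allocation(k=2, a_total=3, b_total=2, q_min=1.5)
```

For realistic sizes this blows up combinatorially; an exact MINLP solver (or a linearisation of the quality curve plus a MILP solver, in line with the answer above) is the appropriate tool.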
  • asked a question related to Complexity
Question
5 answers
Problem description:
In socio-technical systems, the idea of a technological initiative can emerge and different groups can organize around it. Each group organizes itself little by little, spontaneously, based on common interests and shared values (including ethics), around an idea of progress and potential benefits that is sometimes vague.
Sometimes those groups start to interact with each other, and at a certain point of development a macro context starts to be needed in order to meet the necessities of society.
Lately, despite the potential social benefits of the new technological initiative, the political body does not create the institutional conditions for the development of new regulation and public policy (this is what I call the macrosystem), so the socio-technological initiative does not thrive.
Some of the hypotheses about why this is happening are:
1) Politicians do not care about or take interest in the possibilities of the new technology and initiative.
2) Politicians see the new technology as a threat to their own power.
3) Politicians want to take control of the different technical groups' resources and assets, but not their values and real purpose, because they want more power for themselves.
4)...
In consequence, the work done by the different technical groups will never be as organized and coordinated as is required by a common purpose that meets societal necessities.
What I want to do is describe the problem in terms of the interaction of the technological working groups (the system) and the political and policy level (the macrosystem).
Do you know of a systemic theoretical framework that can help me analyse and describe this problem and its dynamics?
Relevant answer
Answer
It is recommended to follow one quite useful part of the complex-systems modelling methodology: agent-based modelling. There can be different types of agents, some in a few copies (like politicians), others in larger numbers (like societal cliques), with engagement rules defined among the different types.
Dirk Helbing definitely has research on societal interactions. The seminal book on agent-based modelling is Illachinski's Artificial War, which explains how to program agents to achieve the desired behaviour.
Entropy measures can be used to classify the system's mode of behaviour. A system undergoing a phase transition expresses it in changes of entropy even when, globally, there is nothing substantial to observe!
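As a flavour of the suggestion above, here is a deliberately tiny agent-based sketch: a voter-model-style engagement rule between agents, with the Shannon entropy of the state distribution tracked as a crude order indicator. All parameters are illustrative, not calibrated to any real social system.

```python
import random
from math import log2

def entropy(states):
    """Shannon entropy (bits) of a binary state distribution."""
    p1 = sum(states) / len(states)
    if p1 in (0.0, 1.0):
        return 0.0
    return -(p1 * log2(p1) + (1 - p1) * log2(1 - p1))

def voter_model(n_agents=50, steps=2000, seed=1):
    """Each step, one agent adopts the state of a randomly chosen agent
    (the 'engagement rule'); entropy tracks the drift toward consensus."""
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n_agents)]
    history = [entropy(states)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        states[i] = states[j]
        history.append(entropy(states))
    return history

h = voter_model()
```

A real study would use heterogeneous agent types and empirically grounded rules, but the entropy trace already illustrates how an aggregate measure can summarise the system's mode of behaviour.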
  • asked a question related to Complexity
Question
11 answers
If multiple deep learning (DL) algorithms are merged to create models, then the system becomes complex.
How can the complexity of such a system be calculated?
Is there any formal way or mathematical proof to analyze this kind of complexity in DL models?
Thanks in advance.
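One concrete, formal handle on the cost of a merged model is to count parameters and multiply-accumulate operations layer by layer; the totals for the merged system are (roughly) the sums over its components. A sketch for plain fully connected layers (the layer sizes are arbitrary examples):

```python
def dense_stack_cost(layer_sizes):
    """Parameters and multiply-accumulates (MACs) per forward pass for a
    stack of fully connected layers, e.g. [784, 128, 10]. Merging several
    models adds these costs; asymptotically the forward pass stays
    O(sum of n_in * n_out)."""
    pairs = list(zip(layer_sizes, layer_sizes[1:]))
    params = sum(n_in * n_out + n_out for n_in, n_out in pairs)  # weights + biases
    macs = sum(n_in * n_out for n_in, n_out in pairs)
    return params, macs

p, m = dense_stack_cost([784, 128, 10])
print(p, m)  # -> 101770 101632
```

Convolutional and attention layers have analogous closed-form counts, so the same bookkeeping extends to any merged architecture.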
  • asked a question related to Complexity
Question
4 answers
We are developing a test for ad-hoc implicatures and scalar implicatures (SI), and are showing 3 images (of similar nature) to the participants: an image, the image with 1 item, and the image with 2 items.
E.g., a plate with pasta, a plate with pasta and sauce, and a plate with pasta, sauce and meatballs.
A question for an ad-hoc implicature is: My pasta has meatballs; which is my pasta?
A question for an SI is: My pasta has sauce or meatballs; which is my pasta? (The pasta with sauce is the target item, since we are testing pragmatic implicatures, where 'or' means 'not both'.)
The item that causes many difficulties in making up questions is the image without any items, i.e. the plate with plain pasta. How do we phrase the question so that it elicits this image as the target response, without using overly complex syntax?
Negation ("My plate has no sauce or meatballs", "My plate has only pasta, no sauce and no meatballs") seems like a complex structure to introduce as a counterbalance to the other types of items.
Has anyone tested something similar, without negation? We would be grateful for any kind of tips and hints.
Relevant answer
Answer
Could you just say: my plate has plain pasta?
  • asked a question related to Complexity
Question
29 answers
Antigens are processed by specialized macrophages to produce complex protein-RNA complexes that eventually produce iRNA. When this iRNA is introduced to primary B cells that have never seen the antigen, they produce antibodies specific to that antigen. The macrophage-produced RNA is incorporated into the genome of these B cells by reverse transcriptase; they then become "memory cells" capable of a secondary response when confronted with an antigen they had never come into contact with before. Reverse translation in the macrophage is the best explanation for the production of such specific iRNA.
Relevant answer
Answer
Stanley Laham Your tritiated (radioactive) nucleic acid may provide the first clue. There are numerous hurdles to challenging the Central Dogma, as it appears to be rock solid. Cracks, with caveats, are seen in the published literature, with one publication discussing the potential using prions as an example.
  • asked a question related to Complexity
Question
10 answers
We all know that poor hygiene within dense populations is a breeding ground for diseases. On the other hand, we know that a medium level of daily exposure to infectious agents strengthens immunity and keeps it alert.
The big question is: what happens when we start to use sanitizers en masse within whole populations?
Hospitals have special procedures aimed at rotating chemically-based sanitizers to avoid the rise and spread of drug-resistant bacteria and viruses.
How is this problem addressed within whole populations? This seems important to know right now, during the COVID-19 outbreak.
Relevant answer
Answer
Dear Jiri,
you posted an intriguing question. Although preventing drug resistance is reasonable and scientifically sound, this very moment is delicate and we have to fully enforce hand sanitizing. The whole world population is now in a truly dangerous time; the scientific community has to promote public hygiene as hard as possible. In my opinion, sanitizer rotation is a very clever way to manage this emergency, and it should be encouraged.
Good luck with your work,
Dave
  • asked a question related to Complexity
Question
4 answers
How can one estimate the Big-O complexity of filter-based feature selection methods that use mutual information (such as mRMR and JMIM)?
Relevant answer
Answer
Aparna Sathya Murthy Thanks so much for your helpful answer. But counting the number of loops is not as basic as in this paper (attached snapshot).
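For a rough bound, a straightforward greedy mRMR implementation can be analysed by counting its mutual-information evaluations. A sketch (this mirrors a generic implementation, not necessarily the exact accounting in the attached paper):

```python
def mrmr_mi_evaluations(m, k):
    """MI evaluations in a plain greedy mRMR selecting k of m features:
      - relevance pass: m evaluations of MI(feature; class)
      - redundancy: at step t, (m - t) candidates x t already-selected
    Total is O(m + k^2 * m); with each MI costing O(n) over n samples,
    the back-of-envelope bound is O(n * m * k^2)."""
    total = m
    for t in range(1, k):
        total += (m - t) * t
    return total

print(mrmr_mi_evaluations(m=100, k=5))  # -> 1070
```

Caching the pairwise redundancy terms (a common optimisation) brings the redundancy part down to O(k·m) MI evaluations.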
  • asked a question related to Complexity
Question
9 answers
I know how to dock a chemical compound with a protein, but I need a server for docking metal complexes.
Relevant answer
Answer
Now I am able to do that with two programs: MOE and Molegro.
  • asked a question related to Complexity
Question
6 answers
When I compute the time complexity of ciphertext-policy attribute-based encryption (CP-ABE), I find it to be O(1) by tracing each step in the code, most of which are assignment operations. Is it possible that the time complexity of CP-ABE is O(1), or do I have a problem? The code I used is the following, where ITERS = 1.
public static List encrypt(String policy, int secLevel, String type, byte[] data, int ITERS) {
    double results[] = new double[ITERS];
    DETABECipher cipher = new DETABECipher();
    long startTime, endTime;
    List list = null;
    for (int i = 0; i < ITERS; i++) {
        startTime = System.nanoTime();
        list = cipher.encrypt(data, secLevel, type, policy);
        endTime = System.nanoTime();
        results[i] = (double)(endTime - startTime) / 1000000000.0;
    }
    return list;
}

public List encrypt(byte abyte0[], int i, String s, String s1) {
    AccessTree accesstree = new AccessTree(s1);
    if (!accesstree.isValid()) {
        System.exit(0);
    }
    PublicKey publickey = new PublicKey(i, s);
    if (publickey == null) {
        System.exit(0);
    }
    AESCipher.genSymmetricKey(i);
    timing[0] = AESCipher.timing[0];
    if (AESCipher.key == null) {
        System.exit(0);
    }
    byte abyte1[] = AESCipher.encrypt(abyte0);
    ABECiphertext abeciphertext = ABECipher.encrypt(publickey, AESCipher.key, accesstree);
    timing[1] = AESCipher.timing[1];
    timing[2] = ABECipher.timing[3] + ABECipher.timing[4] + ABECipher.timing[5];
    long l = System.nanoTime();
    LinkedList linkedlist = new LinkedList();
    linkedlist.add(abyte1);
    linkedlist.add(AESCipher.iv);
    linkedlist.add(abeciphertext.toBytes());
    linkedlist.add(new Integer(i));
    linkedlist.add(s);
    long l1 = System.nanoTime();
    timing[3] = (double)(l1 - l) / 1000000000D;
    return linkedlist;
}

public static byte[] encrypt(byte[] paramArrayOfByte) {
    if (key == null) {
        return null;
    }
    byte[] arrayOfByte = null;
    try {
        long l1 = System.nanoTime();
        cipher.init(1, skey);
        arrayOfByte = cipher.doFinal(paramArrayOfByte);
        long l2 = System.nanoTime();
        timing[1] = ((l2 - l1) / 1.0E9D);
        iv = cipher.getIV();
    } catch (Exception localException) {
        System.out.println("AES MODULE: EXCEPTION");
        localException.printStackTrace();
        System.out.println("---------------------------");
    }
    return arrayOfByte;
}

public static ABECiphertext encrypt(PublicKey paramPublicKey, byte[] paramArrayOfByte, AccessTree paramAccessTree) {
    Pairing localPairing = paramPublicKey.e;
    Element localElement1 = localPairing.getGT().newElement();
    long l1 = System.nanoTime();
    localElement1.setFromBytes(paramArrayOfByte);
    long l2 = System.nanoTime();
    timing[3] = ((l2 - l1) / 1.0E9D);
    l1 = System.nanoTime();
    Element localElement2 = localPairing.getZr().newElement().setToRandom();
    Element localElement3 = localPairing.getGT().newElement();
    localElement3 = paramPublicKey.g_hat_alpha.duplicate();
    localElement3.powZn(localElement2);
    localElement3.mul(localElement1);
    Element localElement4 = localPairing.getG1().newElement();
    localElement4 = paramPublicKey.h.duplicate();
    localElement4.powZn(localElement2);
    l2 = System.nanoTime();
    timing[4] = ((l2 - l1) / 1.0E9D);
    ABECiphertext localABECiphertext = new ABECiphertext(localElement4, localElement3, paramAccessTree);
    ShamirDistributionThreaded localShamirDistributionThreaded = new ShamirDistributionThreaded();
    localShamirDistributionThreaded.execute(paramAccessTree, localElement2, localABECiphertext, paramPublicKey);
    timing[5] = ShamirDistributionThreaded.timing;
    return localABECiphertext;
}

public ABECiphertext(Element element, Element element1, AccessTree accesstree) {
    c = element;
    cp = element1;
    cipherStructure = new HashMap();
    tree = accesstree;
}

public void execute(AccessTree accesstree, Element element, ABECiphertext abeciphertext, PublicKey publickey) {
    pairing = publickey.e;
    ct = abeciphertext;
    PK = publickey;
    countDownLatch = new CountDownLatch(accesstree.numAtributes);
    timing = 0.0D;
    double d = System.nanoTime();
    Thread thread = new Thread(new Distribute(abeciphertext, accesstree.root, element));
    thread.start();
    try {
        countDownLatch.await();
        long l = System.nanoTime();
        timing = ((double)l - d) / 1000000000D;
        synchronized (mutex) {
        }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
Relevant answer
Answer
That's a hardware issue and nothing else. Best, T.T.
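Stepping back from hardware: tracing the code at a single fixed input (ITERS = 1, one policy) can only ever look constant, because time complexity describes how cost grows with input size. For CP-ABE, the natural size parameter is the number of attributes in the access policy, and encryption cost generally grows with it. A language-agnostic sketch of the empirical approach, shown here in Python with a stand-in O(n) workload in place of the encryption call:

```python
import time

def measure_growth(f, sizes):
    """Time f at several input sizes; complexity lives in how the timings
    grow, which one measurement at one size cannot reveal."""
    results = []
    for n in sizes:
        payload = bytes(n)  # stand-in input; replace with real plaintexts/policies
        t0 = time.perf_counter()
        f(payload)
        results.append((n, time.perf_counter() - t0))
    return results

# stand-in O(n) operation in place of cipher.encrypt(...)
timings = measure_growth(lambda d: d.count(0), [10_000, 100_000, 1_000_000])
```

Applied to the Java code above, the equivalent fix is to run the benchmark over policies with 1, 2, 4, 8, ... attributes (and larger ITERS for stable averages) and inspect how the timings scale.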
  • asked a question related to Complexity
Question
1 answer
Creative destruction rages on: the bureaucratic, the inflexible, and the slow are disappearing in record numbers. The average lifespan of a Fortune 500 company is less than 20 years, down from 60 years in the 1950s, and is forecast to shrink to 12 years by 2027. If organizational leadership is—at least in part—about identifying what needs to be done, creating vision, setting direction, and guiding people toward that, one should query whether the leadership styles that developed yesteryear (e.g., authentic, authoritarian, democratic, laissez-faire, paternalistic, servant, transactional, transformational, etc.) still contribute to the long-term success of collective effort in more open systems. Given the volatility, uncertainty, complexity, and ambiguity of the environment, understanding how to lead organizations in the Age of Complexity is urgent and important to societies, economies, and governments.
Relevant answer
Answer
Just be a dictatorship.
  • asked a question related to Complexity
Question
3 answers
Volatility, uncertainty, complexity, and ambiguity call for different ways of perceiving the world, different approaches to sense and decision making, and different modes and combinations of leadership.
  • Administrative Leadership. Administrative leadership is the managerial approach followed by individuals and groups in formal roles as they plan and coordinate activities in standardized business processes to accomplish organizationally-prescribed outcomes efficiently and effectively.
  • Adaptive Leadership. Adaptive leadership is the informal process that emerges as organizations generate and advance ideas to solve problems and create opportunity; unlike administrative leadership, it is not an act of authority and takes place in an informal, emergent dynamic among interacting agents.
  • Enabling Leadership. Enabling leadership is the sum of actions to facilitate the flow of creativity (e.g., adaptability, innovation, and learning) from adaptive structures into administrative structures; like adaptive leadership, it can take place at all levels of an organization but its nature will vary by hierarchical level and position. (Uhl-­Bien, Marion, & McKelvey, 2007)
Relevant answer
Answer
Seems that corporate culture must also be considered in order to answer the question. Culture can have a synergistic or countervailing effect on the effectiveness of a given style of leadership.
One way culture could have an effect is in how change or threat is perceived by the organization. Is it seen as an opportunity? Does it trigger post-traumatic flashbacks of past events? Does it open or close the organizational dialog and operations?
I suppose I've added more questions rather than answered yours. But thank you for the provocative question.
Unamuno reportedly said: The truth is not merely the lawful, but rather that which provokes the mind and causes growth.
  • asked a question related to Complexity
Question
3 answers
If we want to compare the performance of two classifiers based on time measurements, what is the best way?
Is there any existing methodology ?
Relevant answer
  • asked a question related to Complexity
Question
7 answers
"Mistake 2: Not building flexible career capital that will be useful in the future" (80000hours.org career guide). My question: What are the reasons behind this mistake?
--
Background
I'm trying to understand the reasons that lead people to this mistake. Maybe after we understand the reasons, we will realize that they're not mistakes after all. Or, if we still believe they're mistakes, we'll be much more able to solve them.
Here are some general reasons:
1. Trade-offs: Advice often neglects to address the trade-off that comes with this advice: For example, "be flexible" ignores the disadvantages of being flexible and the advantages of being "inflexible" (keeping your eye on the goal, avoiding distractions, persistence etc…) and vice versa with persistence advice like "never give up".
2. Unclear evidence or debatable positions
Often contrary or seemingly contrary positions both have evidence.
Do we underestimate or over-estimate the differences between us and others? The "False Consensus Effect" suggests that we under-estimate while the "Fundamental Attribution Error" can imply that we over-estimate the role of personal differences.
--
So even though the position behind the advice has evidence, it can also be true that the position contrary to the advice has evidence too.
3. Lack of knowledge or effort.
4. More pressing issues
The question then becomes: Does the advice that comes with "Not building flexible career capital that will be useful in the future" suffer from general reasons 1 and 2?
Here are the sub-mistakes that cause people to fall into the main mistake (based on or influenced by the 80k section, though not exactly as they put it in all points):
1* Short-term thinking.
2* Not giving career choices enough thought (e.g. English PhD sounds nice so I’m just going to go with it).
3* Underestimating soft skills: not investing in transferable skills that can be used in any job, like
A- Deep Work by Cal Newport. The example given in the 80k career guide is writing down your daily priorities. I would prefer something like "how to avoid distraction".
B-Learning how to learn
C- Rationality
4* Lack of awareness about automation threat.
5* Inaccurate predictions about one's future interests/opportunities in the chosen career, e.g. the "End of History Illusion".
So, for example, for 5*, it could be that general reason 2 (unclear evidence) is implicated: it could be (and I don't know that it is) that, in contrast to the "End of History Illusion", there is a group of personality theorists who claim that we underestimate how stable our personality is. Or, for 3*, general reason 1 (trade-offs) is implicated: for example (and again, I don't know), it could be that the more you focus on developing general skills like "learning how to learn", the less competitive you become in non-transferable technical skills, because you have less time to focus on them.
Relevant answer
Answer
Christopher Steinman
Very good points! I want to add them to the blog post as a comment. Are you ok with that? I'll quote you, of course.
  • asked a question related to Complexity
Question
3 answers
It would be nice to get reading suggestions on why modelling societies and their dynamic developments may fail. What are the challenges and pitfalls when one attempts to create models that aim to forecast future developments? Economic literature, system dynamics approaches, and predictive social science may address these modelling issues. I'm looking for good entry points to these discussions.
Relevant answer
Answer
David Harold Chester I shall check the paper on your profile. Thanks!
Anamaria Popescu Yes, I guess the strategy literature would also address these questions when it examines why strategies tend to go awry. I will have to walk over to the strategic leadership department to ask their views on this matter.
  • asked a question related to Complexity
Question
2 answers
Hi,
I'm preparing a PhD proposal to study an egocentric network in a primary health care setting in a low-/middle-income country, fed by a name-generator survey. I have 2 questions:
1. Any suggestions for the minimum sample size needed to preserve validity?
2. What is the estimated time needed to do an (egocentric) social network analysis of a sample of size X?
Any suggestions, references?
Many thanks!
Virginia
Relevant answer
Thanks Víctor, will try to check it in the future.
  • asked a question related to Complexity
Question
13 answers
By analysis we mean that we are studying existing algorithms: examining their features and applications, analysing and measuring their performance, studying their complexity, and improving them.
Relevant answer
Answer
time and space consumption during program execution
  • asked a question related to Complexity
Question
1 answer
What would be the generation time of E. coli growing at 37 °C in complex medium?
It takes 40 minutes for a typical E. coli cell to completely replicate its chromosome. Simultaneous to the ongoing replication, 20 minutes of a fresh round of replication is completed before the cell divides. What would be the generation time of E. coli growing at 37 °C in complex medium?
A. 20 min    B. 40 min
C. 60 min    D. 30 min
Relevant answer
Answer
Maybe this article helps. In my opinion, anything below 30 min is insufficient even in complete medium.
  • asked a question related to Complexity
Question
1 answer
I have built the neurites using the 'Run' dropdown option.
I have tried to set the order of the neurites (as 0, with propagate) and used the neurite tracer to set the soma (this appears red).
I have also tried 'set 1-selected as origin' from the Edit dropdown, but it still says there are multiple origins in the model.
Relevant answer
Answer
I suppose the ANFIS method could be a good option for Sholl analysis with Z-stack 3D images. I have used ANFIS for some similar work; you can take a glance at it.
  • asked a question related to Complexity
Question
8 answers
Dear friends and professors, I have a question. In fact, this is a question asked by a reviewer, and I have to address his/her concern.
I have coded a MIP model in GAMS and applied CPLEX as an exact method to solve instances. I am able to solve a relatively large-scale instance with about 23,000 decision variables and 4,000 constraints in less than a minute. I would like to ask for your clarification on how it is possible to solve such a large-scale instance in less than one minute. Is there any specific reason?
His/her concern: "The model is really complex with thousands of constraints. I have difficulty in imaging that the exact method needs few seconds..."
I want to express my appreciation for your efforts in advance. In addition, I want to thank you for the privilege of your time.
Relevant answer
Answer
The time taken to solve a MILP depends not only on the size, but also on the gap between the LP bound and the optimum, and the proportion of the basic feasible LP solutions that are fractional. I have encountered MILPs with tens of thousands of variables that solve in a minute, and ones with only a few hundred variables that take hours.
  • asked a question related to Complexity
Question
4 answers
I’m currently involved in a research project related to Highly Integrative Questions (HIQs).
To define the landscape of those "next-level client questions" we initiated a research project:
How to define HIQs?
How to approach HIQs?
What are cases that relate to HIQs?
How can we learn from those cases?
What kind of guidance and facilitation are needed in the process?
Some buzzwords: Complexity Theory, Integrative Thinking, Social Innovations
Relevant answer
Answer
@everyone thanks for the answers. This is exactly what our research is aiming for:
understanding highly integrative questions and complex systems as a prerequisite for innovating new services and products! I'm fascinated by the Cynefin framework, because it shows different levels of how to approach innovation through a systemic lens...
@Brian I'm not really into deductive and inductive reasoning.
Integrative thinking, in simple words, is making a synthesis of A and B. It requires an ability to think through tension to reach the synthesis. A good example can be found in the emergence of the Linux company Red Hat and the thinking of its CEO Bob Young. It is also worth looking at the work of Roger Martin.
  • asked a question related to Complexity
Question
3 answers
How can one remove excess triethylamine from the reaction mixture?
Relevant answer
Answer
It is done on the basis of the different solubilities of the inorganic compounds and Et3N; it depends primarily on the solubility of the inorganic compounds.
  • asked a question related to Complexity
Question
2 answers
If a gold(III) solution is preferred, then how does one prepare a 0.05 M solution in 30 mL?
Relevant answer
Answer
Typically, for Au(III) chemistry we use K[AuCl4] as the gold source. This salt is soluble in water and other polar solvents. Of course, it's difficult to suggest alternatives without knowing a bit more about the particular reaction that you want to carry out.
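If K[AuCl4] is indeed the source (as suggested above), the mass needed for the asker's 30 mL of 0.05 M solution is a one-line calculation; the molar mass below is assembled from standard atomic weights:

```python
MW = {"K": 39.10, "Au": 196.97, "Cl": 35.45}  # g/mol, standard atomic weights

def grams_needed(molarity, volume_ml, molar_mass):
    """Mass of solute required for a target molarity and volume."""
    return molarity * (volume_ml / 1000.0) * molar_mass

m_kaucl4 = MW["K"] + MW["Au"] + 4 * MW["Cl"]   # ~377.87 g/mol for K[AuCl4]
grams = grams_needed(0.05, 30, m_kaucl4)       # ~0.57 g in 30 mL
```

So roughly 0.57 g of K[AuCl4] dissolved and made up to 30 mL gives 0.05 M; the figure changes if a different gold(III) salt (e.g. a HAuCl4 hydrate) is used.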
  • asked a question related to Complexity
Question
1 answer
We are trying to measure the concentration of Fe(III)EDTA complex which is converted to Fe(II)EDTA complex after a reaction. So, by this measurement, we will be able to calculate the extent of the reaction.
Relevant answer
Answer
Please see the link below:
Chelatometric Determination of Ferrous Iron with 2-Pyridinealdoxime as an Indicator, Journal of Pharmaceutical Sciences, September 1963, Volume 52, Issue 9, pp. 858–860.
Another interesting PDF is enclosed...
  • asked a question related to Complexity
Question
5 answers
I want to know how to search for papers about algorithm complexity, for algorithms such as genetic programming, the Kalman filter, the particle filter, and so on. I can't find papers about this. Thanks.
Relevant answer
Answer
Given the usual choices (point mutation, one-point crossover, roulette-wheel selection), a genetic algorithm's complexity is O(g(nm + nm + n)), with g the number of generations, n the population size, and m the size of the individuals. Therefore the complexity is on the order of O(gnm).
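That estimate can be made concrete by counting elementary operations per generation; a sketch whose per-phase costs assume the standard operators named above:

```python
def ga_operation_count(g, n, m):
    """Elementary operations of a plain generational GA:
      roulette-wheel selection: one pass over n fitnesses,
      one-point crossover: copies ~n*m genes,
      point mutation: visits n*m genes.
    Total g*(n*m + n*m + n), i.e. O(g*n*m)."""
    ops = 0
    for _ in range(g):
        ops += n        # selection
        ops += n * m    # crossover
        ops += n * m    # mutation
    return ops

print(ga_operation_count(10, 20, 30))  # -> 12200
```

Fitness evaluation is deliberately excluded here: it is problem-specific and often dominates, so it enters the bound as an extra factor of the per-individual evaluation cost.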
  • asked a question related to Complexity
Question
2 answers
Thorin (IUPAC: Disodium 3-hydroxy-4-[(2-arsonophenyl)diazenyl]naphthalene-2,7-disulfonate)
Relevant answer
Answer
Thorin is a tridentate ligand; barium can coordinate two of them in an octahedral coordination. For a discussion of tridentate azo compounds, see Comprehensive Coordination Chemistry, ed. Sir G. Wilkinson, Pergamon, 1987, vol. 6, Section 58.2.3, pp. 46ff.
  • asked a question related to Complexity
Question
3 answers
I am looking for a library to be integrated in LabWINDOWS CVI.
Relevant answer
  • asked a question related to Complexity
Question
2 answers
When I want to measure the emission wavelength of a solution of a molecule with a spectrofluorometer, I need to enter a value for the excitation wavelength.
Now, how can I determine and measure the excitation wavelength of a solution of a complex or molecule?
Relevant answer
Answer
Yuri Kalambet
Thanks so much for your reply.
  • asked a question related to Complexity
Question
1 answer
I need it for managing complex events in a transportation project.
Relevant answer
Answer
We are using Odysseus (developed by researchers at the University of Oldenburg), a data stream management and processing engine, for real-time analysis and feedback mechanisms. It is freely available and generates higher-order information adequately. It also provides flexible processing concepts and efficient resource management tools.
  • asked a question related to Complexity
Question
2 answers
And it is becoming more and more complex with time.
Relevant answer
Answer
  • asked a question related to Complexity
Question
5 answers
Soil as a reference is very complex and needs to be simplified for experimental purposes. What could be a true substitute for the soil medium?
Relevant answer
Answer
A very valid point, Dr Senapati. I don't think there is any substitute for soil, considering all the complexities involved in unraveling the adsorption and exchange chemistry associated with soil fertility dynamics and plant nutrition. Another reason, I strongly feel, is that our commercial crop-growing medium is undoubtedly soil; studying crop behaviour without soil will again put agricultural practitioners in a state of dilemma about dos and don'ts. However, for experimental purposes, we can ease out some of the complexities...
  • asked a question related to Complexity
Question
7 answers
Human thought works in a similar way wherever it is
Relevant answer
Answer
Ismail Noori Mseer Sir,
culture is the full range of learned human behavior patterns. Culture is also defined in terms of intercultural communication: it adds to the notion of communicative competence and enlarges it to incorporate intercultural competence. It builds understandings about one's own and others' cultural traditions, values and beliefs. It involves processes that may lead to an enhanced ability to move between cultures and to cultural change. As a language teacher, I view culture as the customs, traditions or practices that people carry out as part of their everyday lives. Culture also refers to knowledge and skills that are more generalized in nature and transferable across cultures. This body of knowledge includes, among other things, the concept of culture, the nature of cultural adjustment and learning, the impact of culture on communication and interaction between individuals or groups, the stress associated with intense cultural and language immersion, coping strategies for dealing with that stress, the role of emotions in cross-cultural, cross-linguistic interactions, and so forth.
  • asked a question related to Complexity
Question
6 answers
It seems that the human brain uses hierarchy, among other tools, to reduce complexity when analyzing information. What is the neurological basis of this?
Relevant answer
Answer
For example, chunking: whenever a pattern of symbols repeats, it can be replaced by a single symbol. If you see a phone number 0 1 2 3 4 5 6 7 8 9 7 3, you can easily remember it although the number of symbols seems to exceed working-memory capacity. With the help of repeated chunking you get a tree-like data structure.
Regards,
Joachim
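A minimal sketch of this idea (my own illustration, using a byte-pair-style replacement rule; the names `chunk` and `expand` are hypothetical, and this is not offered as a cognitive model):

```python
def chunk(seq, max_rounds=10):
    """Repeatedly replace the most frequent adjacent pair with a fresh symbol.

    Returns the compressed sequence plus the chunk dictionary; since every
    new symbol expands into a pair, the dictionary is a tree-like structure.
    """
    seq = list(seq)
    chunks = {}
    for round_id in range(max_rounds):
        # Count adjacent pairs.
        pairs = {}
        for a, b in zip(seq, seq[1:]):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
        if not pairs:
            break
        pair, count = max(pairs.items(), key=lambda kv: kv[1])
        if count < 2:
            break  # nothing repeats any more
        name = f"C{round_id}"
        chunks[name] = pair
        # Replace every occurrence of the pair with the new symbol.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(name)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, chunks


def expand(seq, chunks):
    """Undo chunking by walking the tree of chunk definitions."""
    out = []
    for s in seq:
        if s in chunks:
            out.extend(expand(chunks[s], chunks))
        else:
            out.append(s)
    return out
```

Working memory then only has to hold the few top-level symbols, while the chunk dictionary plays the role of the stored tree-like structure.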
  • asked a question related to Complexity
Question
3 answers
Hi all, I want to grow single crystals of my Ru(II) complexes. I really want to know how, for this type of complex.
Relevant answer
Answer
Hi Sir, you can try a solvent mixture of DCM (dichloromethane) and acetonitrile, or acetone and acetonitrile, but you have to ensure a slow evaporation process so that the crystals grow thick.
  • asked a question related to Complexity
Question
1 answer
Hi everyone,
I'm looking for an experimental task that can be altered between two conditions: high vs. low complexity/difficulty.
Paper-pencil as well as electronic is possible (ideally it is free of charge and easy to implement).
I'm looking forward to your suggestions and thank you all so much in advance!
Cheers,
mp
Relevant answer
Answer
Try to play with relations by adding or removing them. Most daily tasks lose or gain complexity by removing or adding a relation between objects (if you are not bound to classic vs. technical formats).
  • asked a question related to Complexity
Question
1 answer
Hi,
A triple quantum system is a complex case. I have worked on such systems but still need more opinions, and I want to share my knowledge with any interested researchers to strengthen our backgrounds in this field.
Relevant answer
Answer
I think that a single quantum system is enough. You need to talk to Eric Verlinde. He is promoting information in primordial qubits as the thing that got entangled and caused the universe and everything. It is a pity he had to put acceleration in by hand, though.
  • asked a question related to Complexity
Question
17 answers
Dear colleagues,
It has been proposed that no single statistical measure can be used to assess the complexity of physiologic systems. Furthermore, it has been demonstrated that many entropy-based measures are "regularity" statistics, not direct indexes of physiologic "complexity". Finally, it has been stressed that increased/decreased irregularity does not imply increased/decreased physiologic complexity. However, it is common to keep finding interpretations such as "a decreased/increased entropy value of beat-to-beat interval series (RR sequences) reflects a decreased/increased complexity of heart rate variability", and even more so "this reflects a decreased/increased complexity of the cardiovascular control". So, which entropy-based measures actually quantify time-series complexity? Moreover, is it appropriate to interpret that because of a decreased/increased complexity in heart rate variability there is a decreased/increased complexity of the cardiovascular control?
Thanks in advance,
Claudia
Relevant answer
Dear Claudia,
Sorry for posting in your topic belatedly. However, this is a quite interesting topic and I am also looking to deepen my understanding. Here is my humble opinion.
The theory of physiologic complexity, as you stated, is supported by the idea that healthy systems will always be the most complex ones, because the multilevel interactions (crosstalks) and regulating mechanisms are at their best performance in such organisms/systems. Any breakdown or function loss in one or more of these regulatory mechanisms causes a decrease in complexity. Therefore, disease and aging are associated with complexity loss.
Some studies revealed that entropy can either increase or decrease in pathological situations. Considering the idea of physiologic complexity, entropy cannot be assumed to be a complexity index. Moreover, surrogate data tend to increase entropy (compared to the original signal), but the original signal is considered more complex than its surrogate, as it contains the information which was destroyed in the surrogate data.
So, why do some authors still associate entropy with complexity? Because complexity can be interpreted in different ways. If you consider that the complexity of the system can be characterized solely by the signal dynamics you are analyzing, and not by the system itself, you may assume that the more unpredictable the signal, the higher the complexity. In this case, entropy can be considered a measure of complexity. This is often called "dynamical complexity".
If we refer to proper books and papers, we will find that "complexity" does not have a universal definition. Instead, people describe the properties that complex systems usually show, which basically are:
1) Many interdependent elements, interacting with each other through nonlinear rules;
2) Structures over several scales;
3) Emergent behavior.
When we extend this to living organisms, it is clear that our systems (humans, animals) qualify as complex systems. But how could we detect this complexity from time series? Through the many measures proposed to extract information from those time series, e.g. asymmetry, fractals, entropies. And how much information, and which, is necessary to characterize complexity in time series? I think this is a very good question!
The final point is: can heart rate variability alone reflect the system's complexity? Although powerful, it is a single variable taken to characterize a very complex organism. It is more or less the same as reducing the dimensions used to characterize some object in space: information will be lost. Multivariate analysis may be more powerful, taking into account more than one variable concomitantly to assess the dynamics of the system. For example, one can record respiration, arterial pressure, ECG and EEG simultaneously, and use some methodology to characterize the complexity. However, due to several limitations, in many situations it is not possible to collect more than the ECG, from which we can calculate heart rate variability.
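As a concrete example of one such regularity statistic, here is a minimal sample-entropy sketch (plain Python, Chebyshev distance, tolerance r times the series standard deviation; a simplified illustration, not a validated implementation):

```python
import math


def _std(x):
    """Population standard deviation of a sequence of numbers."""
    mean = sum(x) / len(x)
    return (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5


def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts template pairs
    of length m within tolerance r*SD (Chebyshev distance) and A counts the
    pairs that still match at length m+1.

    Note: this is a regularity statistic, not a direct index of physiologic
    complexity -- exactly the caveat discussed in this thread.
    """
    n = len(x)
    tol = r * _std(x)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined: too few matches to estimate
    return -math.log(a / b)
```

A perfectly periodic series scores near zero, while a chaotic series scores higher — which tells you about the regularity of that one signal, not necessarily about the complexity of the system that generated it.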
  • asked a question related to Complexity
Question
1 answer
I am calculating the average binding energy for a protein-protein complex using g_mmpbsa.
I used 0-10 ns and 10-20 ns windows with 500 frames each, and I obtained different average binding energies in the two cases; the difference is really significant.
Relevant answer
Answer
Sir, my area is graph theory.
  • asked a question related to Complexity
Question
8 answers
I.e., how can one know which particular algorithm suits a given problem to get the exact optimal solution with minimal time complexity?
  • asked a question related to Complexity
Question
8 answers
Suggest T-symmetric Hamiltonians having real spectra.
Relevant answer
Answer
@Nikolay
H = 0 does not make any sense. Please suggest another one.
Prof Rath
  • asked a question related to Complexity
Question
3 answers
In terms of algorithms, can anybody tell me the significance of both runtime complexity measures?
Relevant answer
Answer
Dear Syed Taimoor Ahmed,
I think no, because big-Theta combines big-O and big-Omega. Big-O gives an upper bound on the complexity of the algorithm, while big-Theta gives a tight bound: it contains both an upper and a lower bound.
So T(n) = Theta(n log n) means there exist constants c1, c2 > 0 such that, for all sufficiently large n,
c1 * n log n <= T(n) <= c2 * n log n.
You cannot write something like
c1 * n log n <= Theta(n log n) <= O(n log n),
because O(n log n) denotes a set of functions, not a value.
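As an illustration outside the original exchange (a sketch; taking T(n) to be the comparison count of merge sort, a Theta(n log n) algorithm, and using constants 0.4 and 1 that simply happen to work for this input), the two-sided bound c1 * n log n <= T(n) <= c2 * n log n can be checked empirically:

```python
import math


def merge_sort_comparisons(arr):
    """Merge sort instrumented to count element comparisons, our T(n)."""
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, cl = merge_sort_comparisons(arr[:mid])
    right, cr = merge_sort_comparisons(arr[mid:])
    merged, comps = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comps += 1  # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps
```

For a reversed input of n = 1024 elements the count lands between 0.4 * n * log2(n) and 1.0 * n * log2(n), i.e. sandwiched both from below and from above, which is what Theta (and neither O nor Omega alone) expresses.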
  • asked a question related to Complexity
Question
3 answers
Complex 1 is [Cu(ip)2H2O](ClO4)2
Complex 2 is  [Cu(dppz)2H2O](ClO4)2
Based on absorption spectral techniques, I have studied the DNA-binding ability of these complexes. A reviewer has asked why complex 1 shows two absorption peaks while complex 2 shows only one.
Kindly help me find the answer.
Relevant answer
Answer
Dear Dr. Kumar:
The band centered at 252 nm is sharp and intense, being completely resolved in the wavelength domain. An additional band ascribed to structural distortion of the complex should exhibit contrary features.
Still, in the line of thinking in which the simplest answer is the most probable, let's suppose that you used an aqueous solution in the classical way.
It is possible that your compound exhibits low solubility in water; verify this.
In the above sense, investigate further the intrinsic optical features of the aqueous DNA solution, checking whether it has an absorption band at 250-265 nm, which is probable. In that case, the second intense band detected in the spectrum at around 252 nm would belong to the DNA solution.
Best regards
Marcos Nobre
  • asked a question related to Complexity
Question
4 answers
Can anyone explain to me what could have happened according to the points above?
Relevant answer
Answer
Could the question be clarified further?
  • asked a question related to Complexity
Question
2 answers
Hello Sir,
  1. How can one find the Maintainability Index, cyclomatic complexity, and DIT from open-source data?
  2. I have done this work, but I couldn't obtain the Maintainability Index. What tool should I use to calculate the Maintainability Index and cyclomatic complexity?
Relevant answer
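While I cannot recommend a specific tool, cyclomatic complexity itself is easy to approximate. This sketch (mine; it counts a simplified set of decision points, so dedicated tools may report slightly different numbers) computes it for Python source via the standard ast module:

```python
import ast


def cyclomatic_complexity(source):
    """McCabe-style cyclomatic complexity of Python source: 1 plus the
    number of decision points.

    Simplified: each if/for/while/except/conditional expression and each
    boolean-operator chain counts once, so real tools may differ slightly.
    """
    decision_nodes = (ast.If, ast.For, ast.While, ast.IfExp,
                      ast.ExceptHandler, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))
```

For the Maintainability Index you would additionally need Halstead volume and lines of code, which is why most people reach for a dedicated tool rather than computing it by hand.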
  • asked a question related to Complexity
Question
6 answers
Suppose I have to solve a problem which consists of two NP-complete problems.
Now I would like to know: what will be the complexity class of the new problem?
Can anyone suggest a paper regarding this topic?
Thank you in advance. 
Relevant answer
Answer
That depends on what "more" means. :-) If "more" is a constant (finite) number then it means that you solve a finite number of NP-complete problems - which is still NP-complete.
  • asked a question related to Complexity
Question
5 answers
Hi, the literature has shown how to measure managerial complexity using the CVF (Lawrence et al., 2009).
Is there any instrument for measuring employees' complexity?
Relevant answer
Answer
I have to agree with Xavier.  Questionnaires are easy, but not necessarily valid.
  • asked a question related to Complexity
Question
4 answers
Dear all,
Where can I find the complex refractive index of Ge and Te in the terahertz (THz) region, e.g. 0.1 to 2.5 THz?
          Thanks.
Relevant answer
Answer
Databases or academic articles. Just search with keywords like 'terahertz database', 'terahertz semiconductor', etc.
  • asked a question related to Complexity
Question
5 answers
  1. Must the constant of integration be real?
  2. Suppose the constant of integration is complex; what happens then?
  3. For the next question, please see the attached file.
Relevant answer
Answer
Dear Colleague,
Writing y^(1/2) = x + c and looking for a solution where x and c are real, y^(1/2) must be real, and y must be non-negative, being the square of a real-valued function. If you have y(0) = -1 < 0, this cannot happen.
Sincerely, Octav
  • asked a question related to Complexity
Question
2 answers
Dear researchers,
It is a usual trend that bimetallic complexes show better cytotoxicity than their monometallic analogs. If the trend is reversed, what may be the possible reasons?
Relevant answer
Answer
Normally, bimetallic complexes possess greater cytotoxicity than mononuclear complexes.
The lipophilicity of bimetallic complexes increases the permeability of the cell membrane, which favours good anticancer activity.
  • asked a question related to Complexity
Question
3 answers
Hello,
    I want to know the computational complexity of an RBF SVM: is it O(n^2) or O(n^3), and what is n here? Is it the number of training samples?
Thanks
Relevant answer
Answer
You can verify this via the condition number of the moment matrix.
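On the original question: n is usually the number of training samples. A rough way to see where the quadratic term comes from is that kernel methods materialize (implicitly or explicitly) the n-by-n Gram matrix. This sketch (mine, plain Python) builds it explicitly in O(n^2 * d) time, before any QP solver even runs; the solver then typically adds between roughly O(n^2) and O(n^3) on top, depending on caching and the number of iterations:

```python
import math


def rbf_kernel_matrix(X, gamma=1.0):
    """Dense RBF Gram matrix K[i][j] = exp(-gamma * ||x_i - x_j||^2).

    With n samples of dimension d, filling it costs O(n^2 * d) time and
    O(n^2) memory -- already quadratic in the number of training samples.
    """
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):  # exploit symmetry: compute upper triangle only
            sq = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = K[j][i] = math.exp(-gamma * sq)
    return K
```

The diagonal is always 1 (distance zero), and the matrix is symmetric, which halves the work but does not change the O(n^2) growth.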
  • asked a question related to Complexity
Question
3 answers
In what sense is "irreducibly complex" synonymous with "NP-complete"?
In what sense is "complicated" just "computationally-intensive but solvable"?
Relevant answer
Answer
It appears to me that, in the paper you cited, Chaitin uses the phrase "irreducibly complex" as a synonym for "algorithmically random." This is quite different from computational complexity such as NP-completeness. Algorithmically random is defined in terms of Kolmogorov complexity. The prefix-free Kolmogorov complexity, K(s), of a binary string, s, is defined to be the length of the shortest program that outputs s. Prefix-free means that no program is a prefix of another program. We assume a fixed optimal computer. Changing optimal computers changes the Kolmogorov complexities of strings, but the change is bounded by a single constant for all strings. Results in Kolmogorov complexity are usually stated "up to an additive constant", just as results in analysis of algorithms are usually stated "up to a multiplicative constant" (big O notation). An infinite binary sequence, x, is algorithmically random if there is a constant b such that for all n, K(x[n]) >= n - b, where x[n] is the initial segment of x consisting of the first n bits of x. This means that the initial segments of x cannot be described using short programs. The programs must be almost as long as the strings themselves. (The programs can be shorter only by the additive constant b.) Algorithmically random sequences cannot be computable. A computable sequence x can be described by a program of finite length, so an initial segment x[n] can be described by the finite program plus a description of n, which can be described using less than 2log(n)+O(1) bits. This implies x is not algorithmically random. However, although algorithmically random implies not computable, the converse implication doesn't hold. For instance, if x = x0, x1, x2, ... is algorithmically random, then x0, 0, x1, 0, x2, 0, ... is not computable, but it is not algorithmically random, since an initial segment of length n only contains n/2 bits of information.
Algorithmically random sequences are unpredictable in the sense that no program can predict the bits of the sequence with better than 50% accuracy.
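Since K itself is uncomputable, compressed length is often used as a crude, computable upper bound on it. This sketch (my illustration; zlib as the compressor, thresholds picked loosely) shows a highly regular string compressing far below its raw length, while seeded pseudo-random bytes — which also have a short program description, so their true K is small — do not compress at all:

```python
import random
import zlib


def compressed_len(data: bytes) -> int:
    """zlib-compressed length: a crude, computable upper bound on the
    (uncomputable) Kolmogorov complexity of `data`."""
    return len(zlib.compress(data, 9))


# A highly regular string: a short program ("repeat b'01' 5000 times")
# describes it, and the compressor actually finds that structure.
regular = b"01" * 5000

# Deterministic "noise": statistically random-looking, yet generated by a
# short seeded program -- pseudorandom, not algorithmically random.
_rng = random.Random(0)
noisy = bytes(_rng.randrange(256) for _ in range(10000))
```

Both strings are 10,000 bytes, yet only the first compresses — illustrating that practical compressors give upper bounds on K, not the quantity itself.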
  • asked a question related to Complexity
Question
3 answers
  1. If we compare different UML class diagrams with respect to size metrics and complexity, what type of conclusion can we reach?
Relevant answer
Answer
You can use "use-case points" or "class points" as a size metric in object-oriented projects. You can estimate the size of the project with the use of these metrics. 
  • asked a question related to Complexity
Question
1 answer
To determine kf, we can calculate it from the half-wave potential, but in this case we have the peak potential (Epa) because the reaction is irreversible. Can we take Epa as the half-wave potential?
Relevant answer
Answer
It's a very conductive environment, so typical views of antenna design and propagation aren't as important. Omega had a wavelength of 23 km, so many situations would be only a few wavelengths away. It's almost a near-field analysis, but we are more interested in skin depth than in most other things (much more than antenna tuning). DB
  • asked a question related to Complexity
Question
7 answers
I want to cleave a tertiary amine-borane complex.
Relevant answer
Answer
Thank you, Ercolani and Talaat. I'm trying the same, and once I get the result I'll let you know the strength of the borane-amine bond.
  • asked a question related to Complexity
Question
5 answers
benzoyl thiourea ligands + Cu(II) perchlorate → Cu(I) complex
Relevant answer
Answer
Moreover, now I can add that reduction of copper(II) acetate with Ph3P can give you a copper(I) complex of composition [Cu(ac)(Ph3P)3] or a mixed-valence copper(I,II) complex of composition [Cu4(ac)6(Ph3P)4] (Valigura et al., J. Chem. Soc., Dalton Trans., 1986(11), 2339-2344). The Cu(II) structure mentioned in the previous part was published two years later by Koman et al., Acta Crystallogr., C44(4), 601-603 (1988).
  • asked a question related to Complexity
Question
3 answers
How was Bactrocera invadens discriminated from its sister taxa in the B. dorsalis complex? Was it properly described?
Relevant answer
Answer
In a recent paper by Schutze et al. (Systematic Entomology, 2014, DOI: 10.1111/syen.12114), Bactrocera invadens was synonymized under Bactrocera dorsalis.
  • asked a question related to Complexity
Question
3 answers
I'm trying to synthesize organometallic complexes of [1,2-bis(diphenylphosphino)ethylene] with Mo, Cr, and W. Please, can anyone help me understand how these complexes can be prepared using nanomethods, and recommend some papers in this field?
Relevant answer
Answer
I expect that starting with the hexacarbonyl (W(CO)6, Mo(CO)6) you could react with the olefin. P. K. Baker wrote several papers describing the formation of W(CO)3(NCMe)3 (72 h reflux in acetonitrile), then reaction at ice temperature with iodine to obtain the seven-coordinate WI2(CO)3(NCMe)2 complex. M. M. Meeham and Baker then reported reacting this complex with various olefins, including the one you described, to obtain a complex similar to your target. Baker was my original PhD supervisor and wrote many papers in this area.
  • asked a question related to Complexity
Question
1 answer
I am going to do research on Aphelinus, but I can't find keys for the Americas. I have keys for the complex from some countries like India, Egypt and Israel.
  • asked a question related to Complexity
Question
5 answers
Would it be a combination of more than one machine or just one?
Which would be most efficient?
Relevant answer
Answer
Your question reduces to an old issue in computability theory by the simple realization that all possible arrangements of an object can be reduced to 6 numbers, 3 for coordinate position, and 3 for angular orientation, thus the arrangement of objects reduces to the problem of generating random numbers, which is well studied.  See https://en.wikipedia.org/wiki/Pseudorandomness for an overview.
There is no distinction in computability theory between one machine with two (or more) parts, and two machines.
Pseudorandom is more useful for testing and comparing various theories or systems, since it is statistically random, but actually repeatable.  A pseudorandom generator with an external uncontrolled seed, such as the timing of keystrokes, may be what you are thinking of with "two machines."  Making quantum measurements is thought to be truly random, and nowadays is not too hard to do.
But the real issue is, what are you going to use it for?  In economics, usually a sequence of numbers is not independent.  Many theorems assume independence of samples, but even such a simple thing as investment returns violates this assumption.  Next year's returns, assuming re-investment, depend to an extent on this year's returns. 
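The "6 numbers per arrangement" reduction and the repeatability of pseudorandomness can be sketched in a few lines (the names `random_pose` and `poses` are mine, purely illustrative):

```python
import random


def random_pose(rng):
    """One arrangement of a rigid object: 3 position coordinates plus 3
    orientation angles -- the '6 numbers' the reduction refers to."""
    position = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    angles = [rng.uniform(0.0, 360.0) for _ in range(3)]
    return position, angles


def poses(seed, count):
    """Pseudorandomness is repeatable: the same seed replays the same poses."""
    rng = random.Random(seed)
    return [random_pose(rng) for _ in range(count)]
```

Re-running with the same seed reproduces the arrangement sequence exactly (useful for comparing systems), while changing the seed, e.g. from keystroke timing, gives the externally seeded behaviour described above.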
  • asked a question related to Complexity
Question
6 answers
Is it true that any monic polynomial of degree 6 with complex coefficients can be represented as a sum of at most three summands, each being the cube of a polynomial of degree at most two?
B.Shapiro/Stockholm/
Relevant answer
Answer
 Dear Professor Breuer,
I am a professional and know well what kind of questions one should put to a broad audience to stimulate people's interest. The question I posed is  elementary and unsolved and does not have a long history behind it. It might have an elementary answer (which I could not find). If somebody has any reasonable suggestion  about this case, I can easily generalize it to a small theory.
As for the style of questions, my former advisor Professor Vladimir Arnold (the famous one) once said that there is a Russian style of formulating mathematical questions and a French style. The Russian style (which I try to follow) is to formulate a mathematical problem in its simplest unknown case. The French style is to pose a problem in its maximal generality.
Warmest regards, B.Shapiro
  • asked a question related to Complexity
Question
7 answers
Say we have a complex network made of n sub-networks and m nodes. Some of the sub-networks share some of the m nodes. Say that such complex network (aka Interdependent Network) is under attack, and say that this attack is not targeted (e.g., does not look for high degree nodes only) nor random, but spatial (both low degree and high degree nodes are being removed). Now, say that the failure cause is external, in addition to being spatial, and that it can feature many levels of spatial extent. Hence, the higher the level, the higher the number of nodes involved, the higher the disruption (theoretically). My problem relates to the failure threshold qc (the minimum size of the disrupting event that is capable of producing 0 active nodes after the attack).
My question: does the failure threshold qc depend on how nodes are connected only (e.g., the qc is an intrinsic feature of the network)? Or is it a function of how vast the spatial attack is? Or does it depend on both?
Thank you very much to all of you.
Francesco
Relevant answer
Answer
Yes it's likely to depend on the structure of the network, as it does for a simple network. The situation you describe is quite specific, so I'm not sure the answer is known. The best way is to try to find out!
You might find this review helpful, for some basic ideas http://arxiv.org/abs/0705.0010
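In the "try to find out" spirit, here is a minimal experiment one could run (my sketch on a single 2-D lattice, not an interdependent network; the helper names are mine). It removes a spatially localized block of nodes of varying radius and measures the surviving largest connected component, making the dependence on the attack's spatial extent directly visible:

```python
from collections import deque


def spatial_attack(n, center, radius):
    """A localized failure: remove every node of an n x n grid within
    Chebyshev distance `radius` of `center` (the 'spatial extent' knob)."""
    cr, cc = center
    return {(r, c) for r in range(n) for c in range(n)
            if max(abs(r - cr), abs(c - cc)) <= radius}


def largest_component_fraction(n, removed):
    """Fraction of nodes in the largest connected component of an n x n
    4-connected grid after removing the node set `removed` (BFS flood fill)."""
    alive = {(r, c) for r in range(n) for c in range(n)} - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            r, c = queue.popleft()
            size += 1
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in alive and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best / (n * n)
```

Sweeping the radius while also varying the topology (grid vs. other wirings) would separate the two effects the question asks about: what comes from the structure and what comes from the geometry of the attack.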
  • asked a question related to Complexity
Question
29 answers
One of the interesting classes is the class of quotients of complex polynomials. Is this class dense in C(S^2, S^2) in the compact-open topology?
Relevant answer
Answer
Rogier Brussee is right:
the meromorphic functions are not dense in the continuous functions C^0(S^2, S^2). This is because the degree of a meromorphic function (i.e. a holomorphic map CP^1 → CP^1) is non-negative. But, as I said before, the set of rational functions R(S^2, S^2) on S^2 is closed with respect to the spherical metric. Of course there is a continuous function in C^0(S^2, S^2) which is not rational, and therefore the meromorphic functions are not dense in the continuous functions. Perhaps an interesting question is:
whether every homeomorphism can be approximated by quasiconformal mappings on S^2.
  • asked a question related to Complexity
Question
2 answers
There are plenty of other reference materials posted in my questions and publications on ResearchGate. I think it is not enough for someone to claim that the sequences I've found are pseudo-random, as their suggestion of a satisfying answer to the question here posed.
If indeed the complexity of a sequence is reflected in the algorithm that generates the sequence, then we should not find that high-entropy sequences are easily described by algorithmic means.
I have a very well defined counter example.
Indeed, so strong is the example that the sound challenge is for another researcher to show how one of my maximally disordered sequences can be altered such that the corresponding change to measured entropy is a positive, non-zero value.
Recall that with arbitrarily large hardware registers and an arbitrarily large memory, our algorithm will generate arbitrarily large, maximally disordered digit sequences; no algorithm changes are required.  It is a truly simple algorithm that generates sequences that always yield a measure of maximal entropy.
In what way are these results at odds, or in keeping, with Kolmogorov?
Relevant answer
Answer
Anthony, there are other posts that I've made where it is determined that the sequence is a de Bruijn sequence. What is most important about my work is the speed with which such a sequence can be generated; for a 100-million-digit de Bruijn sequence, my software produces it in less than 30 minutes. Also, while randomness is not computable, it is also true that random sequences are biased away from maximal disorder. Thanks for the reply.
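For reference, a compact way to generate such sequences is the standard FKM (Lyndon-word) construction; this sketch is generic and is not the poster's own algorithm:

```python
def de_bruijn(k, n):
    """de Bruijn sequence B(k, n): read cyclically, every length-n word over
    k symbols occurs exactly once (standard FKM / Lyndon-word construction).
    """
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])  # append the Lyndon word just built
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

The output has length k^n and is "maximally packed" in the sense that no length-n window repeats, which is the property behind the maximal-entropy measurements discussed in the question.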
  • asked a question related to Complexity
Question
1 answer
Has anyone tried geometry optimization in Unigraphics NX? If yes, could you share the complexity you have tried, and has it fetched you any good results?
Thanks for your time.
Relevant answer
Answer
I am interested in your question (it is similar to my case) and will explain in detail soon. I am in a bit of a hurry at this time; please consult me.
  • asked a question related to Complexity
Question
4 answers
I am using the A* algorithm in my research work. Please suggest some research papers or articles which prove the A* algorithm's complexity.
Relevant answer
Answer
With many heuristics, because of their nature, it is sometimes better to estimate "the complexity" based on the number of iterations (or the number of objective-function evaluations) that the algorithm requires. Complexity analysis for heuristics is kind of tricky and should be associated with other metrics.
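To make the "count iterations" suggestion concrete, here is a minimal sketch of my own (not from any paper) of A* on a 4-connected grid that reports the number of node expansions alongside the path length. In the worst case the number of expansions grows exponentially with solution depth, which is exactly why empirical counts are often reported instead:

```python
import heapq


def astar(grid, start, goal):
    """Minimal A* on a 4-connected 0/1 grid (0 = free) with the Manhattan
    heuristic. Returns (path_length, nodes_expanded); the expansion count is
    the practical metric worth reporting for heuristic search."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]  # (f = g + h, g, node)
    g = {start: 0}
    closed = set()
    expanded = 0
    while open_heap:
        _f, cost, node = heapq.heappop(open_heap)
        if node in closed:
            continue  # stale heap entry
        closed.add(node)
        expanded += 1
        if node == goal:
            return cost, expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None, expanded  # goal unreachable
```

With an admissible, consistent heuristic on an explicit graph the behaviour degenerates to Dijkstra-like O(E log V), but on implicit search spaces only the expansion count really tells you what the heuristic bought you.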
  • asked a question related to Complexity
Question
4 answers
Hi all
I know this replacement is made because of the complexity of spiking-neuron computations, and because many supervised learning algorithms use gradient-based methods it is difficult to use such complex neuron models. Here I have two questions:
1) If we use a simple model (like the Izhikevich model), do we still have to use such a substitution?
2) Is this replacement just for supervised learning algorithms, or is it also necessary in unsupervised learning? Considering that in unsupervised learning there is no gradient and no back-propagation (if I understand correctly).
please help me.
Relevant answer
Answer
thank you
  • asked a question related to Complexity
Question
14 answers
Does anybody know of good open-source remeshing software? The MeshLab recipe is eminently complex and doesn't work very well. Magics' remeshing functionality is disabled for now. Hypermesh is not open source. Looking for alternatives...
Relevant answer
Answer
You might be interested in gmsh or netgen.
  • asked a question related to Complexity
Question
2 answers
Hello,
Does anyone know what W[1]-hard means in the context of parameterized complexity?
Relevant answer
Answer
Thank you for your kind reply.
  • asked a question related to Complexity
Question
7 answers
I am trying to inhibit the APC in 3rd instar larval Neuroblasts, but conventional APC inhibition with MG132 does not seem to work... Any suggestions would be highly appreciated.
Relevant answer
Answer
The best is to use mutants or RNAi (VDRC lines) against APC components (Cdc20 or APC subunits). It worked well in our hands (strong metaphase arrest).
  • asked a question related to Complexity
Question
6 answers
How can I remove unreacted 1,5-diphenylcarbazide? Would it also dissolve in most solvents?
Relevant answer
Answer
Thank you for your opinion.
  • asked a question related to Complexity
Question
11 answers
If the right-hand limit and the left-hand limit exist, that means the limit 'exists'.
But this is wrong... why?
Even in the two-variable case, lim as (x,y)->(0,0) of (x^2 y)/(x^4 + y^2) does not exist, because the value of the limit changes depending on the path taken.
Why does that mean the limit does not exist?
Relevant answer
Answer
"For every real ε > 0, there exists a real δ > 0 such that for all real x, 0 < | x − p | < δ implies | f(x) − L | < ε. "
To expand on this in the way I think more texts should immediately after introducing the above "limit definition", let's examine the idea of having a different right-hand and left-hand limit.
Suppose that as x approaches a our function approaches R, the right hand limit, AND IT ALSO approaches L, the left-hand limit as x approaches a from the other direction. Using the limit definition, that means that for the right-hand limit R there must exist a δ1 such that for all x, if  0<|x-a|<δ1, then |f(x)-R| < ε. In other words, for R to be a limit, it has to satisfy the definition.
Likewise, there must be a δ2 such that for all x, if  0<|x-a|<δ2 then |f(x)-L| < ε.
However, the limit definition demands we have a δ such that this:
(1) for all x, if  0<|x-a|<δ, then  |f(x)-R| < ε AND  |f(x)-L| < ε
or that both the right-hand limit R and the left-hand limit L satisfy the limit definition. So we take the minimum of δ1 and δ2 and let δ equal that minimum. Now we can see if it is possible for (1) to be true. We've got our δ, so now we look for a good ε. Since we are asserting that (1) holds true for two different limit values R & L, we can set ε = |R-L| /2, i.e. the difference between the two limit values divided by 2 (the difference must be positive, because R is not equal to L). So now we have, for all x,
(2) if  0<|x-a|<δ, then  |f(x)-R| < |R-L| /2
AND
 |f(x)-L| < |R-L| /2
If there can be a right-hand and left-hand limit that have different values but the limit exists, then (2) should be true, as the limit definition requires that for ALL ε>0, we find a δ>0 as in (2). But (2) CAN'T be true, for if  0<|x-a|<δ then
(3)  |R-L|= | R- f(x) + f(x) - L | <= | R- f(x)| + |f(x) - L | < |R-L| /2 + |R-L| /2 = |R -L|
It's kind of hard to see given the makeshift math symbols and font, but the above says that |R-L| is less than something it is equal to, a blatant contradiction. Therefore, there cannot exist two different limits R & L; we must have R = L.
I am generally of the opinion that speaking of right-hand or left-hand limits isn't very helpful to understanding limits but can quite easily make things more difficult, as the moment one is talking about limits in higher dimensions there's not really any analogue, just ways in which thinking about a function approaching from only two directions can lead you astray. Generally, calculus classes rely on a lot of pre-calculus mathematics so that the more difficult calculus concepts can be handled by learning a bunch of rules. The limit definition perfectly supplied by Weiguo Xie can be quite conceptually challenging, so often enough little time is spent using it, and that little time involves doing practice problems that don't really help you understand what limits are (most of the "big name" calculus textbooks have few if any problem sets that include ε-δ proofs, one of the few kinds of problems that can really help one understand the nuances of limits). However, all of calculus and analysis are built upon them.
  • asked a question related to Complexity
Question
2 answers
How can I get a stable Sr-Xylenol Orange complex?
Regards,
Rohit Dave
Relevant answer
Answer
Thanks Dr. Ravi
  • asked a question related to Complexity
Question
21 answers
I ask this because one of the most debated complexes in dinoflagellate taxonomy, the Alexandrium tamarense complex, is composed of three Alexandrium morphospecies: A. tamarense, A. catenella and A. fundyense. This complex, however, forms five distinct groups limited to certain geographic locations, each containing multiple morphospecies.
Relevant answer
Answer
According to the ICZN, species can be named using whichever character evidence you want, as long as they are diagnosable and published in a correct manner. Species that I have seen described based only on molecular evidence usually accomplish the "diagnosable" part by having a short diagnosis section listing the actual bases which are present at the species-diagnostic sites in one or more gene sequences.
  • asked a question related to Complexity
Question
45 answers
I always suspected we could (though I've studied them in pure math a lot), but what about using them extensively in quantum mechanics?
Relevant answer
Answer
Perhaps the quintessential component of quantum mechanics, at least with respect to what makes it qualitatively different from classical mechanics and indeed classical conceptions of physics (and which is behind much of what makes up the "weirdness" of quantum mechanics), is how probability is calculated. Being someone who has studied mathematics, you are doubtless aware that probability is found everywhere in the sciences and in other disciplines, not to mention the role it plays in philosophy, interpretations of causality, epistemology, etc. However, nobody calculates probabilities the way quantum mechanics requires (i.e., calculating the probability amplitude, a complex number, and taking its mod square). We cannot do this using the structure of any real-valued space; rather, we require the structure provided by the complex plane (and, obviously, Hilbert space). The motivation for this method of computing all relevant probabilities wasn't mathematical even in the sense that numerous aspects of the standard model were (i.e., once QM was sufficiently developed it was extended to incorporate classical electrodynamics and other fields of physics, such as field theory, but mostly via mathematics, not empirical study). The easiest way to see this (IMO) is via the classic double-slit experiment. Classical physics tells us that the probability that, e.g., an electron went through slit 1 or slit 2 is obtained by adding the two probabilities. Simple, easy, straightforward, and wrong. Quantum mechanics gives us the right results provided that we calculate this probability from complex amplitudes: we add the amplitude for traveling through slit 1 and the amplitude for traveling through slit 2, and take the mod square of the sum (the sum multiplied by its complex conjugate).
Nobody knows why that's the rule, and nobody wanted it to be the rule. So if you can formulate a mechanics that reproduces these successes without reliance on complex numbers, I'll be in the first row when you receive your Nobel prize, and the one desperately trying to get your attention for the after-party I'm sure you'll throw in order to say thanks. However, I rather suspect that the only way one can do away with complex numbers and retain modern physics is by introducing mathematical spaces, structures, and/or operations that make quantum physics more complicated than it is when we use complex numbers.
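The arithmetic behind this is easy to check numerically. Here is a minimal sketch (the amplitude values are made up purely for illustration) showing that "add the probabilities" and "add the amplitudes, then take the mod square" differ by exactly the interference cross term:

```python
import cmath
import math

# Illustrative probability amplitudes (made-up values) for the two routes.
psi1 = cmath.exp(0.3j) / math.sqrt(2)   # amplitude: electron via slit 1
psi2 = cmath.exp(1.7j) / math.sqrt(2)   # amplitude: electron via slit 2

# Classical rule: probabilities of mutually exclusive routes simply add.
p_classical = abs(psi1) ** 2 + abs(psi2) ** 2          # 0.5 + 0.5 = 1.0

# Quantum rule: add the amplitudes first, then take the mod square.
p_quantum = abs(psi1 + psi2) ** 2

# Expanding |psi1 + psi2|^2 gives |psi1|^2 + |psi2|^2 plus a cross term,
# 2*Re(conj(psi1)*psi2) -- the interference term with no classical analogue.
interference = 2 * (psi1.conjugate() * psi2).real

print(abs(p_quantum - (p_classical + interference)) < 1e-12)   # True
```

The two rules agree only when the interference term happens to vanish, which is precisely what a which-slit measurement enforces.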
  • asked a question related to Complexity
Question
6 answers
I am having some difficulty finding established time-complexity results in the literature for ILS applied to the TSP. Can anybody help, please?
Relevant answer
Answer
Hi Patricia
Actually, the best implementation of Lin-Kernighan is probably K. Helsgaun's (LKH). This was the algorithm used by the Concorde people to solve the largest TSP instance solved to optimality (pla85900, with 85,900 cities). You can get articles about it and download it from:
Best Regards Thomas
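Since, as far as I know, there is no established worst-case bound for ILS on the TSP as a whole, what the literature usually quotes is the per-iteration cost of the embedded local search. As a rough illustration (my own sketch, not Helsgaun's code), a single full 2-opt improvement pass, the basic move underlying Lin-Kernighan-style searches, examines O(n^2) segment reversals:

```python
import itertools

def tour_length(tour, dist):
    """Total length of the closed tour under distance matrix `dist`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt_pass(tour, dist):
    """One full 2-opt improvement pass over all O(n^2) segment reversals.

    The number of passes needed to reach a local optimum has no known
    polynomial bound in general, which is why per-iteration rather than
    total complexity is typically reported for ILS-style TSP heuristics.
    """
    n = len(tour)
    best = tour[:]
    for i, j in itertools.combinations(range(1, n + 1), 2):
        # Reverse the segment best[i:j] and keep the change if it helps.
        candidate = best[:i] + best[i:j][::-1] + best[j:]
        if tour_length(candidate, dist) < tour_length(best, dist):
            best = candidate
    return best
```

For example, with four cities at the corners of a unit square, one pass uncrosses the tour [0, 1, 2, 3] into the optimal perimeter of length 4. A full ILS wraps such passes in a perturb-and-restart loop, which is where the unbounded part of the running time comes from.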
  • asked a question related to Complexity
Question
4 answers
Apart from the APRIORI algorithm.
Relevant answer
Answer
Hypothesis-free data mining? Or visualizing complex data to support experts in interpretation?
For the second, look at
  • asked a question related to Complexity
Question
7 answers
Tuberous Sclerosis Complex (TSC) was discovered in the 1880s by a French physician named Bourneville. TSC is an incurable multisystem genetic disorder which can have a wide range of effects. Approximately one in 8,000 adults and one in 6,000 newborns are affected by TSC.
Relevant answer
Answer
I agree with all the previous comments. Even though angiofibromas are only a minor problem in TSC, they are visible, and when patients ask for help we can remove them. Patients are happy even knowing the lesions could recur. I destroyed them under local anesthesia by electrocoagulation or Erbium-CO2 laser. In fact, I agree with Dr. Adil H.H. Bashir: I prefer EC because the procedure is shorter and has better hemostatic qualities.
  • asked a question related to Complexity
Question
4 answers
I was wondering which method you would recommend for determining the time complexity of human-readable algorithms obtained from a hyper-heuristic framework.
Many thanks for your help in advance.
Relevant answer
Answer
Is there any particular difference between those types of algorithms and what one would consider for any kind of algorithm?
Usually, time complexity is a formal construct that we use for any kind of algorithm, based on the asymptotic number of steps (considering worst-case, average-case, or best-case analysis). The only thing we require (and many would define an algorithm to have this property) is that it terminates in a finite number of steps. If that applies here, there is a strong body of literature in theoretical computer science, in particular on the analysis of algorithms (these would be your O(blah)'s and more). Any reasonable theoretical computer science text will do for this (e.g., Introduction to Algorithms). My real question is: why don't these standard techniques apply? I understand each of the terms you provided, but don't quite follow why you wouldn't use the standard approach to algorithm analysis, given your model of computation. The model can change, but the definitions need not change unless you have something particular to account for (e.g., on a parallel machine model, you may take the time complexity of the whole algorithm to be that of the longest-running thread/processing unit).
If the whole algorithm doesn't terminate in finite time, it makes analysis a little less concrete, but you can also do the following:
1) Determine the time complexity of a single iteration of the algorithm. If comparable algorithms are analyzed this way, it can be a useful measure.
2) Average-case analysis may be more useful. Consider the distribution of how the instances behave, then go right ahead.
I hope this helps :)!
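To make the standard approach concrete: the usual method is to count the dominant basic operation as a function of the input size n. A toy example (my own, assuming nothing about your hyper-heuristic's output) counts comparisons in insertion sort, giving a worst case of n(n-1)/2 = O(n^2) and a best case of n-1 = O(n):

```python
def insertion_sort_comparisons(a):
    """Sort a copy of `a`, returning (sorted_list, comparison_count).

    Worst case (reverse-sorted input): n(n-1)/2 comparisons, i.e. O(n^2).
    Best case (already sorted): n-1 comparisons, i.e. O(n).
    """
    a = a[:]
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break           # element is in place; stop sinking
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comparisons

n = 50
_, worst = insertion_sort_comparisons(list(range(n, 0, -1)))
_, best = insertion_sort_comparisons(list(range(n)))
print(worst == n * (n - 1) // 2)   # True
print(best == n - 1)               # True
```

The same counting discipline applies to a human-readable algorithm from a hyper-heuristic: identify the dominant operation, express its count in terms of the input size, and take the asymptotic bound.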
  • asked a question related to Complexity
Question
7 answers
WSNs are low-complexity, low-power devices. Has any work been done on providing security without using cryptography or other complex mechanisms?
Relevant answer