Complexity - Science topic

Explore the latest questions and answers in Complexity, and find Complexity experts.
Questions related to Complexity
  • asked a question related to Complexity
Question
3 answers
I'm thinking of evolution: is there some end point?
Relevant answer
Answer
Apologies for the confused question. I was equating organisation with complexity.
  • asked a question related to Complexity
Question
61 answers
COMPLEXITY IN SCIENCE, PHILOSOPHY, AND CONSCIOUSNESS:
DIFFERENCES AND IMPORTANCE
Raphael Neelamkavil, Ph.D., Dr. phil.
1. Introduction
Apologizing in advance for repeating a few definitions in the various arguments below, and justifying this as necessary for clarity, I begin by differentiating between the foundations of the concept of complexity in the physical sciences and in philosophy. I reach a conclusion as to what is problematic in the concept of complexity: the complexity in physical and biological processes may not be differentiable in terms of complexity alone.
Thereafter I build a concept much different from complexity for application in the development of brains, minds, consciousness etc. I find it a fine way of saving causation, freedom, the development of the mental, and perhaps even the essential aspects of the human and religious dimension in minds.
Concepts of complexity considered in the sciences are usually taken in general as a matter of our inability to achieve measuremental differentiation between certain layers of measurementally integrated events within a process or set of processes and the same sort of measurementally integrated activities within another process or set of processes.
But here there is an epistemological defect: We do not get every physical event and every aspect of one physical event to measure. We have just a layer of the object’s total events for us to attempt to measure. This is almost always forgotten by any scientist doing complexity science. One tends to generalize the results for the case of the whole object! Complexity in the sciences is not at all a concept exactly of measurement of complexity in one whole physically existent process within itself or a set of processes within themselves.
First, what is termed complexity in an entity is only the measure of our inability to achieve measurements of that part of a layer of process which has been measured or which we have attempted to measure. Secondly, there is always a measuremental comparison in the sciences in order to fix the measure of complexity in the aspects that are measured or attempted. This is evidently a wrong sort of concept.
The essential difference here must be sharpened further. As a result of what is said above, the following seems more appropriate. Instead of being a measure of the complexities of one or a set of processes, complexity in science is a concept of the difference between (1) our achieved abilities and inabilities to achieve the measurement of actual complexity of certain levels of one physical process or a set of processes and (2) other types of levels of the extent of our ability and inability to measurement within another process or set of processes. This is strange with respect to the claims being made of complexity of whichever physical process a science considers to measure the complexity.
If a scientist had a genuine measurement of complexity, one would not have called it complexity. We have no knowledge of a higher or highest complexity to compare a less intense complexity with. In all cases of complexity science, what we have are just comparisons with either more or less intense complexities. This makes the concept of complexity very complex to deal with.
2. Is Complexity Really Irreducible?
On a neutral note, each existent physical process should possess great complexity. How much? We do not know exactly; but we know exactly that it is neither infinite nor zero. This truth is the Wisdom of complexity. Let us call it complexity philosophy. This philosophical concept of complexity within the thing itself (CI) is different from the methodologically measurement-based concept of complexity (CM) in the sciences. In CM, only the measured and measurable parts of complexity are taken into consideration and the rest of the aspects and parts of the existent physical process under consideration are forgotten.
If this were not true, the one who proposes this is bound to prove that all the aspects and parts of the physical process or at least of the little layer of it under measurement are already under any one or more or all measurementally empirical procedures with respect to or in terms of that layer of the process.
To explain the same differently, the grade of complexity in the sciences is the name of the difference (i.e., in terms of ‘more’ or ‘less’) between the grades of difficulty and ease of measuring a specific layer of causal activity within one process and a comparable or non-comparable layer of causal activity in another.
Both must be measured in terms of the phenomena received from them and the data created of them. Naturally, these have been found to be too complex to measure well enough, because we do not measure directly, but instead measure in terms of scales based on other, more basic scales, phenomena, and data. But the measure-elements titled infinite-finite-zero are slightly more liberated from directly empirically bound notions. I anticipate some arguing that even these are empirically bound. I fully agree. The standpoint from which I called the former "formed out of directly empirically bound notions" is different, that is all.
Both the above (the grades of difficulty and ease of measuring a specific layer of causal activity within one process and a comparable or non-comparable layer of causal activity in another) must be measured in terms of certain modes of physical phenomena and certain scales set for these purposes. But this is not the case about the scale of infinity-finitude-zero, out of which we can eternally choose finitude for the measure of ease and difficulty of measuring a specific layer of causal activity without reference to any other.
The measure-difference between the causal activities is not the complexity, nor is it available to be termed so. Instead, complexity is the difference between (1) the ease and difficulty of measuring the one from within the phenomena issuing from certain layers of the physical process and the data created by us out of the phenomena, and (2) the ease and difficulties of measuring the same in the other.
In any case, this measure-difference of ease and difficulty with respect to the respective layers of the processes can naturally be only of certain layers of activity within the processes, and not of all the layers and kinds of activity in them both. Evidently, in the absence of scale-based comparison, their complexity cannot be termed a high or a low complexity considered within itself. Each such must be compared with at least another such measurementally determined layer/s of process in another system.
3. Extent of Complexity outside and within Complexity
The question arises now as to whether any process under complexity inquiry has other layers of activity arising from within itself, and from within the very layers from which the phenomena have directly issued and have generated the data within the bodily, conscious, and cognitive systems of the subjects and their instruments.
Here the only possible answer is that there is an infinite number of such layers in any finite-content physical processual entity, and within any layer of a process we can find infinite other sub-layers, and between the layers and sub-layers there are finite causal connections, because every existent has parts that are in Extension and Change.
The infinite number of such complexity layers are each arrangeable in a scale of decremental content-strength in such a way that no finite-content process computes up to infinite content-strength. This does not mean that there are no actual differences between any two processes in the complexity of their layers of activity, or in the total activity in each of them.
Again, what I attempt to suggest here is that the measured complexity of anything or of any layer of anything is just a scale-based comparison of the extent of our capacity to discover all the complexity within one process or layer of process, as compared to the same in another process or layer of process.
4. Possible Generalizations of Complexity
Any generalization of processes in themselves concerning their complexity proper (i.e., the extent of our capacity to discover all the complexity within one process or one layer of activities of a process) must now be concluded to be in possession of only the quantitative qualities that never consist of a specific or fixed scale-based number, because the comparison is on a range-scale of ‘more than’ and ‘less than’.
This generalization is what we may at the most be able to identify regarding the complexity within any specific process without any measuremental comparison with another or many others. Non-measuremental comparison is therefore easier and truer in the general sense; and measuremental comparison is more applicable in cases of technical and technological achievements.
The latter need not be truer than the former, if we accept that what is truer must be more general than specific. Even what is said merely of one processual object must somehow be applicable to anything that is of the same nature as the specific processual object. Otherwise, it cannot be a generalizable truth. For this reason, the former seems to be truer than the latter.
Now there are only three possibilities for the said sort of more general truth on comparative complexity: accepting the infinite-finite-zero values as the only well-decidable values. I have called them the Maximal-Medial-Minimal (MMM) values in my work of 2018, namely, Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology.
Seen from this viewpoint, everything physically existent has great processual-structural complexity, and this is neither infinite nor zero, but merely finite – and impossible to calculate exactly, or even at any satisfactory exactitude within a pre-set scale, because (1) the layers of a process that we attempt to compute are but a mere portion of the process as such, (2) each part of each layer has an infinite number of near-infinitesimal parts, and (3) we are not in a position to reach much depth and breadth into all of these at any time.
Hence, the two rationally insufficient conclusions are:
(1) The narrowly empirical-phenomenologically measuremental, thus empirically partially objective, and simultaneously empirically sufficiently subjective amount of complexity (i.e., the extent of our capacity and incapacity to discover all the complexity) in any process by use of a scale-level comparison of two or more processes.
(2) The complexity of entities without having to speak about their existence in every part in Extension-Change and the consequently evident Universal Causality.
This is the empirically highly insulated, physical-ontologically insufficiently realistic sort of concept of complexity that the sciences entertain and can entertain. Note that this does not contradict or decry the technological successes achieved by use of scientific truths. But claiming them to be higher truths on complexity than philosophical truths is unjustifiable.
Now the following question is clearly answerable. What is meant by the amount of complexity that any existent physical process can have in itself? The only possible answer would be that of MMM, i.e., that the complexity within any specific thing is not a comparative affair within the world, but only determinable by comparing the complexity in physical processes with that in the infinitely active and infinitely stable Entity (if it exists) and the lack of complexity in the zero-activity and zero-stability sort of pure vacuum. It can also be made based on a pre-set or conventionalized arithmetic scale, but such cannot give the highest possible truth probability, even if it is called “scientific”.
MMM is the most realistic generalization beyond the various limit possibilities of scale-controlled quantities of our incapacity to determine the amount of complexity in any layer of processes, and without incurring exact numbers, qualifications, etc. The moment a clear measuremental comparison and pinning up the quantity is settled for, it becomes a mere scientific statement without the generality that the MMM realism offers.
Nonetheless, measuremental studies have their relevance in respect of their effects in specific technological and technical circumstances. But it must be remembered that the application of such notions is not directly onto the whole reality of the object set/s or to Reality-in-total, but instead, only to certain layers of the object set/s. Truths at that level do not have long life, as is clear from the history of the sciences and the philosophies that have constantly attempted to limit philosophy with the methods of the sciences.
5. Defining Complexity Closely
Consider any existent process in the cosmos. It is in a state of finite activity. Every part of a finite-content process has activity in every one of its near-infinitesimal parts. This state of having activity within is complexity. In general, this is the concept of complexity. It is not merely the extent of our inability to measure the complexity in anything in an empirical manner.
Every process taken in itself has a finite number of smaller, finite, parts. The parts spoken of here are completely processual. Nothing remains in existence if a part of it is without Extension or without Change. An existent part with finite Extension and Change together is a unit process when the cause part and the effect part are considered as the aspects or parts of the part in question.
Every part of a part has parts making every part capable of being a unit process and in possession of inner movements of extended parts, all of which are in process. This is what I term complexity. Everything in the cosmos is complex. We cannot determine the level of complexity beyond the generalized claim that complexity is normally limited within infinite or finite or zero, and that physical and biological processes in the cosmos come within the finitude-limit.
This also suggests the necessity of combining the philosophical truth about complexity with the scientific concept of it, in order to augment theoretical and empirical-scientific achievements in the future. If the various natures and qualities of complexity, chaos, threshold states, etc. are determined scientifically in a manner not connected to the philosophical concept based on the MMM method of commitment access to values of content and their major pertinents, scientific research will remain at an elementary level – although the present theoretical, experimental, and technological successes may have been unimaginably grand. Empirical advancement must be based on the theoretical.
Constant effort to differentiate anything from anything else strongly, by making differentiations between two or more processes and the procedures around them, is very much part of scientific research. In the procedural thrust and stress related to these, the science of complexity (and all other sciences, sub-sciences, etc.) suffers from a lack of ontological commitment to the existence of the processes in Extension-Change and Universal Causality.
The merely scientific attitude is due to a stark deficit of the most general and deepest possible Categories that can pertain to them, especially to Extension-Change and Universal Causality. Without these, the scientist will tend to work with isolated and specifically determined causal processes and identify the rest as non-causal, statistically causal, or a-causal!
6. Complexity in Consciousness
The above discussion shows that the common concept of complexity is not the foundation on which biological evolution, growth of consciousness, etc. can directly be based. I have plans to suggest a new concept.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
  • asked a question related to Complexity
Question
38 answers
This question is dedicated only to sharing important research of OTHER RESEARCHERS (not our own) about complex systems, self-organization, emergence, self-repair, self-assembly, and other exciting phenomena observed in complex systems.
Please keep in mind that each piece of research has to promote complex systems and help others to understand them in the context of any scientific field. We can educate each other in this way.
Experiments, simulations, and theoretical results are equally important.
Links to videos and animations will help everyone to understand the given phenomenon under study quickly and efficiently.
Relevant answer
Answer
Feasibility study for estimating optimal substrate parameters for sustainable green roof in Sri Lanka
Shuraik A. Kader, Velibor Spalevic & Branislav and Zdenka Dudic
Environment Development and Sustainability 2022(4):1-27
DOI: 10.1007/s10668-022-02837-y
Abstract:
In twenty-first-century buildings, green roof systems are envisioned as a great solution for improving environmental sustainability in urban ecosystems, and they help to mitigate various human health hazards due to climatic pollution. This study determines the feasibility of using five domestic organic wastes, including sawdust, wood bark, biochar, coir, and compost, as sustainable substrates for green roofs, as compared to the classical Sri Lankan base medium (fertiliser + potting mix), in terms of the physicochemical and biological parameters associated with growing mediums. Comprehensive methodologies were devised to determine the thermal conductivity and electric conductivity of growing mediums. According to preliminary experimental results, the most suitable composition for green roof substrates comprised 60% organic waste and 40% base medium. The sawdust growing medium exhibited the highest moisture content and the minimum density magnitudes. The biochar substrate was the best-performing medium, with the highest drought resistance and vegetation growth. The wood bark substrate had the highest thermal resistance. Growing mediums based on compost, sawdust, and coir produced the best results in terms of nitrate, phosphate, pH, and electric conductivity (EC). This study provided a standard set of comprehensive comparison methodologies utilising physicochemical and biological properties required for substrate characterization. The findings of this research work have strong potential to be used in the future for selecting the most suitable lightweight growing medium for a green roof based on stakeholder requirements.
###
This research could save a lot of energy in housing, government, education, and industrial areas. What is your opinion about it?
  • asked a question related to Complexity
Question
4 answers
It would be nice to get reading suggestions addressing the topic of why modelling societies and their dynamic developments may fail. What are the challenges and pitfalls when one attempts to create models that aim to forecast future developments? Economic literature, system dynamics approaches, and predictive social science may address these issues with modelling. I'm looking for good entry points to this discussion.
Relevant answer
Answer
The model shown below is certainly not a failure to represent our social system. However, it is better understood once you see that it is based not on actual people but on what they do. These operational expressions or agencies (which I also call entities in my book) stand for aggregates of similar and related activities, grouped because the activities are commonly shared. Only 6 of these entities are needed to represent all of the macroeconomic activities within a country, not including foreign trade. Ten kinds of transactions are divided between them, using 19 kinds of flows or exchanges of goods and services, etc., for money.
  • asked a question related to Complexity
Question
5 answers
I reviewed many articles about the neurodegeneration of the human brain. Many researchers pointed out that complexity is a key feature in detecting alterations in the brain. However, the characteristic changes that occur in the electrophysiological signals may depend on different factors that are shared by the subjects in the same group (patient or control). Researchers apply statistical tests to show that the obtained results are significant, but I'm not sure whether the results are really related to the proposed research hypothesis (complexity changes with disease). We do not know whether the subjects in one group drink coffee regularly while members of the other group do not. There are many possibilities like this for the people who participated in these experiments.
Now, this is my question;
What methodology can be utilized to ensure our hypothesis is REALLY true or not true at the end of the study? Do you have any suggestions to overcome this specific problem?
Thanks in advance.
Relevant answer
Answer
You have touched on a very tough subject that pervades the whole of biology and medicine: hypothesis creation and testing. Simply said, a combination of complex-systems measures and AI together with machine learning is a very useful tool that can help to address problems similar to yours.
If you are really interested in complexity measures that employ entropy, their application to ECG (EEG is not much different), and the subsequent creation and testing of hypotheses using AI and ML, you can read the methodical paper on prediction of heart arrhythmia. The paper explains everything from first principles and has a rich list of related literature.
After reading it, if you find it useful, it would be possible to continue our discussion, as the subject is really hard to grasp without thinking it all through.
Just one remark: any habits, comorbidities, and other health-related features can easily be incorporated into AI & ML methods. I recommend reading the papers on gut microbiota health/dysbiosis, which I shared in the project "Complexity in Medicine…" That research links gut microbiota dysbiosis to neurodegenerative diseases!
  • asked a question related to Complexity
Question
12 answers
Recently, I have seen that in many papers reviewers are asking authors to provide the computational complexity of the proposed algorithms. I was wondering what the formal way to do that would be, especially for short papers where pages are limited. Please share your expertise regarding the computational complexity of algorithms in short papers.
Thanks in advance.
Relevant answer
Answer
You have to explain the time and space complexity of your algorithm.
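For a short paper, one compact pattern is to state the dominant loop structure and the resulting bounds right next to the algorithm. A minimal illustration (not from any particular paper, just a sketch):

// Illustrative only: linear scan for the maximum of an array.
// Time:  O(n)  -- the loop body runs n - 1 times.
// Space: O(1)  -- a constant number of scalars, independent of n.
static int max(int[] a) {
    int best = a[0];
    for (int i = 1; i < a.length; i++) {
        if (a[i] > best) best = a[i];
    }
    return best;
}

In the text, one line then usually suffices: "The proposed algorithm runs in O(n) time and O(1) auxiliary space, where n is the input length," plus a sentence justifying the dominant term.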
  • asked a question related to Complexity
Question
9 answers
I have two type of resources A and B. The resources are to be distributed (discretely) over k nodes.
the number of resources A is a
the number of resources B is b
resources B should be completely distributed (sum of resources taken by nodes should be b)
resources A may not be completely distributed over the nodes. In fact, we want to reduce the usage of resources A.
Giving resources (A or B) to a node enhances the quality of the node, where the relation is non-linear.
All nodes should achieve a minimum quality.
What is the type of this problem, and how can I find the optimal value?
Relevant answer
Answer
Genetic algorithms never find an optimum. I hope you will use better tools than those. You should know that this forum has basically been hijacked by meta-heuristics enthusiasts; an inflated array of posts at RG are based on the fact that RG "scholars" do not know that there are better tools. So beware!
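Setting tools aside, the problem as stated can be written down compactly (my notation: x_i and y_i are the units of A and B given to node i, q_i the node's quality function, q_min the required minimum; these symbols are not the asker's):

minimize    sum_{i=1..k} x_i                   (use as little of A as possible)
subject to  sum_{i=1..k} y_i = b               (B completely distributed)
            sum_{i=1..k} x_i <= a              (cannot exceed the stock of A)
            q_i(x_i, y_i) >= q_min, i = 1..k   (minimum quality at every node)
            x_i, y_i in {0, 1, 2, ...}         (discrete allocations)

With the q_i non-linear and the variables integer, this is a non-linear integer (resource-allocation) program. If each q_i is increasing and concave, exact approaches such as marginal allocation or dynamic programming over the budget apply; otherwise general MINLP branch-and-bound solvers are the standard route.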
  • asked a question related to Complexity
Question
5 answers
Problem description:
In socio-technical systems, an idea for a technological initiative can emerge, and different groups can organize around it. Each group, little by little, organizes spontaneously based on common interests and shared values (including ethics), around an idea of progress and potential benefits that is sometimes vague.
Sometimes those groups start to interact with each other, and at a certain point of development a macro context starts to be needed in order to meet the necessities of the society.
Lately, despite the potential social benefits of the new technological initiative, the political body does not create the institutional conditions for the development of new regulation and public policy (this is what I call the macrosystem). So the socio-technological initiative does not thrive.
Some of the hypothesis about why this issue is happening are:
1) Politicians take no care of, or interest in, the possibilities of the new technology and initiative.
2) Politicians see the new technology as a threat to their own power.
3) Politicians want to take control of the different technical groups' resources and assets, but not their values and real purpose, because they want more power for themselves.
4)...
In consequence, the work done by the different technical groups will never be as organized and coordinated as is required by a common purpose that meets societal necessities.
What I want to do is describe the problem in terms of the interaction of the technological working groups (the system) and the political and policy level (the macrosystem).
Do you know of a systemic theoretical framework that can help me analyse and describe this problem and its dynamics?
Relevant answer
Answer
It is recommended to follow one quite useful part of complex-systems modelling methodology: agent-based modelling. There might be different types of agents, some in a few copies (like politicians), others in larger numbers (like society cliques). Engagement rules can be defined among the different types of agents.
Dirk Helbing definitely has research on societal interactions. The seminal book on agent-based modelling is Ilachinski's Artificial War, which explains how to program agents to achieve their desired behavior.
Entropy measures can be used to classify the mode of behavior of the system. Each system undergoing a phase transition expresses it in changes of entropy, even when globally there is nothing substantial to observe!
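To make the suggestion concrete, here is a minimal, self-contained sketch of the agent-based pattern described above, with two agent types and one simple engagement rule. All names, numbers, and the update rule are illustrative assumptions, not a model of any real polity:

import java.util.Random;

// Minimal agent-based sketch: a few "politician" agents and many "group" agents.
// Each step, a random group meets a random politician; support drifts toward the
// politician's openness, and the politician is nudged slightly in return.
public class MiniAbm {
    static final int POLITICIANS = 3, GROUPS = 100, STEPS = 10_000;

    public static void main(String[] args) {
        Random rng = new Random(42);
        double[] openness = new double[POLITICIANS]; // politician's openness to the initiative
        double[] support = new double[GROUPS];       // group's support for the initiative
        for (int p = 0; p < POLITICIANS; p++) openness[p] = rng.nextDouble();
        for (int g = 0; g < GROUPS; g++) support[g] = rng.nextDouble();

        for (int t = 0; t < STEPS; t++) {
            int p = rng.nextInt(POLITICIANS);
            int g = rng.nextInt(GROUPS);
            // Engagement rule (illustrative): asymmetric mutual influence.
            support[g] += 0.10 * (openness[p] - support[g]);
            openness[p] += 0.01 * (support[g] - openness[p]);
        }

        double mean = 0;
        for (double s : support) mean += s;
        System.out.printf("mean group support after %d steps: %.3f%n", STEPS, mean / GROUPS);
    }
}

Sweeping the interaction rates and watching summary statistics (or an entropy measure over the support distribution, per the last paragraph above) is then straightforward.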
  • asked a question related to Complexity
Question
12 answers
If multiple deep learning (DL) algorithms are merged to create models, then the system will be complex.
To analyze this, how does one calculate the complexity?
Is there any formal way or mathematical proof to analyze this kind of complexity of DL models?
Thanks in advance.
  • asked a question related to Complexity
Question
4 answers
We are developing a test for ad-hoc (ad-hoc) and scalar implicatures (SI) and are showing 3 images (of similar nature) to the participants: image, image with 1 item, image with 2 items.
E.g. a plate with pasta, a plate with pasta and sauce, and a plate with pasta, sauce and meatballs.
A question for an ad-hoc is: My pasta has meatballs, which is my pasta?
A question for an SI is: My pasta has sauce or meatballs, which is my pasta? (Pasta with sauce is the target item, since we are testing pragmatic implicatures, where 'or' means 'not both'.)
The item that causes many difficulties in making up questions is the image without any items, i.e. the plate with plain pasta. How do we phrase the question so that it elicits this image as a target response, without using overly complex syntax?
Negation ("My plate has no sauce or meatballs", "My plate has only pasta, no sauce and no meatballs") seems like a complex structure to introduce as a counterbalance to the other types of items.
Has anyone tested something similar, without negation? We would be grateful for any kind of tips and hints.
Relevant answer
Answer
Could you just say: my plate has plain pasta?
  • asked a question related to Complexity
Question
30 answers
Antigens are processed by specialized macrophages to produce complex protein-RNA complexes that eventually produce iRNA. When this iRNA is introduced to primary B cells that have never seen the antigen, they produce specific antibodies to that antigen. The macrophage-produced RNA is incorporated into the genome of these B cells by reverse transcriptase; they now become "memory cells" capable of a secondary response when confronted with the antigen they had never come into contact with before. Reverse translation in the macrophage is the best explanation for the production of such specific iRNA.
Relevant answer
Answer
Yunus Shukor Thanks.
Indeed, between (a) a choreographed dance of hypermutations and recombination, gene splicing and translocation of gene segments to generate an enormous variety of VxDxJxC expressions for an antibody, and (b) the existence of a reverse translatase that would process the exposed epitope of an antigen and encode an RNA segment for the hypervariable segment of an antibody, nature and Occam's Razor, I feel, would pick the latter.
Thank you, M. Y. Shukor.
  • asked a question related to Complexity
Question
10 answers
We all know that poor hygiene within dense populations is a breeding ground for diseases. On the other hand, we know that some kind of medium level of exposure to infectious agents on a daily basis is strengthening immunity and keeps it alert.
The big question is what happens when we start to use sanitizers en masse within whole populations?
Hospitals have special procedures aiming at the rotation of chemically-based sanitizers to avoid the rise and promotion of drug-resistant bacteria and viruses.
How is this problem addressed within whole populations? This seems to be important to know right now, during the COVID-19 outbreak.
Relevant answer
Answer
Dear Jiri,
you posted an intriguing question. Although preventing drug resistance is reasonable and scientifically sound, this very moment is delicate and we have to fully enforce hand sanitizing. The whole world population is now in a really dangerous time; the scientific community has to promote public hygiene as hard as possible. In my opinion, sanitizer rotation is a very clever way to manage this emergency, and it should be encouraged.
Good luck with your work,
Dave
  • asked a question related to Complexity
Question
4 answers
How does one estimate the Big O complexity of feature selection for filter methods based on mutual information (such as mRMR, JMIM)?
Relevant answer
Answer
Aparna Sathya Murthy Thanks so much for your helpful answer. But counting the number of loops is not as straightforward as in this paper (attached snapshot).
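For what it is worth, the loop structure of a greedy mRMR-style selector can still be written down and counted directly. A sketch on pre-discretized data (the class and method names, the data layout, and the simple histogram MI estimator are my assumptions, not taken from the mRMR/JMIM papers; it assumes non-negative integer bins and k <= n):

import java.util.*;

public class MrmrSketch {

    // MI between two discrete series of length N: O(N) to histogram the pairs,
    // plus O(#distinct pairs) to sum. Assumes non-negative integer bin labels.
    static double mutualInformation(int[] a, int[] b) {
        int n = a.length;
        Map<Long, Integer> joint = new HashMap<>();
        Map<Integer, Integer> pa = new HashMap<>(), pb = new HashMap<>();
        for (int i = 0; i < n; i++) {
            joint.merge(((long) a[i] << 32) | (b[i] & 0xffffffffL), 1, Integer::sum);
            pa.merge(a[i], 1, Integer::sum);
            pb.merge(b[i], 1, Integer::sum);
        }
        double mi = 0;
        for (Map.Entry<Long, Integer> e : joint.entrySet()) {
            double pab = e.getValue() / (double) n;
            double px = pa.get((int) (e.getKey() >> 32)) / (double) n;
            double py = pb.get(e.getKey().intValue()) / (double) n;
            mi += pab * Math.log(pab / (px * py));
        }
        return mi;
    }

    // Greedy selection of k of n features (features[f] is one column of N samples).
    // As written: O(k^2 * n) MI calls, each O(N), i.e. O(k^2 * n * N) overall;
    // caching MI against already-selected features brings it down to O(k * n * N).
    static List<Integer> selectMrmr(int[][] features, int[] labels, int k) {
        int n = features.length;
        List<Integer> selected = new ArrayList<>();
        boolean[] used = new boolean[n];
        for (int step = 0; step < k; step++) {          // k rounds
            int best = -1;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (int f = 0; f < n; f++) {               // n candidates per round
                if (used[f]) continue;
                double relevance = mutualInformation(features[f], labels);
                double redundancy = 0;
                for (int s : selected)                  // up to k-1 MI calls
                    redundancy += mutualInformation(features[f], features[s]);
                double score = relevance - (selected.isEmpty() ? 0 : redundancy / selected.size());
                if (score > bestScore) { bestScore = score; best = f; }
            }
            used[best] = true;
            selected.add(best);
        }
        return selected;
    }
}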
  • asked a question related to Complexity
Question
9 answers
I know how to dock a chemical compound with a protein, but I need a server for docking metal complexes.
Relevant answer
Answer
Now I am able to do that with two programs, MOE and Molegro.
  • asked a question related to Complexity
Question
6 answers
When I compute the time complexity of ciphertext-policy attribute-based encryption (CP-ABE), I find it to be O(1) by tracing each step in the code, which is mostly assignment operations. Is it possible that the time complexity of CP-ABE is O(1), or do I have a problem? The code that I used is the following, where ITERS = 1.
// Outer timing wrapper: measures one call to DETABECipher.encrypt with System.nanoTime().
public static List encrypt(String policy, int secLevel, String type, byte[] data, int ITERS) {
    double results[] = new double[ITERS];
    DETABECipher cipher = new DETABECipher();
    long startTime, endTime;
    List list = null;
    for (int i = 0; i < ITERS; i++) {
        startTime = System.nanoTime();
        list = cipher.encrypt(data, secLevel, type, policy);
        endTime = System.nanoTime();
        results[i] = (double) (endTime - startTime) / 1000000000.0;
    }
    return list;
}

// Hybrid encryption: AES for the payload, ABE for the symmetric key.
public List encrypt(byte abyte0[], int i, String s, String s1) {
    AccessTree accesstree = new AccessTree(s1);
    if (!accesstree.isValid()) {
        System.exit(0);
    }
    PublicKey publickey = new PublicKey(i, s);
    if (publickey == null) {
        System.exit(0);
    }
    AESCipher.genSymmetricKey(i);
    timing[0] = AESCipher.timing[0];
    if (AESCipher.key == null) {
        System.exit(0);
    }
    byte abyte1[] = AESCipher.encrypt(abyte0);
    ABECiphertext abeciphertext = ABECipher.encrypt(publickey, AESCipher.key, accesstree);
    timing[1] = AESCipher.timing[1];
    timing[2] = ABECipher.timing[3] + ABECipher.timing[4] + ABECipher.timing[5];
    long l = System.nanoTime();
    LinkedList linkedlist = new LinkedList();
    linkedlist.add(abyte1);
    linkedlist.add(AESCipher.iv);
    linkedlist.add(abeciphertext.toBytes());
    linkedlist.add(new Integer(i));
    linkedlist.add(s);
    long l1 = System.nanoTime();
    timing[3] = (double) (l1 - l) / 1000000000D;
    return linkedlist;
}

// AES pass over the plaintext.
public static byte[] encrypt(byte[] paramArrayOfByte) {
    if (key == null) {
        return null;
    }
    byte[] arrayOfByte = null;
    try {
        long l1 = System.nanoTime();
        cipher.init(1, skey);
        arrayOfByte = cipher.doFinal(paramArrayOfByte);
        long l2 = System.nanoTime();
        timing[1] = ((l2 - l1) / 1.0E9D);
        iv = cipher.getIV();
    } catch (Exception localException) {
        System.out.println("AES MODULE: EXCEPTION");
        localException.printStackTrace();
        System.out.println("---------------------------");
    }
    return arrayOfByte;
}

// ABE pass: pairing-group operations plus secret sharing over the access tree.
public static ABECiphertext encrypt(PublicKey paramPublicKey, byte[] paramArrayOfByte, AccessTree paramAccessTree) {
    Pairing localPairing = paramPublicKey.e;
    Element localElement1 = localPairing.getGT().newElement();
    long l1 = System.nanoTime();
    localElement1.setFromBytes(paramArrayOfByte);
    long l2 = System.nanoTime();
    timing[3] = ((l2 - l1) / 1.0E9D);
    l1 = System.nanoTime();
    Element localElement2 = localPairing.getZr().newElement().setToRandom();
    Element localElement3 = localPairing.getGT().newElement();
    localElement3 = paramPublicKey.g_hat_alpha.duplicate();
    localElement3.powZn(localElement2);
    localElement3.mul(localElement1);
    Element localElement4 = localPairing.getG1().newElement();
    localElement4 = paramPublicKey.h.duplicate();
    localElement4.powZn(localElement2);
    l2 = System.nanoTime();
    timing[4] = ((l2 - l1) / 1.0E9D);
    ABECiphertext localABECiphertext = new ABECiphertext(localElement4, localElement3, paramAccessTree);
    ShamirDistributionThreaded localShamirDistributionThreaded = new ShamirDistributionThreaded();
    localShamirDistributionThreaded.execute(paramAccessTree, localElement2, localABECiphertext, paramPublicKey);
    timing[5] = ShamirDistributionThreaded.timing;
    return localABECiphertext;
}

public ABECiphertext(Element element, Element element1, AccessTree accesstree) {
    c = element;
    cp = element1;
    cipherStructure = new HashMap();
    tree = accesstree;
}

// Threaded Shamir secret-sharing over the access tree (one share per attribute).
public void execute(AccessTree accesstree, Element element, ABECiphertext abeciphertext, PublicKey publickey) {
    pairing = publickey.e;
    ct = abeciphertext;
    PK = publickey;
    countDownLatch = new CountDownLatch(accesstree.numAtributes);
    timing = 0.0D;
    double d = System.nanoTime();
    Thread thread = new Thread(new Distribute(abeciphertext, accesstree.root, element));
    thread.start();
    try {
        countDownLatch.await();
        long l = System.nanoTime();
        timing = ((double) l - d) / 1000000000D;
        synchronized (mutex) {
        }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
Relevant answer
Answer
That's a hardware issue and nothing else. Best, T.T.
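A complementary point: with ITERS = 1 and a single fixed input, any measurement collapses to one constant, so tracing it will always "look" O(1). Complexity has to be stated as a function of input size; here the natural axes are the plaintext length (for the AES pass) and the number of attributes in the policy (for the ABE pass). A sketch of an empirical scaling probe (DETABECipher and the encrypt signature are taken from the code above; the security level 128, the type string "AES", the placeholder policy, and the doubling loop are my assumptions):

// Hypothetical scaling probe: double the plaintext size and watch how wall-clock
// time grows. Flat -> constant; roughly 2x per step -> linear in the data length.
public class ScalingProbe {
    public static void main(String[] args) {
        DETABECipher cipher = new DETABECipher();   // class from the question's code
        String policy = "...";                      // supply a policy valid for your setup
        for (int size = 1 << 10; size <= 1 << 20; size <<= 1) {
            byte[] data = new byte[size];
            long t0 = System.nanoTime();
            cipher.encrypt(data, 128, "AES", policy);   // secLevel and type assumed
            long t1 = System.nanoTime();
            System.out.printf("%8d bytes: %.3f ms%n", size, (t1 - t0) / 1e6);
        }
    }
}

In hybrid CP-ABE schemes like this one, the AES pass should grow roughly linearly with the data length, while the ABE pass grows with the size of the access policy rather than the data; varying both axes separately makes that visible.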
  • asked a question related to Complexity
Question
1 answer
Creative destruction rages on: the bureaucratic, the inflexible, and the slow are disappearing in record numbers. The average lifespan of a Fortune 500 company is less than 20 years, down from 60 years in the 1950s, and is forecast to shrink to 12 years by 2027. If organizational leadership is—at least in part—about identifying what needs to be done, creating vision, setting direction, and guiding people toward that, one should query whether the leadership styles that developed yesteryear (e.g., authentic, authoritarian, democratic, laissez-faire, paternalistic, servant, transactional, transformational, etc.) still contribute to the long-term success of collective effort in more open systems. Given the volatility, uncertainty, complexity, and ambiguity of the environment, understanding how to lead organizations in the Age of Complexity is urgent and important to societies, economies, and governments.
Relevant answer
Answer
Just be a dictatorship.
  • asked a question related to Complexity
Question
3 answers
Volatility, uncertainty, complexity, and ambiguity call for different ways of perceiving the world, different approaches to sense and decision making, and different modes and combinations of leadership.
  • Administrative Leadership. Administrative leadership is the managerial approach followed by individuals and groups in formal roles as they plan and coordinate activities in standardized business processes to accomplish organizationally-prescribed outcomes efficiently and effectively.
  • Adaptive Leadership. Adaptive leadership is the informal process that emerges as organizations generate and advance ideas to solve problems and create opportunity; unlike administrative leadership, it is not an act of authority and takes place in an informal, emergent dynamic among interactive agents.
  • Enabling Leadership. Enabling leadership is the sum of actions to facilitate the flow of creativity (e.g., adaptability, innovation, and learning) from adaptive structures into administrative structures; like adaptive leadership, it can take place at all levels of an organization but its nature will vary by hierarchical level and position. (Uhl-­Bien, Marion, & McKelvey, 2007)
Relevant answer
Answer
Seems that corporate culture must also be considered in order to answer the question. Culture can have a synergistic or countervailing effect on the effectiveness of a given style of leadership.
One way culture could have an effect is in how change or threat is perceived by the organization. Is it seen as an opportunity? Does it trigger post-traumatic flashbacks of past events? Does it open or close the organizational dialog and operations?
I suppose I've added more questions rather than answered yours. But thank you for the provocative question.
Unamuno reportedly said: The truth is not merely the lawful, but rather that which provokes the mind and causes growth.
  • asked a question related to Complexity
Question
3 answers
If we want to compare the performance of two classifiers based on time-measurement criteria, what is the best way?
Is there any existing methodology?
  • asked a question related to Complexity
Question
7 answers
"Mistake 2: Not building flexible career capital that
will be useful in the future" (80000hours.org Career guide). My question: What are the reasons behind this mistake?
--
Background
I'm trying to understand what reasons lead people to this mistake. Maybe after we understand the reasons, we will realize that they're not mistakes after all. Or if we still believe they're mistakes, we'll be much better able to solve them.
Here are some general reasons:
1. Trade-offs: Advice often neglects to address the trade-off that comes with it: for example, "be flexible" ignores the disadvantages of being flexible and the advantages of being "inflexible" (keeping your eye on the goal, avoiding distractions, persistence, etc.), and vice versa with persistence advice like "never give up".
2. Unclear evidence or debatable positions
Often contrary or seemingly contrary positions both have evidence.
Do we underestimate or overestimate the differences between us and others? The "False Consensus Effect" suggests that we underestimate, while the "Fundamental Attribution Error" can imply that we overestimate the role of personal differences.
--
So even though the position behind the advice has evidence, it can also be true that the position contrary to the advice has evidence too.
3. Lack of knowledge or effort.
4. More pressing issues
The question then becomes: does the advice that comes with "Not building flexible career capital that will be useful in the future" suffer from general reasons 1+2?
Here are the sub-mistakes of the main mistake:
Some of the reasons that cause people to fall into this mistake (based on or influenced by the 80k section, though not exactly as they put it in all points):
1* Short-term thinking.
2* Not giving career choices enough thought (e.g. English PhD sounds nice so I’m just going to go with it).
3* Underestimating soft skills: not investing in transferable skills that can be used in any job, like
A- Deep Work by Cal Newport. The example given in the 80k career guide is writing your daily priorities. I would prefer something like "how to avoid distraction".
B-Learning how to learn
C- Rationality
4* Lack of awareness about automation threat.
5* Inaccurate predictions about one's future interests/opportunities in the chosen career (e.g., the "End of History Illusion").
So, for example, for 5*, it could be the case that general reason 2, "unclear evidence", is implicated: it could be (and I don't know that it is) that, in contrast to the "End of History Illusion", there is a group of personality theorists who claim that we under-estimate how stable our personality is. Or for 3*, general reason number 1, "trade-offs", is implicated. For example (and again I don't know), it could be the case that the more you focus on developing general skills like "learning how to learn", the less competitive you become in non-transferable technical skills, because you have less time to focus on them.
Relevant answer
Answer
Christopher Steinman
Very good points! I want to add them to the blog post as a comment. Are you ok with that? I'll quote you, of course.
  • asked a question related to Complexity
Question
2 answers
Hi,
I'm preparing a PhD proposal to study an egocentric network in a primary health care setting in a low-middle-income country, fed by a name-generator survey. I have 2 questions:
1. Any suggestions on the minimum sample size needed to maintain validity?
2. What is the estimated time to do an (egocentric) social network analysis of a sample size of X?
Any suggestions, references?
Many thanks!
Virginia
Relevant answer
Thanks Víctor, will try to check it in the future.
  • asked a question related to Complexity
Question
13 answers
By analysis we mean that we are studying existing algorithms: examining their features and applications, performance analysis, performance measurement, studying their complexity, and improving them.
Relevant answer
Answer
Time and space consumption during program execution.
  • asked a question related to Complexity
Question
1 answer
What would be the generation time of E. coli growing at 37 degree C in complex medium?
It takes 40 minutes for a typical E. coli cell to completely replicate its chromosome. Simultaneously with the ongoing replication, 20 minutes of a fresh round of replication is completed before the cell divides. What would be the generation time of E. coli growing at 37 °C in complex medium?
A. 20 min    B. 40 min
C. 60 min    D. 30 min
Relevant answer
Answer
Maybe this article helps. In my opinion, anything below 30 min is insufficient even in complete medium.
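For the textbook version of the calculation (a sketch of the standard overlapping-rounds reasoning; whether such rates are realistic in rich medium is a separate question, as noted above):

Chromosome replication takes C = 40 min. The question states that, at the moment of division, a fresh round of replication is already 20 min along. That round therefore needs another 40 - 20 = 20 min to finish, at which point the next division can occur. Successive divisions are thus spaced 20 min apart, giving a generation time of 20 min (option A): shorter than C, which is only possible because rounds of replication overlap (multifork replication).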
  • asked a question related to Complexity
Question
1 answer
I have built the neurites using the 'Run' dropdown option.
I have tried to 'set the order' of the neurites (as 0, with propagate) and used the neurite tracer to set the soma (this appears red).
I have also tried 'set 1-selected as origin' from the Edit dropdown, but it still says there are multiple origins in the model.
Relevant answer
Answer
I suppose the ANFIS method could be a good idea for Sholl analysis with Z-stack 3D images. I have used ANFIS for some similar work. You can take a glance at it.
  • asked a question related to Complexity
Question
8 answers
Dear friends and professors, I have a question. In fact, this is a question asked by a reviewer, and I have to address his/her concern.
I have coded a MIP model in GAMS and applied CPLEX as an exact method to solve instances. I am able to solve a relatively large-scale instance with about 23,000 decision variables and 4,000 constraints in less than a minute. I would like to ask for your clarification: how would it be possible to solve such a large-scale instance in less than one minute? Is there any specific reason?
His/her concern: "The model is really complex with thousands of constraints. I have difficulty in imaging that the exact method needs few seconds..."
I want to express my appreciation for your efforts in advance. In addition, I want to thank you for the privilege of your time.
Relevant answer
Answer
The time taken to solve a MILP depends not only on the size, but also on the gap between the LP bound and the optimum, and the proportion of the basic feasible LP solutions that are fractional. I have encountered MILPs with tens of thousands of variables that solve in a minute, and ones with only a few hundred variables that take hours.
  • asked a question related to Complexity
Question
4 answers
I’m currently involved in a research project that is related to Highly Integrative Questions (HIQ’s).
To define the landscape of those "next-level client questions", we initiated research into the following:
How to define HIQ’s?
How to approach HIQ's?
What are cases that relate to HIQ’s?
How can we learn from those cases?
What kind of guidance and facilitation are needed in the process?
Some buzzwords: Complexity Theory, Integrative Thinking, Social Innovations
Relevant answer
Answer
@everyone thanks for the answers. This is exactly what our research is looking for.
Understanding highly integrative questions and complex systems is a prerequisite for innovating new services and products! I'm fascinated by the Cynefin framework, because it shows different levels on how to approach innovations with systemic lenses...
@Brian I'm not really into deductive and inductive reasoning.
Integrative thinking, in simple words, is making a synthesis of A and B. It requires an ability to think in tensions for the synthesis. A good example can be found in the emergence of the Linux company "Red Hat" and the thinking of their CEO Bob Young. It is also worth looking at the work of Roger Martin.
  • asked a question related to Complexity
Question
3 answers
In the reaction mixture, how can the excess triethylamine be removed?
Relevant answer
Answer
It depends on the solubility of the compound you are trying to isolate and on the protonation state of the NEt3. If it is deprotonated, you can use vacuum (the b.p. is about 90 °C) or rinse with a non-polar solvent like ether. If it is protonated, you can wash with water, methanol, ethanol, etc.
  • asked a question related to Complexity
Question
2 answers
If a gold(III) solution is preferred, then how does one prepare a 0.05 M solution in 30 ml?
Relevant answer
Answer
Typically, for Au(III) chemistry, people use K[AuCl4] as the gold source. This salt is soluble in water and other polar solvents. Of course, it is difficult to suggest alternatives without knowing a bit more about the particular reaction that you want to carry out.
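If K[AuCl4] is the source, the arithmetic for the solution asked about is straightforward (a sketch; check the actual formula weight and purity on your own bottle, since hydrates differ):

moles needed = 0.05 mol/L x 0.030 L = 1.5 x 10^-3 mol
M(K[AuCl4]) ~ 39.10 + 196.97 + 4 x 35.45 ~ 377.9 g/mol
mass ~ 1.5 x 10^-3 mol x 377.9 g/mol ~ 0.57 g,

dissolved and made up to 30 ml with the chosen solvent.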
  • asked a question related to Complexity
Question
1 answer
We are trying to measure the concentration of Fe(III)EDTA complex which is converted to Fe(II)EDTA complex after a reaction. So, by this measurement, we will be able to calculate the extent of the reaction.
Relevant answer
Answer
Please see the link below:
Chelatometric Determination of Ferrous Iron with 2-Pyridinealdoxime as an Indicator, Journal of Pharmaceutical Sciences, September 1963, Volume 52, Issue 9, pages 858–860.
Another interesting PDF is enclosed...
  • asked a question related to Complexity
Question
5 answers
I want to know how to find papers about algorithm complexity, for methods such as Genetic Programming, the Kalman filter, the particle filter, and so on. I can't find papers about it. Thanks.
Relevant answer
Answer
Given the usual choices (point mutation, one-point crossover, roulette-wheel selection), a genetic algorithm's complexity is O(g(nm + nm + n)), with g the number of generations, n the population size, and m the size of the individuals. Therefore the complexity is on the order of O(gnm).
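To see where those factors come from, here is a bare-bones generational GA skeleton with the per-phase costs marked in comments (everything here, including the count-the-ones fitness, is an illustrative placeholder):

import java.util.Random;

// Per generation: fitness evaluation O(n*m), reproduction (one-point crossover
// plus point mutation) O(n*m). Linear-scan roulette costs O(n) per draw, i.e.
// O(n^2) per generation as written; a cumulative table reduces it toward the
// O(n) selection term in the formula above. For m >= n the O(n*m) terms
// dominate either way, giving O(g*n*m) over g generations.
public class GaSkeleton {
    static final Random RNG = new Random(1);

    // Placeholder fitness: count the ones. O(m) per individual.
    static int fitness(boolean[] ind) {
        int f = 0;
        for (boolean bit : ind) if (bit) f++;
        return f;
    }

    // Roulette-wheel selection by linear scan: O(n) per draw.
    static int roulette(int[] fit, int total) {
        int r = RNG.nextInt(Math.max(total, 1)), acc = 0;
        for (int i = 0; i < fit.length; i++) {
            acc += fit[i];
            if (acc > r) return i;
        }
        return fit.length - 1;
    }

    public static void main(String[] args) {
        int g = 100, n = 50, m = 64;   // generations, population size, individual size
        boolean[][] pop = new boolean[n][m];
        for (boolean[] ind : pop)
            for (int j = 0; j < m; j++) ind[j] = RNG.nextBoolean();

        for (int gen = 0; gen < g; gen++) {        // g generations
            int[] fit = new int[n];
            int total = 0;
            for (int i = 0; i < n; i++) {          // evaluation: O(n*m)
                fit[i] = fitness(pop[i]);
                total += fit[i];
            }
            boolean[][] next = new boolean[n][m];
            for (int i = 0; i < n; i++) {          // reproduction: O(n*m)
                boolean[] a = pop[roulette(fit, total)];
                boolean[] b = pop[roulette(fit, total)];
                int cut = RNG.nextInt(m);          // one-point crossover
                for (int j = 0; j < m; j++) next[i][j] = (j < cut ? a[j] : b[j]);
                if (RNG.nextDouble() < 0.1)        // point mutation, small probability
                    next[i][RNG.nextInt(m)] ^= true;
            }
            pop = next;
        }

        int best = 0;
        for (boolean[] ind : pop) best = Math.max(best, fitness(ind));
        System.out.println("best final fitness: " + best);
    }
}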
  • asked a question related to Complexity
Question
2 answers
Thorin (IUPAC: Disodium 3-hydroxy-4-[(2-arsonophenyl)diazenyl]naphthalene-2,7-disulfonate)
Relevant answer
Answer
Thorin is a tridentate ligand; barium can coordinate two of them in an octahedral coordination. For discussion of tridentate azo compounds, see Comprehensive Coordination Chemistry, ed-in-c. Sir G. Wilkinson, Pergamon, 1987, vol. 6, Section 58.2.3 pp.46ff.
  • asked a question related to Complexity
Question
3 answers
I am looking for a library to be integrated in LabWINDOWS CVI.
  • asked a question related to Complexity
Question
2 answers
When I want to measure the emission wavelength of a solution of a molecule with a spectrofluorometer, I need to enter a value for the excitation wavelength.
Now, how can I determine and measure the excitation wavelength of a solution of a complex or molecule?
Relevant answer
Answer
Yuri Kalambet 
Thanks so much for the reply.
  • asked a question related to Complexity
Question
1 answer
I need it for managing complex events for a transportation project.
Relevant answer
Answer
We are using Odysseus (developed by researchers at the University of Oldenburg), a data-stream management and processing engine, for real-time analysis and feedback mechanisms. It is freely available and generates high-order information adequately. It also provides flexible processing concepts and efficient resource-management tools.
  • asked a question related to Complexity
Question
2 answers
And it is becoming more and more complex with time.
  • asked a question related to Complexity
Question
5 answers
Soil as a reference is very complex and needs to be simplified for experimental purposes. What could be a true substitute for this soil medium?
Relevant answer
Answer
Very valid point, Dr Senapati. I don't think there is any substitute for soil, considering all the complexities involved in unraveling the adsorption and exchange chemistry associated with soil fertility dynamics and plant nutrition. Another reason, I strongly feel, is that our commercial crop-growing medium is undoubtedly soil; studying crop behavior without soil will again put agricultural practitioners in some state of dilemma about dos and don'ts. However, for experimental purposes, we can ease out the complexities...
  • asked a question related to Complexity
Question
7 answers
Human thought works in a similar way wherever it is
Relevant answer
Answer
Ismail Noori Mseer Sir,
culture is the full range of learned human behavior patterns. Culture is also defined in terms of intercultural communication: it adds to the notion of communicative competence and enlarges it to incorporate intercultural competence. It builds understanding of one's own and others' cultural traditions, values and beliefs. It involves processes that may lead to an enhanced ability to move between cultures and to cultural change. As a language teacher, I view culture as the customs, traditions or practices that people carry out as part of their everyday lives. Culture refers to knowledge and skills that are more generalized in nature and transferable across cultures. This body of knowledge includes, among other things, the concept of culture, the nature of cultural adjustment and learning, the impact of culture on communication and interaction between individuals or groups, the stress associated with intense culture and language immersion, coping strategies for dealing with stress, the role of emotions in cross-cultural, cross-linguistic interactions, and so forth.
  • asked a question related to Complexity
Question
6 answers
It seems that the human brain uses hierarchy, among other tools, to reduce complexity when analyzing information. What is the neurological basis of this fact?
Relevant answer
Answer
For example, chunking. Whenever a pattern of symbols repeats, it can be replaced by a single symbol. If you see a phone number 0 1 2 3 4 5 6 7 8 9 7 3, you can easily remember it although the number of symbols seems to exceed working-memory capacity. With the help of repeated chunking you get a tree-like data structure.
Regards,
Joachim
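A toy rendering of that chunking idea in code: a byte-pair-style pass on a made-up repeating sequence (the phone number above happens to contain no repeated adjacent pair, so a different toy input is used). The nesting of chunk symbols inside chunk symbols is what yields the tree-like structure; this is my own minimal sketch, not a cognitive model:

import java.util.*;

// Repeatedly replace the most frequent adjacent pair of symbols with a fresh
// chunk symbol. Chunks can contain chunks, so the rule table forms a tree.
public class Chunker {
    public static void main(String[] args) {
        List<String> seq = new ArrayList<>(Arrays.asList("1","2","1","2","1","2","3"));
        Map<String, String> rules = new LinkedHashMap<>();
        int next = 0;
        while (true) {
            // Count adjacent pairs.
            Map<String, Integer> counts = new HashMap<>();
            for (int i = 0; i + 1 < seq.size(); i++)
                counts.merge(seq.get(i) + " " + seq.get(i + 1), 1, Integer::sum);
            // Pick the most frequent repeating pair; stop if nothing repeats.
            String best = null;
            for (Map.Entry<String, Integer> e : counts.entrySet())
                if (e.getValue() > 1 && (best == null || e.getValue() > counts.get(best)))
                    best = e.getKey();
            if (best == null) break;
            String chunk = "C" + (next++);
            rules.put(chunk, best);
            // Replace every non-overlapping occurrence of the pair.
            List<String> out = new ArrayList<>();
            for (int i = 0; i < seq.size(); i++) {
                if (i + 1 < seq.size() && (seq.get(i) + " " + seq.get(i + 1)).equals(best)) {
                    out.add(chunk);
                    i++;
                } else {
                    out.add(seq.get(i));
                }
            }
            seq = out;
        }
        System.out.println("compressed: " + seq + "  rules: " + rules);
    }
}

On the toy input this prints compressed: [C1, C0, 3] with rules {C0=1 2, C1=C0 C0}: C1 contains C0, which contains raw symbols, i.e. the tree Joachim describes.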
  • asked a question related to Complexity
Question
3 answers
Hi all, I want to get single crystals of my Ru(II) complexes. I really want to know how, for this type of complex.
Relevant answer
Answer
Hi Sir, you can try a solvent mixture of DCM (dichloromethane) and acetonitrile, or acetone and acetonitrile, but you have to make sure the evaporation process is slow so that the crystals grow thick.
  • asked a question related to Complexity
Question
1 answer
Hi everyone,
I'm looking for an experimental task which can be altered for two conditions: high vs. low complexity/difficulty.
Paper-pencil as well as electronic is possible (ideally it is free of charge and easy to implement).
I'm looking forward to your suggestions and thank you all so much in advance!
Cheers,
mp
Relevant answer
Answer
Try to play with relations by adding/removing. Most daily tasks lose/gain complexity when a relation between objects is removed/added (if you are not bound by classic vs. technical).
  • asked a question related to Complexity
Question
1 answer
Hi,
A triple quantum system is a complex case. I have worked on such a system, but I still need more opinions, and I want to share what I have learned with any interested researchers to strengthen our backgrounds in this field.
Relevant answer
Answer
I think that a single quantum system is enough. You need to talk to Erik Verlinde. He is promoting information in primordial qubits as the thing that got entangled and caused the universe and everything. It is a pity he had to put acceleration in by hand, though.
  • asked a question related to Complexity
Question
17 answers
Dear colleagues,
It has been proposed that no single statistical measure can be used to assess the complexity of physiologic systems. Furthermore, it has been demonstrated that many entropy-based measures are "regularity" statistics, not direct indexes of physiologic "complexity". Finally, it has been stressed that increased/decreased irregularity does not imply increased/decreased physiologic complexity. However, it is common to keep finding interpretations such as "a decreased/increased entropy value of beat-to-beat interval series (RR sequences) reflects a decreased/increased complexity of heart rate variability", and even more so "this reflects a decreased/increased complexity of the cardiovascular control". So, which entropy-based measures actually quantify time-series complexity? Moreover, is it appropriate to interpret that, because of a decreased/increased complexity in heart rate variability, there is a decreased/increased complexity of the cardiovascular control?
Thanks in advance,
Claudia
Relevant answer
Dear Claudia,
Sorry for posting in your topic belatedly. However, this is a quite interesting topic and I am also looking to deepen my understanding. Here is my humble opinion.
The theory of physiologic complexity, as you stated, is supported by the idea that healthy systems will always be the most complex ones, because the multilevel interactions (crosstalks) and regulating mechanisms are at their best performance in such organisms/systems. Any breakdown or loss of function in one or more of these regulatory mechanisms causes a decrease in complexity. Therefore, disease and aging are associated with complexity loss.
Some studies revealed that entropy can either increase or decrease in pathological situations. Considering the idea of physiologic complexity, entropy cannot be assumed to be a complexity index. Moreover, surrogate data tend to increase entropy (compared to the original signal), but the original signal is considered more complex than its surrogate, as it contains the information that was destroyed in the surrogate data.
So, why do some authors still associate entropy with complexity? Because complexity can be interpreted in different ways. If you consider that the complexity of the system can be characterized solely by the dynamics of the signal you are analyzing, and not by the system itself, you may assume that the more unpredictable the signal, the higher the complexity. In this case, entropy can be considered a measure of complexity. This is often called "dynamical complexity".
If we refer to proper books and papers, we will find that "complexity" does not have a universal definition. Instead, people describe the properties that complex systems usually show, which basically are:
1) Many interdependent elements, interacting each other through nonlinear rules;
2) Structures over several scales;
3) Emergent behavior;
When we expand it to living organisms, this is clear that our systems (humans, animals) fit as complex systems. But, how could we detect this complexity from time series? From the many measurements proposed to extract information from those time series, e.g. asymmetry, fractals, entropies. And how many and which information are necessary to characterize complexity in time series? I think this is a very good question!
The final point is: can only heart rate variability reflect the system complexity? Although powerful, this is a single variable taken to characterize a very complex organism. It is more or less the same of reducing the dimension used to characterize some object in space. Information will be lost. Multivariate analysis may be more powerful, taking into account more than one variable concomitantly to assess the dynamics of the system. For example, one can record respiration, arterial pressure, ECG and EEG, simultaneously, and use some methodology to characterize the complexity. However, due to several limitations, in many situations it is not possible to collect more than the ECG, from which we can calculate heart rate variability.
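As a concrete illustration of the "regularity" statistics discussed above, here is a minimal sketch of sample entropy (SampEn) in Python; the function name and the convention of taking the tolerance r as a fraction of the standard deviation are choices made for this example, and it is a didactic sketch rather than a validated physiologic-analysis tool.

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): negative log of the conditional probability that
    # runs matching for m points also match for m + 1 points.
    # r is interpreted as a fraction of the series' standard deviation.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b)  # undefined if no matches are found

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(1000)))                  # high: irregular noise
print(sample_entropy(np.sin(np.linspace(0, 30 * np.pi, 1000))))   # low: regular sine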
  • asked a question related to Complexity
Question
1 answer
I am calculating the average binding energy for a protein-protein complex using g_mmpbsa.
I have used 0-10 ns and 10-20 ns, with 500 frames each. I got a different average binding energy in each case, and the difference is really significant.
Relevant answer
Answer
Sir, my area is graph theory.
  • asked a question related to Complexity
Question
8 answers
i.e., how can one know which particular algorithm suits a given problem so as to get the exact optimal solution with minimal time complexity?
  • asked a question related to Complexity
Question
8 answers
Suggest T-symmetric Hamiltonians having real spectra.
Relevant answer
Answer
@Nikolay
H = 0 does not make any sense. Please suggest another one.
Prof. Rath
  • asked a question related to Complexity
Question
3 answers
In terms of algorithms, can anybody tell me about the significance of both runtime complexity notations?
Relevant answer
Answer
Dear Syed Taimoor Ahmed,
I think no, because Big-Theta is a combination of Big-O and Big-Omega. Big-O gives an upper bound on the complexity of the algorithm, whereas Big-Theta gives a tight bound: it contains both an upper and a lower bound.
So, f(n) = Big-Theta(n log n) means
c1 * n log n <= f(n) <= c2 * n log n  (for some constants c1, c2 > 0 and all sufficiently large n).
You can't write something like
c1 * n log n <= f(n) <= Big-O(n log n),
because mixing Theta and O inside a single chain of inequalities has no meaning.
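To make the notation precise, the standard definition (restating the inequalities above in LaTeX) is:

\[
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ \exists\, n_0 \text{ such that } c_1\, g(n) \le f(n) \le c_2\, g(n) \ \text{ for all } n \ge n_0 ,
\]

so f(n) = \Theta(n log n) asserts both f(n) = O(n log n) and f(n) = \Omega(n log n) simultaneously.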
  • asked a question related to Complexity
Question
3 answers
Complex 1 is [Cu(ip)2H2O](ClO4)2.
Complex 2 is [Cu(dppz)2H2O](ClO4)2.
Using absorption spectral techniques, I have studied the binding ability of these complexes with DNA. A reviewer has asked why complex 1 shows two absorption peaks while complex 2 shows only one.
Kindly help me find the answer.
Relevant answer
Answer
Dear Dr. Kumar:
The band centered at 252 nm is sharp and intense, being completely resolved in the wavelength domain. An additional band ascribed to structural distortion of the complex should exhibit contrary features.
Following the line of thinking in which the simplest answer is the most probable, let's suppose that you are using an aqueous solution in the classical way.
It is possible that your compound exhibits low solubility in water; verify this.
In that case, investigate further the intrinsic optical features of the aqueous DNA solution, checking whether the DNA solution itself has an absorption band at 250-265 nm, which is probable. The second intense band detected in the spectrum, centered at around 252 nm, would then belong to the DNA solution.
Best regards
Marcos Nobre
  • asked a question related to Complexity
Question
4 answers
Can anyone explain to me what could have happened, according to my points above?
Relevant answer
Answer
Could the question be clarified further, please?
  • asked a question related to Complexity
Question
2 answers
Hello Sir,
  1. How can one find the Maintainability Index, Cyclomatic Complexity, and DIT from open-source data?
  2. I have done this work, but I couldn't find the Maintainability Index. What tool should I use to calculate the Maintainability Index and Cyclomatic Complexity?
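For Python code specifically, one option is the open-source radon library, which reports both cyclomatic complexity and a Maintainability Index. A minimal sketch, assuming radon is installed ("some_module.py" is a placeholder for the file you want to analyze):

# pip install radon
from radon.complexity import cc_visit
from radon.metrics import mi_visit

with open("some_module.py") as f:   # placeholder path
    source = f.read()

# Cyclomatic complexity of each function/method/class in the module.
for block in cc_visit(source):
    print(block.name, block.complexity)

# Maintainability Index of the whole module (radon reports a 0-100 scale).
print("MI:", mi_visit(source, multi=True))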
  • asked a question related to Complexity
Question
6 answers
Suppose I have to solve a problem which consists of two NP-complete problems.
Now I would like to know what the complexity class of the new problem will be.
Can anyone suggest a paper on this topic?
Thank you in advance. 
Relevant answer
Answer
That depends on what "more" means. :-) If "more" is a constant (finite) number, then it means that you solve a finite number of NP-complete problems, which is still NP-complete.
  • asked a question related to Complexity
Question
5 answers
Hi, the literature has shown that, using the CVF, we can measure Managerial Complexity (Lawrence et al., 2009).
Is there any instrument for measuring employees' complexity?
Relevant answer
Answer
I have to agree with Xavier.  Questionnaires are easy, but not necessarily valid.
  • asked a question related to Complexity
Question
4 answers
Dear all,
Where can I find the complex refractive index of Ge and Te in the terahertz (THz) region, e.g. 0.1 to 2.5 THz?
          Thanks.
Relevant answer
Answer
Databases or academic articles. Just search with keywords such as 'terahertz database' or 'terahertz semiconductor', etc.
  • asked a question related to Complexity
Question
5 answers
  1. Must the constant of integration be real?
  2. Suppose the constant of integration is complex; what happens then?
  3. For the next question, please see the attached file.
Relevant answer
Answer
Dear Colleague,
Writing y^(1/2) = x + c and looking for a solution where x and c are real, y^(1/2) should be real, and y should be non-negative, being the square of a real-valued function. If you have y(0) = -1 < 0, then this cannot happen.
Sincerely, Octav
  • asked a question related to Complexity
Question
2 answers
Dear researchers,
It is a usual trend that bimetallic complexes show good cytotoxicity compared to their monometallic analogs. If the trend is reversed, what may be the possible reasons?
Relevant answer
Answer
Normally, bimetallic complexes possess greater cytotoxicity than mononuclear complexes.
The lipophilicity of bimetallic complexes increases the permeability of the cell membrane, which favours their good anticancer properties.
  • asked a question related to Complexity
Question
3 answers
Hello,
    I want to know the computational complexity of the RBF SVM: is it O(n^2) or O(n^3), and what is n here? Is it the number of training samples?
Thanks
Relevant answer
Answer
You can verify this via the condition number of the moment matrix.
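As a rough empirical cross-check, one can time scikit-learn's RBF SVC on growing training sets, where n is the number of training samples, and fit the slope on a log-log scale; the exponent typically lands somewhere between 2 and 3, depending on the data and parameters. A minimal sketch:

import time
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
sizes = [500, 1000, 2000, 4000]
times = []
for n in sizes:
    X = rng.standard_normal((n, 10))        # synthetic data, 10 features
    y = (X[:, 0] > 0).astype(int)           # simple binary labels
    t0 = time.perf_counter()
    SVC(kernel="rbf").fit(X, y)
    times.append(time.perf_counter() - t0)

# Slope of log(time) vs. log(n) approximates the complexity exponent.
print("empirical exponent ~", round(np.polyfit(np.log(sizes), np.log(times), 1)[0], 2))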
  • asked a question related to Complexity
Question
3 answers
In what sense is "irreducibly complex" synonymous with "NP-complete"?
In what sense is "complicated" just "computationally-intensive but solvable"?
Relevant answer
Answer
It appears to me that, in the paper you cited, Chaitin uses the phrase "irreducibly complex" as a synonym for "algorithmically random." This is quite different from computational complexity such as NP-completeness. Algorithmic randomness is defined in terms of Kolmogorov complexity. The prefix-free Kolmogorov complexity K(s) of a binary string s is defined to be the length of the shortest program that outputs s. Prefix-free means that no program is a prefix of another program. We assume a fixed optimal computer; changing optimal computers changes the Kolmogorov complexities of strings, but the change is bounded by a single constant for all strings. Results in Kolmogorov complexity are therefore usually stated "up to an additive constant," just as results in the analysis of algorithms are usually stated "up to a multiplicative constant" (big-O notation). An infinite binary sequence x is algorithmically random if there is a constant b such that for all n, K(x[n]) >= n - b, where x[n] is the initial segment of x consisting of the first n bits of x. This means that the initial segments of x cannot be described using short programs; the programs must be almost as long as the strings themselves (shorter only by the additive constant b). Algorithmically random sequences cannot be computable. A computable sequence x can be described by a program of finite length, so an initial segment x[n] can be described by the finite program plus a description of n, which can be described using fewer than 2log(n) + O(1) bits. This implies x is not algorithmically random. However, although algorithmically random implies not computable, the converse implication doesn't hold. For instance, if x = x0, x1, x2, ... is algorithmically random, then x0, 0, x1, 0, x2, 0, ... is not computable, but it is not algorithmically random either, since an initial segment of length n only contains n/2 bits of information. Algorithmically random sequences are unpredictable in the sense that no program can predict the bits of the sequence with better than 50% accuracy.
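Restated compactly (with x[n] denoting the first n bits of x, as above):

\[
x \ \text{is algorithmically random} \iff \exists\, b\ \forall n:\ K(x[n]) \ge n - b ,
\]
\[
x \ \text{computable} \implies K(x[n]) \le 2\log n + O(1) ,
\]

and since 2 log n + O(1) < n - b for all sufficiently large n, a computable sequence cannot be algorithmically random.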
  • asked a question related to Complexity
Question
3 answers
  1. If we compute size metrics and complexity for different UML class diagrams, what type of conclusion can we reach?
Relevant answer
Answer
You can use "use-case points" or "class points" as a size metric in object-oriented projects. You can estimate the size of the project with the use of these metrics. 
  • asked a question related to Complexity
Question
1 answer
To determine kf, we can calculate it from the half-wave potential, but in this case we have the peak potential (Epa), because the reaction is irreversible. Can we treat Epa as the half-wave potential?
Relevant answer
Answer
It's a very conductive environment, so typical views of antenna design and propagation aren't as important. Omega had a wavelength of 23 km, so many situations would be only a few wavelengths away. It's almost a near-field analysis, but we are more interested in skin depth than in most other things (far more than in antenna tuning). DB
  • asked a question related to Complexity
Question
7 answers
I want to cleave a tertiary amine-borane complex.
Relevant answer
Answer
Thank you, Ercolani and Talaat. I'm trying the same, and once I get the result I'll let you know the strength of the borane-amine bond.
  • asked a question related to Complexity
Question
5 answers
benzoyl thiourea ligands + Cu(II) perchlorate → Cu(I) complex
Relevant answer
Answer
Moreover, I can now add that reduction of copper(II) acetate with Ph3P can give you a copper(I) complex of composition [Cu(ac)(Ph3P)3] or a mixed-valence copper(I,II) complex of composition [Cu4(ac)6(Ph3P)4] (Valigura et al., J. Chem. Soc., Dalton Trans., 1986(11), 2339-2344). The Cu(II) structure mentioned in the previous part was published two years later by Koman et al., Acta Crystallogr., C44(4), 601-603 (1988).
  • asked a question related to Complexity
Question
3 answers
How was Bactrocera invadens discriminated from its sister taxa in the B. dorsalis complex? Was it properly described?
Relevant answer
Answer
In a recent paper by Schutze et al. (Systematic Entomology, 2014, DOI: 10.1111/syen.12114), Bactrocera invadens was synonymized under Bactrocera dorsalis.
  • asked a question related to Complexity
Question
3 answers
I'm trying to synthesize organometallic complexes of [1,2-bis(diphenylphosphino)ethylene] with Mo, Cr, and W. Please, can anyone help me learn how these complexes can be prepared using nanomethods, and recommend some papers in this field?
Relevant answer
Answer
I expect that, starting with the hexacarbonyl (W(CO)6, Mo(CO)6), you could react it with the olefin. PK Baker wrote several papers describing the formation of W(CO)3(NCMe)3 (72 h reflux in acetonitrile), which is then reacted at ice temperature with iodine to obtain the 7-coordinate WI2(CO)3(NCMe)2 complex. MM Meeham and Baker then reported reacting this complex with various olefins, including the one you described, to obtain a complex similar to your target. Baker was my original PhD supervisor and wrote many papers in this area.
  • asked a question related to Complexity
Question
1 answer
I am going to do research on Aphelinus, but I can't find keys for America. I have keys for the complex and for some countries like India, Egypt and Israel.
  • asked a question related to Complexity
Question
5 answers
Would it be a combination of more than one machine or just one?
Which would be most efficient?
Relevant answer
Answer
Your question reduces to an old issue in computability theory by the simple realization that all possible arrangements of an object can be reduced to 6 numbers, 3 for coordinate position, and 3 for angular orientation, thus the arrangement of objects reduces to the problem of generating random numbers, which is well studied.  See https://en.wikipedia.org/wiki/Pseudorandomness for an overview.
There is no distinction in computability theory between one machine with two (or more) parts, and two machines.
Pseudorandom is more useful for testing and comparing various theories or systems, since it is statistically random, but actually repeatable. A pseudorandom generator with an external uncontrolled seed, such as the timing of keystrokes, may be what you are thinking of with "two machines." Making quantum measurements is thought to be truly random, and nowadays is not too hard to do.
But the real issue is, what are you going to use it for?  In economics, usually a sequence of numbers is not independent.  Many theorems assume independence of samples, but even such a simple thing as investment returns violates this assumption.  Next year's returns, assuming re-investment, depend to an extent on this year's returns. 
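To make the "6 numbers" reduction concrete, here is a small Python sketch generating a pseudorandom pose (3 position coordinates plus an orientation); the orientation is drawn as a uniform random unit quaternion via the standard three-uniform-sample formula (uniformly sampled Euler angles are not uniform over rotations), and the fixed seed illustrates the "statistically random, but actually repeatable" point.

import math
import random

def random_pose(box=(1.0, 1.0, 1.0)):
    # Position: uniform inside an axis-aligned box.
    pos = tuple(random.uniform(0, b) for b in box)
    # Orientation: uniform unit quaternion from three uniform samples.
    u1, u2, u3 = (random.random() for _ in range(3))
    quat = (math.sqrt(1 - u1) * math.sin(2 * math.pi * u2),
            math.sqrt(1 - u1) * math.cos(2 * math.pi * u2),
            math.sqrt(u1) * math.sin(2 * math.pi * u3),
            math.sqrt(u1) * math.cos(2 * math.pi * u3))
    return pos, quat

random.seed(42)  # pseudorandom: statistically random, yet repeatable
print(random_pose())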
  • asked a question related to Complexity
Question
6 answers
Is it true that any monic polynomial of degree 6 with complex coefficients can be represented as a sum of at most three summands, each being the cube of a polynomial of degree at most 2?
B.Shapiro/Stockholm/
Relevant answer
Answer
Dear Professor Breuer,
I am a professional and know well what kind of questions one should put to a broad audience to stimulate people's interest. The question I posed is elementary and unsolved and does not have a long history behind it. It might have an elementary answer (which I could not find). If somebody has any reasonable suggestion about this case, I can easily generalize it to a small theory.
As for the style of questions, my former advisor Professor Vladimir Arnold (the famous one) once said that there is a Russian style of formulating mathematical questions and a French style. The Russian style (which I try to follow) is to formulate a mathematical problem in its simplest unknown case. The French style is to pose a problem in its maximal generality.
Warmest regards, B.Shapiro
  • asked a question related to Complexity
Question
7 answers
Say we have a complex network made of n sub-networks and m nodes. Some of the sub-networks share some of the m nodes. Say that such a complex network (a.k.a. an interdependent network) is under attack, and say that this attack is neither targeted (i.e., it does not look for high-degree nodes only) nor random, but spatial (both low-degree and high-degree nodes are removed). Now, say that the failure cause is external, in addition to being spatial, and that it can feature many levels of spatial extent. Hence, the higher the level, the higher the number of nodes involved, and the higher the disruption (theoretically). My problem relates to the failure threshold qc (the minimum size of disrupting event capable of leaving 0 active nodes after the attack).
My question: does the failure threshold qc depend on how nodes are connected only (e.g., the qc is an intrinsic feature of the network)? Or is it a function of how vast the spatial attack is? Or does it depend on both?
Thank you very much to all of you.
Francesco
Relevant answer
Answer
Yes it's likely to depend on the structure of the network, as it does for a simple network. The situation you describe is quite specific, so I'm not sure the answer is known. The best way is to try to find out!
You might find this review helpful, for some basic ideas http://arxiv.org/abs/0705.0010
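In the spirit of "try to find out", here is a toy single-layer experiment (not a full interdependent-network model): all nodes of a random geometric graph lying within a disc of radius q around a random point are removed, and the surviving giant component is measured. Rerunning it with different connection radii and different q values shows how both the structure and the attack size matter.

import random
import networkx as nx

def spatial_attack(g, q):
    # Remove every node within distance q of a random attack center.
    cx, cy = random.random(), random.random()
    hit = [v for v, p in nx.get_node_attributes(g, "pos").items()
           if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= q ** 2]
    h = g.copy()
    h.remove_nodes_from(hit)
    if h.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(h), key=len)
    return len(giant) / g.number_of_nodes()  # surviving giant-component fraction

random.seed(1)
g = nx.random_geometric_graph(1000, 0.06, seed=1)  # vary 0.06 to probe structure
for q in (0.1, 0.2, 0.4):
    print(q, round(spatial_attack(g, q), 3))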
  • asked a question related to Complexity
Question
29 answers
One of the interesting classes is the class of quotients of complex polynomials. Is this class dense in C(S^2, S^2) in the compact-open topology?
Relevant answer
Answer
Rogier Brussee is right:
the meromorphic functions are not dense in the continuous functions C^0(S^2, S^2). This is because the degree of a meromorphic function (i.e. a holomorphic map CP^1 to CP^1) is non-negative. But, as I said before, the set of rational functions R(S^2, S^2) on $S^2$ is closed with respect to the spherical metric. Of course there is a continuous function in C^0(S^2, S^2) which is not rational, and therefore the meromorphic functions are not dense in the continuous functions. Perhaps an interesting question is:
whether every homeomorphism can be approximated by quasiconformal mappings on $S^2$.
  • asked a question related to Complexity
Question
2 answers
There are plenty of other reference materials posted in my questions and publications on ResearchGate. I think it is not enough for someone to claim that the sequences I've found are pseudo-random as a satisfying answer to the question posed here.
If indeed the complexity of a sequence is reflected in the algorithm that generates it, then we should not find that high-entropy sequences are easily described by algorithmic means.
I have a very well-defined counterexample.
Indeed, so strong is the example that the sound challenge is for another researcher to show how one of my maximally disordered sequences can be altered such that the corresponding change to the measured entropy is a positive, non-zero value.
Recall that with arbitrarily large hardware registers and an arbitrarily large memory, our algorithm will generate arbitrarily large, maximally disordered digit sequences; no algorithm changes are required. It is a truly simple algorithm that generates sequences that always yield a measure of maximal entropy.
In what way are these results at odds, or in keeping, with Kolmogorov?
Relevant answer
Answer
Anthony, there are other posts that I've made where it is determined that the sequence is a de Bruijn sequence. What is most important about my work is the speed with which such a sequence can be generated; for a 100-million-digit de Bruijn sequence, my software produces it in less than 30 minutes. Also, while randomness is not computable, it is also true that random sequences are biased away from maximal disorder. Thanks for the reply.
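For readers who want a reference point, the classical FKM (Fredricksen-Kessler-Maiorana) construction below generates a de Bruijn sequence B(k, n) in time proportional to its length; it is emphatically not the author's algorithm, which is not shown in this thread.

def de_bruijn(k, n):
    # Classical FKM construction: concatenate Lyndon words whose
    # lengths divide n, in lexicographic order.
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

print("".join(map(str, de_bruijn(2, 3))))  # 00010111, a binary B(2, 3)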
  • asked a question related to Complexity
Question
1 answer
Has anyone tried geometry optimization in Unigraphics NX? If yes, could you share the level of complexity you have tried, and has it fetched you any good results?
Thanks for your time.
Relevant answer
Answer
I am interested in your question (it is similar to my case) and will explain in detail soon. I am in a bit of a hurry at this time; please consult me.
  • asked a question related to Complexity
Question
4 answers
I am using the A* algorithm in my research work. Please suggest some research paper or article which proves the A* algorithm's complexity.
Relevant answer
Answer
With many heuristics, because of their nature, it is sometimes better to calculate "the complexity" based on the number of iterations (or the number of objective-function calculations) that the algorithm requires. Complexity analysis for heuristics is kind of tricky and should be associated with other metrics.
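One simple way to apply this in practice is to wrap the objective function in a call counter and report evaluations instead of asymptotic bounds; below is a sketch with a toy objective and a toy random-search "heuristic" chosen purely for illustration.

import random

def counted(f):
    # Wrap an objective so that every evaluation is counted.
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return f(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

objective = counted(lambda x: (x - 3) ** 2)  # toy objective, minimum at x = 3

random.seed(0)
# Toy "heuristic": plain random search over [-10, 10].
best = min((random.uniform(-10, 10) for _ in range(100)), key=objective)
print(best, objective.calls, "objective evaluations")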
  • asked a question related to Complexity
Question
4 answers
Hi all,
I know this replacement is because of the complexity of spiking-neuron computations, and because many supervised learning algorithms use gradient-based methods, it is difficult to use such complex neuron models. Here I have two questions:
1) If we use a simple model (like the Izhikevich model), do we still have to use such a substitution?
2) Is this replacement just for supervised learning algorithms, or is it also necessary in unsupervised learning? Considering that in unsupervised learning there is no gradient and no back-propagation (if I am thinking correctly!).
Please help me.
Relevant answer
Answer
thank you
  • asked a question related to Complexity
Question
14 answers
Does anybody know of good open-source remeshing software? The MeshLab recipe is exceedingly complex and doesn't work very well. Magics' remeshing functionality is disabled for now. Hypermesh is not open source. Looking for alternatives...
Relevant answer
Answer
You might be interested in gmsh or netgen.
  • asked a question related to Complexity
Question
2 answers
Hello,
Does anyone know what W[1]-hard means in the context of parameterized complexity?
Relevant answer
Answer
Thank you for your kind reply.
  • asked a question related to Complexity
Question
7 answers
I am trying to inhibit the APC in 3rd-instar larval neuroblasts, but conventional APC inhibition with MG132 does not seem to work... Any suggestions would be highly appreciated.
Relevant answer
Answer
The best is to use mutants or RNAi lines (e.g., VDRC lines) against APC components (Cdc20 or APC subunits). It worked well in our hands (strong metaphase arrest).
  • asked a question related to Complexity
Question
6 answers
How can I remove unreacted 1,5-diphenylcarbazide? Would it also dissolve in most solvents?
Relevant answer
Answer
Thanks for your opinion.
  • asked a question related to Complexity
Question
11 answers
If the right-hand limit and the left-hand limit exist, that means the limit "exists".
But this is wrong... Why?
Even in the complex case, lim as z -> 0 (with z = x + iy) of (x^2 y)/(x^4 + y^2) does not exist, because the value of the limit changes depending on the path.
Why does that mean the limit does not exist?
Relevant answer
Answer
"For every real ε > 0, there exists a real δ > 0 such that for all real x, 0 < | x − p | < δ implies | f(x) − L | < ε. "
To expand on this in the way I think more texts should immediately after introducing the limit definition above, let's examine the idea of having different right-hand and left-hand limits.
Suppose that as x approaches a, our function approaches R, the right-hand limit, AND ALSO approaches L, the left-hand limit, as x approaches a from the other direction. Using the limit definition, for the right-hand limit R there must exist a δ1 such that for all x, if 0 < |x-a| < δ1, then |f(x)-R| < ε. In other words, for R to be a limit, it has to satisfy the definition.
Likewise, there must be a δ2 such that for all x, if 0 < |x-a| < δ2, then |f(x)-L| < ε.
However, the limit definition demands that we have a single δ such that:
(1) for all x, if 0 < |x-a| < δ, then |f(x)-R| < ε AND |f(x)-L| < ε,
i.e., both the right-hand limit R and the left-hand limit L satisfy the limit definition. So we take δ = min(δ1, δ2). Now we can see whether (1) can be true. We've got our δ, so now we look for a good ε. Since we are asserting that (1) holds for two different limit values R and L, we can set ε = |R-L|/2, the difference between the two limit values divided by 2 (the difference must be positive, because R is not equal to L). So now we have, for all x,
(2) if 0 < |x-a| < δ, then |f(x)-R| < |R-L|/2
AND
|f(x)-L| < |R-L|/2.
If there could be a right-hand and a left-hand limit with different values while the limit exists, then (2) should be true, as the limit definition requires that for ALL ε > 0 we find a δ > 0 as in (2). But (2) CAN'T be true, for if 0 < |x-a| < δ, then
(3) |R-L| = |R - f(x) + f(x) - L| <= |R - f(x)| + |f(x) - L| < |R-L|/2 + |R-L|/2 = |R-L|.
It's hard to see given the makeshift math symbols and font, but (3) says that |R-L| is less than something it is equal to, a blatant contradiction. Therefore there cannot exist two such limits R and L unless R = L.
I am generally of the opinion that speaking of right-hand or left-hand limits isn't very helpful for understanding limits and can quite easily make things more difficult, as the moment one talks about limits in higher dimensions there isn't really any analogue, just ways in which thinking about a function approaching from only two directions can lead you astray. Generally, calculus classes rely on a lot of pre-calculus mathematics so that the more difficult calculus concepts can be handled by learning a bunch of rules. The limit definition perfectly supplied by Weiguo Xie can be quite conceptually challenging, so often little time is spent using it, and that little time involves doing practice problems that don't really help you understand what limits are (most of the "big name" calculus textbooks have few if any problem sets that include ε-δ proofs, one of the few kinds of problems that can really help one understand the nuances of limits). However, all of calculus and analysis are built upon them.
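For readability, the same contradiction in compact notation, with δ = min(δ1, δ2) and ε = |R-L|/2:

\[
0 < |x-a| < \delta \;\Longrightarrow\; |R-L| \le |R-f(x)| + |f(x)-L| < \tfrac{|R-L|}{2} + \tfrac{|R-L|}{2} = |R-L| ,
\]

which is impossible, hence R = L.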
  • asked a question related to Complexity
Question
2 answers
How can I get a stable Sr-Xylenol Orange complex?
Regards,
Rohit Dave
Relevant answer
Answer
Thanks Dr. Ravi