Complexity - Science topic
Questions related to Complexity
I'm thinking about evolution and whether there is some end point.
COMPLEXITY IN SCIENCE, PHILOSOPHY, AND CONSCIOUSNESS:
DIFFERENCES AND IMPORTANCE
Raphael Neelamkavil, Ph.D., Dr. phil.
1. Introduction
I begin with an apology for repeating a few definitions in the various arguments below; I justify the repetition as necessary for clarity. I start by differentiating between the foundations of the concept of complexity in the physical sciences and in philosophy. I then reach a conclusion as to what is problematic in the concept of complexity, namely that complexity in physical and biological processes may not be differentiable in terms of complexity alone.
Thereafter I build a concept quite different from complexity, for application to the development of brains, minds, consciousness, etc. I find it a fine way of saving causation, freedom, the development of the mental, and perhaps even the essential aspects of the human and religious dimensions of minds.
Concepts of complexity in the sciences are usually taken, in general, as a matter of our inability to achieve measuremental differentiation between certain layers of measurementally integrated events within one process or set of processes and the same sort of measurementally integrated activities within another process or set of processes.
But here there is an epistemological defect: we do not get every physical event, or every aspect of one physical event, to measure. We have only a layer of the object's total events available for us to attempt to measure. This is almost always forgotten by scientists doing complexity science, who tend to generalize the results to the whole object. Complexity in the sciences is not at all a concept of the exact measurement of complexity in one whole physically existent process within itself, or in a set of processes within themselves.
First, what is termed complexity in an entity is only a measure of our inability to achieve measurements of that part of a layer of the process which has been measured or which we have attempted to measure. Secondly, in the sciences there is always a measuremental comparison involved in fixing the measure of complexity in the aspects that are measured or attempted to be measured. This is evidently the wrong sort of concept.
The essential difference here must be sharpened further. In light of the above, the following seems more appropriate: instead of being a measure of the complexities of one process or a set of processes, complexity in science is a concept of the difference between (1) our achieved abilities and inabilities to measure the actual complexity of certain levels of one physical process or set of processes and (2) the extent of our ability and inability to measure other types of levels within another process or set of processes. This is strange with respect to the claims made about the complexity of whichever physical process a science sets out to measure.
If a scientist had a genuine measurement of complexity, one would not have called it complexity. We have no knowledge of a higher or highest complexity with which to compare a less intense complexity. In all cases of complexity science, what we have are merely comparisons with more or less intense complexities. This makes the concept of complexity very complex to deal with.
2. Is Complexity Really Irreducible?
On a neutral note, each existent physical process should possess great complexity. How much? We do not know exactly; but we do know exactly that it is neither infinite nor zero. This truth is the Wisdom of complexity. Let us call it complexity philosophy. This philosophical concept of complexity within the thing itself (CI) is different from the methodologically measurement-based concept of complexity (CM) in the sciences. In CM, only the measured and measurable parts of complexity are taken into consideration, and the rest of the aspects and parts of the existent physical process under consideration are forgotten.
If this were not true, whoever proposes the contrary would be bound to prove that all the aspects and parts of the physical process, or at least of the little layer of it under measurement, are already covered by one or more or all measurementally empirical procedures with respect to, or in terms of, that layer of the process.
To explain the same point differently: the grade of complexity in the sciences is the name of the difference (i.e., in terms of 'more' or 'less') between the grades of difficulty and ease of measuring a specific layer of causal activity within one process and a comparable or non-comparable layer of causal activity in another.
Both must be measured in terms of the phenomena received from them and the data created of them. Naturally, these have been found too complex to measure well enough, because we do not measure directly, but instead measure in terms of scales based on other, more basic scales, phenomena, and data. The measure-elements termed infinite-finite-zero, however, are slightly more liberated from such directly empirically bound notions. I anticipate some arguing that even these are empirically bound. I fully agree; the standpoint from which I called the former directly empirically bound is simply a different one, that is all.
Both the above (the grades of difficulty and ease of measuring a specific layer of causal activity within one process and a comparable or non-comparable layer of causal activity in another) must be measured in terms of certain modes of physical phenomena and certain scales set for these purposes. But this is not the case about the scale of infinity-finitude-zero, out of which we can eternally choose finitude for the measure of ease and difficulty of measuring a specific layer of causal activity without reference to any other.
The measure-difference between the causal activities is not the complexity, nor is it available to be termed so. Instead, complexity is the difference between (1) the ease and difficulty of measuring the one from within the phenomena issuing from certain layers of the physical process and the data created by us out of the phenomena, and (2) the ease and difficulties of measuring the same in the other.
In any case, this measure-difference of ease and difficulty with respect to the respective layers of the processes can naturally concern only certain layers of activity within the processes, and not all the layers and kinds of activity in them both. Evidently, in the absence of scale-based comparison, their complexity cannot be termed high or low when considered within itself. Each such layer must be compared with at least one other measurementally determined layer of process in another system.
3. Extent of Complexity outside and within Complexity
The question now arises as to whether any process under complexity inquiry has other layers of activity arising from within itself, and from within the very layers from which the phenomena have directly issued and generated the data within the bodily, conscious, and cognitive systems of the subjects and their instruments.
Here the only possible answer is that there is an infinite number of such layers in any finite-content physical processual entity; within any layer of a process we can find infinitely many other sub-layers; and between the layers and sub-layers there are finite causal connections, because every existent has parts that are in Extension and Change.
Such complexity layers, infinite in number, can each be arranged on a scale of decremental content-strength in such a way that no finite-content process adds up to infinite content-strength. This does not mean that there are no actual differences between any two processes in the complexity of their layers of activity, or in the total activity in each of them.
Again, what I attempt to suggest here is that the measured complexity of anything or of any layer of anything is just a scale-based comparison of the extent of our capacity to discover all the complexity within one process or layer of process, as compared to the same in another process or layer of process.
4. Possible Generalizations of Complexity
Any generalization about processes in themselves concerning their complexity proper (i.e., the extent of our capacity to discover all the complexity within one process or one layer of activities of a process) must now be concluded to possess only quantitative qualities that never consist in a specific or fixed scale-based number, because the comparison is on a range-scale of 'more than' and 'less than'.
This generalization is what we may at the most be able to identify regarding the complexity within any specific process without any measuremental comparison with another or many others. Non-measuremental comparison is therefore easier and truer in the general sense; and measuremental comparison is more applicable in cases of technical and technological achievements.
The latter need not be truer than the former, if we accept that what is truer must be more general than specific. Even what is said merely of one processual object must somehow be applicable to anything that is of the same nature as the specific processual object. Otherwise, it cannot be a generalizable truth. For this reason, the former seems to be truer than the latter.
Now there are only three possibilities for the said sort of more general truth about comparative complexity: the infinite, finite, and zero values, which are the only well-decidable values. I have called them the Maximal-Medial-Minimal (MMM) values in my work of 2018, namely, Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology.
Seen from this viewpoint, everything physically existent has great processual-structural complexity, which is neither infinite nor zero but merely finite – and impossible to calculate exactly, or even to any satisfactory exactitude within a pre-set scale, because (1) the layers of a process that we attempt to compute are but a mere portion of the process as such, (2) each part of each layer has an infinite number of near-infinitesimal parts, and (3) we are not in a position to reach much depth and breadth into all of these at any time.
Hence, the two rationally insufficient conclusions are:
(1) The narrowly empirical-phenomenologically measuremental, thus empirically partially objective and simultaneously empirically sufficiently subjective, amount of complexity (i.e., the extent of our capacity and incapacity to discover all the complexity) in any process, obtained by a scale-level comparison of two or more processes.
(2) The complexity of entities without having to speak about their existence in every part in Extension-Change and the consequently evident Universal Causality.
These are the empirically highly insulated, physical-ontologically insufficiently realistic sorts of concept of complexity that the sciences entertain and can entertain. Note that this does not contradict or decry the technological successes achieved by use of scientific truths. But claiming them to be higher truths about complexity than philosophical truths is unjustifiable.
Now the following question is clearly answerable. What is meant by the amount of complexity that any existent physical process can have in itself? The only possible answer would be that of MMM, i.e., that the complexity within any specific thing is not a comparative affair within the world, but only determinable by comparing the complexity in physical processes with that in the infinitely active and infinitely stable Entity (if it exists) and the lack of complexity in the zero-activity and zero-stability sort of pure vacuum. It can also be made based on a pre-set or conventionalized arithmetic scale, but such cannot give the highest possible truth probability, even if it is called “scientific”.
MMM is the most realistic generalization beyond the various limit possibilities of scale-controlled quantities of our incapacity to determine the amount of complexity in any layer of processes, and it does so without incurring exact numbers, qualifications, etc. The moment a clear measuremental comparison is settled for and the quantity is pinned down, it becomes a merely scientific statement without the generality that MMM realism offers.
Nonetheless, measuremental studies have their relevance in respect of their effects in specific technological and technical circumstances. But it must be remembered that the application of such notions is not directly onto the whole reality of the object set/s or to Reality-in-total, but instead, only to certain layers of the object set/s. Truths at that level do not have long life, as is clear from the history of the sciences and the philosophies that have constantly attempted to limit philosophy with the methods of the sciences.
5. Defining Complexity Closely
Consider any existent process in the cosmos. It is in a state of finite activity. Every part of a finite-content process has activity in every one of its near-infinitesimal parts. This state of having activity within is complexity. In general, this is the concept of complexity. It is not merely the extent of our inability to measure the complexity in anything in an empirical manner.
Every process taken in itself has a finite number of smaller, finite, parts. The parts spoken of here are completely processual. Nothing remains in existence if a part of it is without Extension or without Change. An existent part with finite Extension and Change together is a unit process when the cause part and the effect part are considered as the aspects or parts of the part in question.
Every part of a part has parts, making every part capable of being a unit process in possession of inner movements of extended parts, all of which are in process. This is what I term complexity. Everything in the cosmos is complex. We cannot determine the level of complexity beyond the generalized claim that complexity is normally limited to infinite, finite, or zero, and that physical and biological processes in the cosmos fall within the finitude-limit.
Hereby is suggested also the necessity of combining the philosophical truth about complexity with the scientific concept of it, for the augmentation of theoretical and empirical-scientific achievements in the future. If the various natures and qualities of complexity, chaos, threshold states, etc. are determined scientifically in a manner not connected to the philosophical concept of complexity based on the MMM method of commitment access to values of content and their major pertinents, then scientific research will remain at an elementary level – even though the present theoretical, experimental, and technological successes may be unimaginably grand. Empirical advancement must be based on the theoretical.
Constant effort to differentiate anything from anything else as strongly as possible, by making differentiations between two or more processes and the procedures around them, is very much part of scientific research. In the procedural thrust and stress related to these, the science of complexity (and all other sciences, sub-sciences, etc.) suffers from a lack of ontological commitment to the existence of the processes in Extension-Change and to Universal Causality.
The merely scientific attitude is due to a stark deficit of the most general and deepest possible Categories that can pertain to such processes, especially Extension-Change and Universal Causality. Without these, the scientist will tend to work with isolated and specifically determined causal processes and to identify the rest as non-causal, statistically causal, or a-causal!
6. Complexity in Consciousness
The above discussion shows that the common concept of complexity is not the foundation on which biological evolution, growth of consciousness, etc. can directly be based. I have plans to suggest a new concept.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
This question is dedicated only to sharing important research of OTHER RESEARCHERS (not our own) about complex systems, self-organization, emergence, self-repair, self-assembly, and other exciting phenomena observed in complex systems.
Please keep in mind that each contribution should promote complex systems and help others to understand them in the context of any scientific field. We can educate each other in this way.
Experiments, simulations, and theoretical results are equally important.
Links to videos and animations will help everyone to understand the given phenomenon under study quickly and efficiently.
It would be nice to get reading suggestions addressing the question of why modelling societies and their dynamic developments may fail. What are the challenges and pitfalls when one attempts to create models that aim to forecast future developments? Economic literature, system dynamics approaches, and predictive social science may address these modelling issues. I'm looking for good entry points to these discussions.
I reviewed many articles about neurodegeneration of the human brain. Many researchers point out that complexity is a key feature in detecting alterations in the brain. However, the characteristic changes that occur in the electrophysiological signals may depend on different factors that are shared by the subjects in the same group (patient or control). Researchers apply statistical tests to show that the obtained results are significant, but I'm not sure whether the results are really related to the proposed research hypothesis (complexity changes with disease). We do not know whether the subjects in one group drink coffee regularly while the members of the other group do not. There are many possibilities like this for the people who participated in these experiments.
Now, this is my question:
What methodology can be utilized to ensure that our hypothesis is REALLY true or not true at the end of the study? Do you have any suggestions for overcoming this specific problem?
Thanks in advance.
Recently I have seen that, in many papers, reviewers ask authors to provide the computational complexity of the proposed algorithms. I was wondering what the formal way to do that would be, especially for short papers where pages are limited. Please share your expertise regarding stating the computational complexity of algorithms in short papers.
Thanks in advance.
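As an illustration only (a hypothetical algorithm, not taken from the post above): one space-saving convention is a single sentence that names the dominant operations and then states the bound, for instance for an algorithm that makes one pass over n records and performs a lookup in a sorted index of size m for each record:

T(n, m) = n \cdot O(\log m) = O(n \log m), \qquad S(n, m) = O(m).

One or two such lines, each with a brief justification of the dominant term, usually fit even in a short paper.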
I have two types of resources, A and B. The resources are to be distributed (discretely) over k nodes.
The number of resources of type A is a.
The number of resources of type B is b.
Resources of type B should be completely distributed (the sum of the resources taken by the nodes should be b).
Resources of type A need not be completely distributed over the nodes; in fact, we want to reduce the usage of resources of type A.
Giving resources (A or B) to a node enhances the quality of the node, where the relation is non-linear.
All nodes should achieve a minimum quality.
What type of problem is this, and how can I find the optimal value?
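A hedged formulation sketch of the description above (my own reading; f_i is a placeholder for the unspecified non-linear quality function of node i, and x_i, y_i denote the amounts of A and B given to node i): this looks like a non-linear integer resource-allocation problem, e.g.

minimize \sum_{i=1}^{k} x_i
subject to \sum_{i=1}^{k} y_i = b, \quad \sum_{i=1}^{k} x_i \le a, \quad f_i(x_i, y_i) \ge q_{\min} \text{ for all } i, \quad x_i, y_i \in \{0, 1, 2, \dots\}.

With non-linear f_i this is a non-linear (integer) knapsack/allocation problem; depending on the shape of f_i, dynamic programming over the nodes or a mixed-integer non-linear programming solver are natural starting points.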
Problem description:
In socio-technical systems, an idea for a technological initiative can emerge, and different groups can organize around it. Each group organizes itself spontaneously, little by little, based on common interests and shared values (including ethics), around an idea of progress and potential benefits that is sometimes vague.
Sometimes those groups start to interact with each other, and at a certain point of development a macro context starts to be needed in order to meet the necessities of society.
Lately, despite the potential social benefits of the new technological initiative, the political body does not create the institutional conditions for the development of new regulation and public policy (this is what I call the macrosystem). So the socio-technological initiative does not thrive.
Some of the hypotheses about why this happens are:
1) Politicians take no care of, or interest in, the possibilities of the new technology and initiative.
2) Politicians see the new technology as a threat to their own power.
3) Politicians want to take control of the different technical groups' resources and assets, but not of their values and real purpose, because they want more power for themselves.
4)...
In consequence, the work done by the different technical groups will never be as organized and coordinated as is required by a common purpose that meets societal necessities.
What I want to do is describe the problem in terms of the interaction between the technological working groups (the system) and the political and policy level (the macrosystem).
Do you know of a systemic theoretical framework that can help me analyse and describe this problem and its dynamics?
If multiple deep learning (DL) algorithms are merged to create a model, then the resulting system will be complex.
To analyze this, how can one calculate its complexity?
Is there any formal way, or mathematical proof, to analyze this kind of complexity of DL models?
Thanks in advance.
We are developing a test for ad-hoc implicatures and scalar implicatures (SI), and are showing three images (of a similar nature) to the participants: an image, the image with one item, and the image with two items.
E.g., a plate with pasta, a plate with pasta and sauce, and a plate with pasta, sauce, and meatballs.
A question for an ad-hoc implicature is: My pasta has meatballs; which is my pasta?
A question for an SI is: My pasta has sauce or meatballs; which is my pasta? (The pasta with sauce is the target item, since we are testing pragmatic implicatures, where 'or' means 'not both'.)
The item that causes many difficulties in making up questions is the image without any items, i.e. the plate with plain pasta. How do we phrase the question so that it elicits this image as a target response, without using overly complex syntax?
Negation ("My plate has no sauce or meatballs", "My plate has only pasta, no sauce and no meatballs") seems like a complex structure to introduce as a counterbalance to the other types of items.
Has anyone tested something similar, without negation? We would be grateful for any kind of tips and hints.
Antigens are processed by specialized macrophages to produce complex protein-RNA complexes that eventually produce iRNA. When this iRNA is introduced to primary B cells that have never seen the antigen, they produce specific antibodies to that antigen. The macrophage-produced RNA is incorporated into the genome of these B cells by reverse transcriptase; they then become "memory cells" capable of a secondary response when confronted with an antigen they had never come into contact with before. Reverse translation in the macrophage is the best explanation for the production of such specific iRNA.
We all know that poor hygiene within dense populations is a breeding ground for disease. On the other hand, we know that a certain medium level of daily exposure to infectious agents strengthens immunity and keeps it alert.
The big question is what happens when we start to use sanitizers en masse within whole populations?
Hospitals have special procedures aiming at the rotation of chemically-based sanitizers to avoid the rise and promotion of drug-resistant bacteria and viruses.
How is this problem addressed within whole populations? This seems important to know right now, during the COVID-19 outbreak.
How can one estimate the big-O complexity of feature selection for filter methods based on mutual information (such as mRMR or JMIM)?
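A rough back-of-the-envelope sketch (my own estimate, not taken from the mRMR or JMIM papers; the exact cost depends on how mutual information is estimated): for greedy forward selection of k features out of d candidates over n samples, with each pairwise MI estimate on discretized variables costing roughly O(n), the relevance terms I(X_i; Y) cost O(d n) once, and each of the k selection steps evaluates redundancy terms against the newly selected feature for all remaining candidates, costing O(d n) per step, so the total is roughly

O(d n) + k \cdot O(d n) = O(k d n).

JMIM-style criteria replace the pairwise terms with joint MI terms, which raises the per-estimate cost but leaves the O(k d) count of estimates unchanged.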
I know how to dock a chemical compound with a protein, but I need a server for docking metal complexes.
When I compute the time complexity of ciphertext-policy attribute-based encryption (CP-ABE), I find it to be O(1) by tracing each step in the code, most of which are assignment operations. Is it possible that the time complexity of CP-ABE is O(1), or do I have a problem? The code that I used is the following, where ITERS = 1.
public static List encrypt(String policy, int secLevel, String type,
byte[] data, int ITERS){
double results[] = new double[ITERS];
DETABECipher cipher = new DETABECipher();
long startTime, endTime;
List list = null;
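// Each iteration times one call to cipher.encrypt() with System.nanoTime(); ITERS controls the number of repetitions.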
for (int i = 0; i < ITERS; i++){
startTime = System.nanoTime();
list = cipher.encrypt(data, secLevel,type, policy);
endTime = System.nanoTime();
results[i] = (double)(endTime - startTime)/1000000000.0;
}
return list;
}
public List encrypt(byte abyte0[], int i, String s, String s1)
{
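// Parse the access-policy string into an AccessTree, load the public key for the chosen security level, generate an AES session key, AES-encrypt the data, then ABE-encrypt the AES key under the access tree.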
AccessTree accesstree = new AccessTree(s1);
if(!accesstree.isValid())
{
System.exit(0);
}
PublicKey publickey = new PublicKey(i, s);
if(publickey == null)
{
System.exit(0);
}
AESCipher.genSymmetricKey(i);
timing[0] = AESCipher.timing[0];
if(AESCipher.key == null)
{
System.exit(0);
}
byte abyte1[] = AESCipher.encrypt(abyte0);
ABECiphertext abeciphertext = ABECipher.encrypt(publickey, AESCipher.key, accesstree);
timing[1] = AESCipher.timing[1];
timing[2] = ABECipher.timing[3] + ABECipher.timing[4] + ABECipher.timing[5];
long l = System.nanoTime();
LinkedList linkedlist = new LinkedList();
linkedlist.add(abyte1);
linkedlist.add(AESCipher.iv);
linkedlist.add(abeciphertext.toBytes());
linkedlist.add(new Integer(i));
linkedlist.add(s);
long l1 = System.nanoTime();
timing[3] = (double)(l1 - l) / 1000000000D;
return linkedlist;
}
public static byte[] encrypt(byte[] paramArrayOfByte)
{
if (key == null) {
return null;
}
byte[] arrayOfByte = null;
try
{
long l1 = System.nanoTime();
cipher.init(1, skey);
arrayOfByte = cipher.doFinal(paramArrayOfByte);
long l2 = System.nanoTime();
timing[1] = ((l2 - l1) / 1.0E9D);
iv = cipher.getIV();
}
catch (Exception localException)
{
System.out.println("AES MODULE: EXCEPTION");
localException.printStackTrace();
System.out.println("---------------------------");
}
return arrayOfByte;
}
public static ABECiphertext encrypt(PublicKey paramPublicKey, byte[] paramArrayOfByte, AccessTree paramAccessTree)
{
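// Pairing-based step: pick a random exponent s in Zr, blind the AES key (as a GT element) with g_hat_alpha^s, compute h^s, and secret-share s over the access tree via the threaded Shamir distribution.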
Pairing localPairing = paramPublicKey.e;
Element localElement1 = localPairing.getGT().newElement();
long l1 = System.nanoTime();
localElement1.setFromBytes(paramArrayOfByte);
long l2 = System.nanoTime();
timing[3] = ((l2 - l1) / 1.0E9D);
l1 = System.nanoTime();
Element localElement2 = localPairing.getZr().newElement().setToRandom();
Element localElement3 = localPairing.getGT().newElement();
localElement3 = paramPublicKey.g_hat_alpha.duplicate();
localElement3.powZn(localElement2);
localElement3.mul(localElement1);
Element localElement4 = localPairing.getG1().newElement();
localElement4 = paramPublicKey.h.duplicate();
localElement4.powZn(localElement2);
l2 = System.nanoTime();
timing[4] = ((l2 - l1) / 1.0E9D);
ABECiphertext localABECiphertext = new ABECiphertext(localElement4, localElement3, paramAccessTree);
ShamirDistributionThreaded localShamirDistributionThreaded = new ShamirDistributionThreaded();
localShamirDistributionThreaded.execute(paramAccessTree, localElement2, localABECiphertext, paramPublicKey);
timing[5] = ShamirDistributionThreaded.timing;
return localABECiphertext;
}
}
public ABECiphertext(Element element, Element element1, AccessTree accesstree)
{
c = element;
cp = element1;
cipherStructure = new HashMap();
tree = accesstree;
}
public void execute(AccessTree accesstree, Element element, ABECiphertext abeciphertext, PublicKey publickey)
{
pairing = publickey.e;
ct = abeciphertext;
PK = publickey;
countDownLatch = new CountDownLatch(accesstree.numAtributes);
timing = 0.0D;
double d = System.nanoTime();
Thread thread = new Thread(new Distribute(abeciphertext, accesstree.root, element));
thread.start();
try
{
countDownLatch.await();
long l = System.nanoTime();
timing = ((double)l - d) / 1000000000D;
synchronized(mutex)
{
}
}
catch(Exception exception)
{
exception.printStackTrace();
}
}
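A minimal sketch of how running time could be related to input size instead of being measured for one fixed input (my own illustration; encryptOnce and policyWithAttributes are placeholders and the policy syntax is assumed, so the real DETABECipher.encrypt call and policy format from the code above should be substituted):

import java.util.Random;

public class ScalingProbe {

    // Placeholder standing in for: new DETABECipher().encrypt(data, secLevel, type, policy)
    static void encryptOnce(byte[] data, String policy) {
        // ... call the real CP-ABE encryption here ...
    }

    // Build a toy AND-policy with the requested number of attributes (assumed policy syntax).
    static String policyWithAttributes(int count) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < count; i++) {
            if (i > 0) sb.append(" and ");
            sb.append("attr").append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int attrs = 2; attrs <= 64; attrs *= 2) {
            byte[] data = new byte[1 << 16];      // fixed payload size
            rnd.nextBytes(data);
            String policy = policyWithAttributes(attrs);
            long t0 = System.nanoTime();
            encryptOnce(data, policy);
            long t1 = System.nanoTime();
            System.out.printf("attributes = %2d, time = %.3f ms%n", attrs, (t1 - t0) / 1e6);
        }
    }
}

Tracing one run with a fixed policy and a fixed payload always yields a constant amount of work, so it will always look like O(1); the asymptotic cost only shows up when the timings are related to a growing parameter such as the number of attributes in the access policy or the size of the plaintext.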
Creative destruction rages on: the bureaucratic, the inflexible, and the slow are disappearing in record numbers. The average lifespan of a Fortune 500 company is less than 20 years, down from 60 years in the 1950s, and is forecast to shrink to 12 years by 2027. If organizational leadership is—at least in part—about identifying what needs to be done, creating vision, setting direction, and guiding people toward that, one should query whether the leadership styles that developed yesteryear (e.g., authentic, authoritarian, democratic, laissez-faire, paternalistic, servant, transactional, transformational, etc.) still contribute to the long-term success of collective effort in more open systems. Given the volatility, uncertainty, complexity, and ambiguity of the environment, understanding how to lead organizations in the Age of Complexity is urgent and important to societies, economies, and governments.
Volatility, uncertainty, complexity, and ambiguity call for different ways of perceiving the world, different approaches to sense and decision making, and different modes and combinations of leadership.
- Administrative Leadership. Administrative leadership is the managerial approach followed by individuals and groups in formal roles as they plan and coordinate activities in standardized business processes to accomplish organizationally-prescribed outcomes efficiently and effectively.
- Adaptive Leadership. Adaptive leadership is the informal process that emerges as organizations generate and advance ideas to solve problems and create opportunity; unlike administrative leadership, it is not an act of authority and takes place in informal, emergent dynamics among interacting agents.
- Enabling Leadership. Enabling leadership is the sum of actions to facilitate the flow of creativity (e.g., adaptability, innovation, and learning) from adaptive structures into administrative structures; like adaptive leadership, it can take place at all levels of an organization but its nature will vary by hierarchical level and position. (Uhl-Bien, Marion, & McKelvey, 2007)
If we want to compare the performance of two classifiers based on time-measurement criteria, what is the best way?
Is there any existing methodology ?
"Mistake 2: Not building flexible career capital that
will be useful in the future" (80000hours.org Career guide). My question: What are the reasons behind this mistake?
--
Background
I'm trying to understand what the reasons are that lead people to this mistake. Maybe after we understand the reasons, we will realize that they're not mistakes after all. Or, if we still believe they're mistakes, we'll be much more able to solve them.
Here are some general reasons:
1. Trade-offs: advice often neglects to address the trade-offs that come with it. For example, "be flexible" ignores the disadvantages of being flexible and the advantages of being "inflexible" (keeping your eye on the goal, avoiding distractions, persistence, etc.), and vice versa for persistence advice like "never give up".
2. Unclear evidence or debatable positions
Often contrary or seemingly contrary positions both have evidence.
Do we underestimate or over-estimate the differences between us and others? The "False Consensus Effect" suggests that we under-estimate while the "Fundamental Attribution Error" can imply that we over-estimate the role of personal differences.
--
So even though the position behind the advice has evidence, it can also be true that the position contrary to the advice has evidence too.
3. Lack of knowledge or effort.
4. More pressing issues
The question then becomes: does the advice about "Not building flexible career capital that will be useful in the future" suffer from general reasons 1 and 2?
Here are the sub-mistakes of the main mistake:
Some of the reasons that cause people to fall into this mistake (based on, or influenced by, the 80k section, though not exactly as they phrase it in all points):
1* Short-term thinking.
2* Not giving career choices enough thought (e.g. English PhD sounds nice so I’m just going to go with it).
3* Underestimating soft skills: not investing in transferable skills that can be used in any job, like:
A- Deep Work by Cal Newport. The example given in the 80k career guide is writing down your daily priorities; I would prefer something like "how to avoid distraction".
B-Learning how to learn
C- Rationality
4* Lack of awareness about automation threat.
5* Inaccurate predictions about one's future interests/opportunities in the chosen career (e.g., the "End of History Illusion").
So, for example, for 5*, it could be the case that general reason 2 (unclear evidence) is implicated: it could be (and I don't know that it is) that, in contrast to the "End of History Illusion", there is a group of personality theorists who claim that we under-estimate how stable our personality is. Or, for 3*, general reason 1 (trade-offs) is implicated. For example (and again I don't know), it could be the case that the more you focus on developing general skills like "learning how to learn", the less competitive you become in non-transferable technical skills, because you have less time to focus on them.
Hi,
I'm preparing a PhD proposal to study an egocentric network in a primary health care setting in a low- to middle-income country, fed by a name-generator survey. I have two questions:
1. Any suggestions on the minimum sample size needed to maintain validity?
2. What is the estimated time to do an (egocentric) social network analysis of a sample size of X?
Any suggestions, references?
Many thanks!
Virginia
By analysis we mean that we are studying existing algorithms: examining their features and applications, analysing and measuring their performance, studying their complexity, and improving them.
What would be the generation time of E. coli growing at 37 degree C in complex medium?
It takes 40 minutes for a typical E. coli cell to completely replicate its chromosome. Simultaneous to the ongoing replication, 20 minutes of a fresh round of replication is completed before the cell divides. What would be the generation time of E. coli growing at 37 °C in complex medium?
A. 20 min   B. 40 min   C. 60 min   D. 30 min
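A minimal worked reading of the numbers (my own reasoning, not part of the original question, and it assumes the cell divides as soon as its chromosome finishes replicating): at the moment of division the next round of replication is already 20 min under way, so the daughter cell needs only the remaining

40 \text{ min} - 20 \text{ min} = 20 \text{ min}

of replication before it can divide again, which corresponds to option A.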
I have built the neurites using the 'Run' dropdown option.
I have tried to 'set the order' of the neurites (as 0, with propagate) and used the neurite tracer to set the soma (this appears red).
I have also tried 'set 1-selected as origin' from the Edit dropdown, but it still says there are multiple origins in the model.
Dear friends and professors, I have a question. In fact, this is a question asked by a reviewer, and I have to address his/her concern.
I have coded a MIP model in GAMS and applied CPLEX as an exact method to solve instances. I am able to solve a relatively large-scale instance with about 23,000 decision variables and 4,000 constraints in less than a minute. I would like to ask for your clarification: how is it possible to solve such a large-scale instance in less than one minute? Is there any specific reason?
His/her concern: "The model is really complex with thousands of constraints. I have difficulty in imaging that the exact method needs few seconds..."
I want to express my appreciation for your efforts in advance. In addition, I want to thank you for the privilege of your time.
I’m currently involved in a research project that is related to Highly Integrative Questions (HIQ’s).
To define the landscape of those "next level client questions", we initiated a research project:
How to define HIQ’s?
How to approach HIQ's?
What are cases that relate to HIQ’s?
How can we learn from those cases?
What kind of guidance and facilitation are needed in the process?
Some buzzwords: Complexity Theory, Integrative Thinking, Social Innovations
How can the excess triethylamine be removed from the reaction mixture?
If a gold(III) solution is preferred, how would one prepare a 0.05 M solution in 30 ml?
We are trying to measure the concentration of Fe(III)EDTA complex which is converted to Fe(II)EDTA complex after a reaction. So, by this measurement, we will be able to calculate the extent of the reaction.
I want to know how to search for papers about the complexity of algorithms such as genetic programming, the Kalman filter, the particle filter, and so on. I can't find papers about this. Thanks.
Thorin (IUPAC: Disodium 3-hydroxy-4-[(2-arsonophenyl)diazenyl]naphthalene-2,7-disulfonate)
When I want to measure the emission wavelength of a solution of a molecule with a spectrofluorometer, I need to enter a value for the excitation wavelength.
Now, how can I determine and measure the excitation wavelength of a solution of a complex or molecule?
I need it for managing complex events in a transportation project.
And it is becoming more and more complex with time.
Soil as a reference is very complex and needs to be simplified for experimental purposes. What could be a true substitute for the soil medium?
Human thought works in a similar way wherever it is
It seems that the human brain uses hierarchy, among other tools, to reduce complexity when analyzing information. What is the neurological basis of this?
Hi all, I want to get single crystals of my Ru(II) complexes. I really want to know how, for this type of complex.
Hi everyone,
I'm looking for an experimental task which can be altered between two conditions: high vs. low complexity/difficulty.
Paper-and-pencil as well as electronic formats are possible (ideally it is free of charge and easy to implement).
I'm looking forward to your suggestions, and thank you all so much in advance!
Cheers,
mp
Hi,
A treble quantum system is a complex case. I have worked on such a system, but I still need more opinions, and I want to share my acknowledgments with any interested researchers to strengthen our backgrounds in this field.
Dear colleagues,
It has been proposed that no single statistical measure can be used to assess the complexity of physiologic systems. Furthermore, it has been demonstrated that many entropy-based measures are "regularity" statistics, not direct indexes of physiologic "complexity". Finally, it has been stressed that increased/decreased irregularity does not imply increased/decreased physiologic complexity. However, it is common to keep finding interpretations such as "a decreased/increased entropy value of beat-to-beat interval series (RR sequences) reflects a decreased/increased complexity of heart rate variability", and even "this reflects a decreased/increased complexity of the cardiovascular control". So, which entropy-based measures actually quantify time-series complexity? Moreover, is it appropriate to conclude that, because of a decreased/increased complexity in heart rate variability, there is a decreased/increased complexity of cardiovascular control?
Thanks in advance,
Claudia
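A minimal sketch in Java (my own illustration, not any specific published implementation): sample entropy of a coarse-grained series, the building block of multiscale entropy, which is one common way the regularity-versus-complexity distinction raised above is probed.

public class MseSketch {

    // Coarse-grain the series at the given scale by averaging non-overlapping windows.
    static double[] coarseGrain(double[] x, int scale) {
        int n = x.length / scale;
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < scale; j++) sum += x[i * scale + j];
            y[i] = sum / scale;
        }
        return y;
    }

    // Sample entropy: -ln(A/B), where B counts m-point template matches within
    // tolerance r and A counts the matches that still hold when extended to m+1 points.
    static double sampleEntropy(double[] y, int m, double r) {
        int n = y.length;
        long a = 0, b = 0;
        for (int i = 0; i < n - m; i++) {
            for (int j = i + 1; j < n - m; j++) {
                boolean match = true;
                for (int k = 0; k < m; k++) {
                    if (Math.abs(y[i + k] - y[j + k]) > r) { match = false; break; }
                }
                if (match) {
                    b++;
                    if (Math.abs(y[i + m] - y[j + m]) <= r) a++;
                }
            }
        }
        return (a == 0 || b == 0) ? Double.NaN : -Math.log((double) a / b);
    }

    public static void main(String[] args) {
        // Toy stand-in for a recorded beat-to-beat (RR) interval series.
        java.util.Random rnd = new java.util.Random(1);
        double[] rr = new double[2000];
        for (int i = 0; i < rr.length; i++) rr[i] = 0.8 + 0.05 * rnd.nextGaussian();
        double r = 0.15 * 0.05;   // common choice: 0.15 times the standard deviation of the series
        for (int scale = 1; scale <= 5; scale++) {
            System.out.printf("scale %d: SampEn(m=2) = %.3f%n",
                    scale, sampleEntropy(coarseGrain(rr, scale), 2, r));
        }
    }
}

A single-scale entropy value mainly reflects regularity; examining how the value behaves across scales is the usual way such measures are pushed toward something closer to "complexity", which is exactly the distinction the question draws.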
I am calculating the average binding energy for a protein-protein complex using g_mmpbsa.
I have used 0-10 ns and 10-20 ns, with 500 frames each. I get a different average binding energy in each case, and the difference is really significant.
I.e., how can one know which particular algorithm suits the given problem, so as to get the exact optimal solution with minimal time complexity?
Suggest T-symmetric Hamiltonians having real spectra.
In terms of algorithms, can anybody tell me about the significance of both runtime complexities?
Complex 1 is [Cu(ip)2H2O](ClO4)2
Complex 2 is [Cu(dppz)2H2O](ClO4)2
Based on absorption spectral techniques, I have studied the DNA-binding ability of these complexes. A reviewer has asked why complex 1 shows two absorption peaks while complex 2 shows only one peak.
Kindly help me find the answer.
Can anyone explain to me what could have happened, according to my points above?
Hello Sir,
- How can I find the Maintainability Index, Cyclomatic Complexity, and DIT from open-source data?
- I have done this work, but I couldn't find the Maintainability Index. What tool should I use to calculate the Maintainability Index and Cyclomatic Complexity?
Suppose I have to solve a problem which consists of two NP-complete problems.
Now I would like to know what the complexity class of the new problem will be.
Can anyone suggest a paper on this topic?
Thank you in advance.
Hi, the literature has shown this using the CVF. We are about to measure Managerial Complexity (Lawrence et al., 2009).
Is there any instrument for measuring employees' complexity?
Dear all,
Where can I find the complex refractive index of Ge and Te in the terahertz (THz) region, e.g. 0.1 to 2.5 THz?
Thanks.
- Must the constant of integration be real?
- Suppose the constant of integration is complex; what happens then?
- For the next question, please see the attached file.
Dear researchers,
It is a usual trend that bimetallic complexes show good cytotoxicity compared to their monometallic analogues. If the trend is reversed, what may be the possible reasons?
Hello,
I want to know the computational complexity of the RBF SVM: is it O(n^2) or O(n^3), and what is n here? Is it the number of training samples?
Thanks
In what sense is "irreducibly complex" synonymous with "NP-complete"?
In what sense is "complicated" just "computationally-intensive but solvable"?
- If we have different UML class diagrams together with their size metrics and complexity, what type of conclusion can we reach?
To determine kf, we can calculate it from the "half-wave" potential, but in this case we have the peak potential (Epa) because the reaction is irreversible. Can we treat Epa as the half-wave potential?
I want to cleave a tertiary amine-borane complex.
benzoyl thiourea ligands + Cu(II) perchlorate → Cu(I) complex
How was Bactrocera invadens discriminated from its sister taxa in the B. dorsalis complex? Was it properly described, and was this properly done?
- I'm trying to synthesize organometallic complexes of [1,2-bis(diphenylphosphino)ethylene] with Mo, Cr, and W. Please, can anyone help me learn how these complexes can be prepared using nanomethods, and recommend some papers in this field?
I am going to do research on Aphelinus, but I can't find keys for America. I have keys for the complex and for some countries like India, Egypt, and Israel.
Would it be a combination of more than one machine or just one?
Which would be most efficient?
Is it true that any monic polynomial of degree 6 with complex coefficients can be represented as a sum of at most three summands each being the cube of at most quadratic polynomials?
B. Shapiro (Stockholm)
Say we have a complex network made of n sub-networks and m nodes. Some of the sub-networks share some of the m nodes. Say that such a complex network (aka an interdependent network) is under attack, and say that this attack is neither targeted (e.g., it does not look for high-degree nodes only) nor random, but spatial (both low-degree and high-degree nodes are being removed). Now, say that the cause of failure is external, in addition to being spatial, and that it can feature many levels of spatial extent. Hence, the higher the level, the higher the number of nodes involved, and the higher the disruption (theoretically). My problem relates to the failure threshold qc (the minimum size of the disrupting event that is capable of producing 0 active nodes after the attack).
My question: does the failure threshold qc depend only on how the nodes are connected (i.e., is qc an intrinsic feature of the network)? Or is it a function of how vast the spatial attack is? Or does it depend on both?
Thank you very much to all of you.
Francesco
One of the interesting classes is class of quotients of complex polynomials. Is this class dense in C(S^2, S^2) in compact open topology?
There are plenty of other reference materials posted in my questions and publications on ResearchGate. I think it is not enough for someone to claim that the sequences I've found are pseudo-random, as their suggestion of a satisfying answer to the question here posed.
If indeed complexity of a sequence is reflected within the algorithm that generates the sequence, then we should not find that high-entropy sequences are easily described by algorithmic means.
I have a very well defined counter example.
Indeed, so strong is the example that the sound challenge is for another researcher to show how one of my maximally disordered sequences can be altered such that the corresponding change to measured entropy is a positive, non-zero value.
Recall that with arbitrarily large hardware registers and an arbitrarily large memory, our algorithm will generate arbitrarily large, maximally disordered digit sequences; no algorithm changes are required. It is a truly simple algorithm that generates sequences that always yield a measure of maximal entropy.
In what way are these results at odds, or in keeping, with Kolmogorov?
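A minimal illustration of the general phenomenon described above (my own example, not the poster's algorithm): a very short Java program whose output has near-uniform digit frequencies, so its first-order empirical entropy is close to maximal, even though the generating program, and hence the Kolmogorov complexity of any prefix, is tiny.

public class DigitEntropy {
    public static void main(String[] args) {
        // Concatenate 1, 2, 3, ... (a Champernowne-style sequence) until ~10^6 digits.
        StringBuilder sb = new StringBuilder();
        for (int i = 1; sb.length() < 1_000_000; i++) sb.append(i);

        // First-order empirical entropy of the digit stream, in base-10 units (max = 1).
        long[] counts = new long[10];
        for (int i = 0; i < sb.length(); i++) counts[sb.charAt(i) - '0']++;
        double h = 0.0, n = sb.length();
        for (long c : counts) {
            if (c > 0) {
                double p = c / n;
                h -= p * Math.log(p) / Math.log(10);
            }
        }
        System.out.printf("digits = %d, empirical entropy = %.4f (max 1.0)%n", sb.length(), h);
    }
}

This is in keeping with Kolmogorov's notion rather than at odds with it: frequency-based (Shannon-style) entropy estimates measure statistical disorder of the output, while Kolmogorov complexity measures the length of the shortest program producing it, and the two can diverge widely for algorithmically generated sequences.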
Has anyone tried geometry optimization in Unigraphics NX? If yes, could you share the complexity you have tried, and has it fetched you any good results?
Thanks for your time.
I am using the A* algorithm in my research work. Please suggest research papers or articles that prove the complexity of the A* algorithm.
Hi all
I know this replacement is made because of the complexity of spiking-neuron computations, and because many supervised learning algorithms use gradient-based methods it is difficult to use such complex neuron models. Here I have two questions:
1) If we use a simple model (like the Izhikevich model), do we still have to use such a substitution?
2) Is this replacement just for supervised learning algorithms, or is it also necessary in unsupervised learning? Considering that in unsupervised learning there is no gradient and no back-propagation (if I am thinking correctly).
please help me.
Anybody with a good open-source remeshing software? The Meshlab recipe is eminently complex and doesn't work very well. Magics remeshing functionality is disabled for now. Hypermesh is not open source. Looking for alternatives...
Hello,
Does anyone know what W[1]-hard means in the context of parameterized complexity?
I am trying to inhibit the APC in 3rd instar larval Neuroblasts, but conventional APC inhibition with MG132 does not seem to work... Any suggestions would be highly appreciated.
How can I remove unreacted 1,5-diphenylcarbazide? Would it also dissolve in most solvents?
If the right-hand limit and the left-hand limit exist, that means the limit 'exists'.
But this is wrong... why?
Even in the two-variable (complex) case, lim as (x, y) -> (0, 0) of (x^2 y)/(x^4 + y^2) does not exist, because the value of the limit changes depending on the path.
Why does that mean the limit does not exist?
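A short worked check of the path-dependence in the example above (my own computation, using the function from the post):

\frac{x^2 \cdot kx}{x^4 + k^2 x^2} = \frac{kx}{x^2 + k^2} \to 0 \quad \text{along } y = kx \text{ as } x \to 0,
\qquad
\frac{x^2 \cdot x^2}{x^4 + x^4} = \frac{1}{2} \quad \text{along } y = x^2 \text{ for every } x \ne 0.

Two paths into (0, 0) give different limiting values (0 and 1/2); since a two-variable limit must give the same value along every path of approach, the limit does not exist, even though it equals 0 along every straight line. In one real variable there are only two directions of approach, which is why "left-hand limit equals right-hand limit" suffices there.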
How can I get a stable Sr-Xylenol Orange complex?
Regards,
Rohit Dave