Cognitive Development - Science topic
Questions related to Cognitive Development
I am interested in exploring the cognitive benefits or challenges that young children may experience as a result of being exposed to multiple languages in early education settings. This question aims to investigate whether multilingualism in preschools and kindergartens contributes positively to cognitive domains such as problem-solving, attention control, and memory, or if it presents any significant cognitive load that affects learning. Understanding this impact could help in designing more effective educational strategies for multilingual environments.
I'm exploring how cognitive development theories guide language instruction design. How do these theories help educators adapt teaching methods to match learners' cognitive abilities at various stages? Practical examples and theoretical insights are appreciated.
I think by merging AI's personalized learning capabilities with the interactive, community-oriented features of social media, language learning can become a more dynamic, engaging, and effective process, deeply impacting both the cognitive development and social integration of learners.
Are you a full-fledged empiricist, and do you see a totally empirical Psychology?
Maybe you do not see that yet; you may after reading about 1000 pages of my writings:
Ethogram Theory and the Theories of Copernicus et al.: beyond analogy, a real similarity
Back in the 1500s, Copernicus "stepped back" and looked at more, and more carefully. He gave us a reason to think that, indeed, everything does NOT revolve around the Earth.
In the next century, Galileo Galilei and Kepler gave us more reasons to think this way. Kepler described the orbits of the planets as elliptical, and Galileo showed that OTHER non-Earth objects had things going around them (e.g. Jupiter and its moons). Finally, with Newton's work, the orbits of the planets were mathematically described.
Now, I firmly think Ethogram Theory is more than an analogy to the above; it has REAL similarity. Ethogram Theory "steps back" and looks at more (and more carefully as well). Ethogram Theory looks at cognitive development in a way like Piaget's, but Piaget's theory is merely descriptive and puts forward nothing like proximate causes; thus, with regard to Piaget's particular theory, Ethogram Theory is only an analogy to it, with Ethogram Theory empirical and totally investigable; the weakness is not with Ethogram Theory but with Piaget's. Ethogram Theory, like Piaget's, reckons cognitive development as central to most major developments in Psychology. Ethogram Theory also sees ways to see similar stages, not only in Piaget's theory but in the phenomenology described by other major stage theorists. Some of these stage theories, Piaget's in particular, actually have good evidence of universality among peoples (despite being only descriptive); such is seen in all cultures tested. But, by being just descriptive, Piaget's does NOT even point us to proximate causes, NOR to totally empirical things that could be empirically investigated -- exactly verified or amended, totally INVESTIGABLE with modern eye-tracking technology.
This is what Ethogram Theory does. If you are familiar with Ethogram Theory, indeed: material, empirical, actual, directly observable phenomena are cited for the cognitive stage transitions. These are perceptual shifts, often perceptual/attentional shifts (in what the subject looks at, and seeks to see better and more of).
I would argue that something like these shifts is necessary. Nothing except something like Ethogram Theory's stages points clearly to anything fully empirical.
Finally: the productive thinking about Ethogram Theory would be BY FAR mainly inductive processes. And, in fact, inductive processes ARE the main way [at least] ALL other mammals process information and learn. I firmly think that the major types of learning in humans are via such inductive processes, in both child and adult -- for most processing of information, for advanced scientists and babies alike. [There are qualitatively different types of inductive learning, varying with the stages.]
I am going downhill hard and fast (related to my age); I would guess this is my last post.
I would like to ask whether there are any studies focusing on cognitive developmental issues or mental health issues in the primary immunodeficiency patient population.
If so, are there teams working on this right now?
A growing body of contemporary children's literature and cartoon films depicts gender dysphoria themes, featuring characters based on politicised, constructivist redefinitions of gender and family. Promotion of a 'gender fluid' lifestyle has led children down a destructive path of gender confusion, chemical abuse, surgical genital mutilation and suicide. Do the growing gender dysphoria themes in post-modern children's literature have a causal relationship with the mainstreaming of transgenderism?
Kindly point me to any studies which explore this / related issues.
TIA
How are "levels" of thought or processing validly seen as hierarchical? This turns out to be a very basic and important question, BECAUSE most often behavior Researcher(s) decide what is at one "level" and what is involved with another "level" and a [supposed] relationship is seen that is thought to be hierarchical (one level using the previous ones (which is fine and good), <- BUT all these "levels" are also seen subjectively). This is a damned poor way of classifying, if [supposedly] for science purposes: it is quite arbitrary and subjective (and task dependent). WHAT'S THE ANSWER?
For those who understand Piaget, the better answer for what hierarchical "levels" are is: there is a hierarchy developing/unfolding/emerging where qualitative (big) differences in processing occur AND .... This also clearly indicates the Subject 'sees' differently .... The only strictly empirical way to account for all this is that a new "level" involves seeing more or different things, or significantly seeing certain things ANEW (in a different way); all those possibilities, in Ethogram Theory, are explained by perceptual shifts (at the beginnings or inceptions of a new level). AND: this also more than strongly indicates that at each new level MORE types of objects/actions are involved.
THUS, for there to be a true empirical hierarchy, SOMETHING (_OR_ type of thing) NOT PRESENT BEFORE IS ADDED (in an objectively verifiable way).
Those who "define" hierarchies without this requirement have lost touch with empirical grounding and have lost touch with science itself. (In Psychology science (like with other real sciences): The SUBJECT, specifically BEHAVIOR PATTERNS, define ALL !; the Researcher(s) merely using his/their own imaginative thought/"analysis" DEFINES NOTHING. Try to remember that the organism, in all aspects of its behaving (including behavior (behavior patterns themselves, per se)) IS ORGANISMIC; if this does not "show", then you are off track and almost certainly in a way that will NOT SELF-CORRECT (as good science does).)
All the above is very much related to questions of concepts being concrete or "abstract" (INTEGRAL to the issue, in fact); AND, not understanding true ontogeny (cognitive development in childhood) leaves "levels of abstraction" in confusion (a pseudo-mystery, seen generously as simply [supposedly] a mystery).
"Behavioral 'science'" offers close to nothing for Artificial General Intelligence (& I believe eventually any good influences might well be FROM AGI to Psychology). One quite possible example:
My guidance for behavior science, even if not verified OR falsified by Cognitive Psychology "folks" (because they are stuck in non-rationally-justified RUTS), could just be "aped" (that is, guessed at) and improve AGI (progressively more and more, even by trial-and-error). THEN, instead of AGI looking to Psychology, Psychology could learn a LOT from AGI, as it did in the past with ACT* (information-processing science).
My way toward a better Psychology is through self-guiding emergent ways (self-generative processes), which are quite possibly clear things with KEY overt manifestations that unfold with ontogeny -- initially perceptual/attentional phenomena. I would look for such for Psychology as a Cognitive Developmental Psychology person, but I am old and retired.
It seems obvious to me that this is exactly what Artificial General Intelligence NEEDS -- one clear thing: self-generative processes with AGI ontogeny (emergent, unfolding processes at proper points). Intelligent things show creative self-guidance ...
Is there reason to believe that the data available (or possibly available) from eye tracking is far greater than what is utilized? YES!:
Computer scientists tell us that ANY similar or exact patterning of visual perception or attention, with _ANY_ overt manifestations, can be captured. Unquestionably much develops from input through the eyes (the MAJOR example: ontogeny); plus, behavior IS PATTERNED (as would be true for any significant biologically based functioning -- and ALL behavior is that). AND, ALL such patterning could be found/identified using eye tracking and computer-assisted analysis. ANY and ALL. Thus, it would be useful for psychology to capture any/all such. (It would be most constructive to start with analysis including most-all subtle behavior patterns; that avoids at least most unfounded a priori assumptions (actually: presumptions).)
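To make the claim about eye tracking plus computer-assisted analysis concrete, here is a minimal sketch (my illustration, assuming nothing from Ethogram Theory itself) of the classic dispersion-threshold (I-DT) fixation-detection algorithm, which reduces raw gaze samples to overt, countable attention events. The data format and both thresholds are illustrative assumptions.

```python
# Minimal I-DT fixation detection: a standard first step in computer-assisted
# analysis of gaze data. Thresholds and units are illustrative only.

def _dispersion(window):
    xs = [p[1] for p in window]
    ys = [p[2] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """samples: time-sorted (t, x, y) gaze points (t in s, x/y in degrees).
    Returns fixations as (t_start, t_end, centroid_x, centroid_y)."""
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if _dispersion(samples[i:j + 1]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while j + 1 < n and _dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            xs = [p[1] for p in samples[i:j + 1]]
            ys = [p[2] for p in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations

# Example: a tight cluster, then a jump to a second cluster -> two fixations.
pts = [(0.00, 1.0, 1.0), (0.05, 1.1, 0.9), (0.10, 1.0, 1.1),
       (0.15, 5.0, 5.0), (0.20, 5.1, 5.1), (0.25, 5.0, 4.9)]
print(detect_fixations(pts))
```

Fixations and their dwell times are the usual raw material from which any higher-order gaze patterning would then be sought.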
Contrary to modern assumptions, little is likely just random; and YET ALSO, for sure, little is just statistical. (Nature doesn't play dice.)
True, this is self-serving (for me, for my definitely empirical theory) BUT IT IS ALSO TRUE.
I'm researching the correlation between cognitive development, brain development, and the capacity for sexual consent in teenagers and, comparatively, young adults, and how it affects teenage development; also how power dynamics in teen-adult relationships affect sexual consent. If anyone has recommendations of sources and such, that would be very helpful!
I'm trying to find a topic for my graduation thesis and I'm looking for experiences, materials and papers.
What I want to explore is the effects of board gaming on:
problem-solving skills (adults and kids),
emotion management (mostly in kids and teens, I suppose),
cognitive development in the teen years.
Does anybody have direct experience? Experimental data? Published studies or preprints?
Re: It seems a major sort of addition needs to be made to cognitive-developmental ontogeny theory (Ethogram Theory)
I have been out just to describe the developing very early processing and all the later hierarchical developments and processing, yielding the development and progression of the [grand/always-important] "outer container" (cognition). These are the levels/stages of the cognitive abilities that are most of, and central to, what guides behavior: cognition, representation, abstract concepts and thinking, and actions. I NOW believe something more is involved than I have ever yet indicated (something I avoided) -- for years and for decades:
Almost incredibly, perhaps, I spoke nothing of emotions. Now I do; BUT reservedly: I want to "add in" and speak of just the basic, early-on emotions that may be central to ALL cognitive development per se: in particular, those that are likely necessary to transfer a level of representation and thinking abilities from one domain (once established in an early domain) to another domain (this is sometimes known as transfer, sometimes as generalization -- neither of which captures all that goes on with true hierarchical development in ontogeny).
I have long sought to make emotions (relatively simple response PATTERNS) something that can simply be added in ("tacked on") AFTER cognitive ontogenies are under way (which seemed especially good for AI/AGI). But the problem of humans (as well as AI/AGI) first using a level of skills somewhere and THEN going from that domain to other domains for the same sort of transformation THERE, i.e. to an essentially new similar level/stage of which he/she is capable THERE, has remained unclear. This matter is now, in much of mainstream psychology, explained hypothetically (or supposedly) based on obvious/common-sense contingencies of guidance (from others and from language) _OR_ as using analogies or metaphor to find the similar structures (alignments) in the new domain. This does not often seem plausible and is not sufficient for the broad and quite precise applications of a new level of thinking. (It is too crude and contains irrelevancies.)
FINALLY, NOW, I have thought of my likely neglect in not providing sufficient impetus or motivation OR direction (or "self"-reward) for ontogenetic shifts (at inception: BASIC perceptual shifts), then changes. Early on, and then later, given the representational context of past key developments:
Maybe SOME key emotions help direct the organism to take a closer look at things, actions, and events, with the simple general sorts of motivations GIVEN BY SOME truly basic emotions; if there is more "dwell time" and the organism takes a closer look, THEN he/she will find more, and develop a similar system of structure and understanding THERE (as well as in contexts where such a system was applied earlier).
For, after all, a number of notable emotions have been with us sentient beings since mammals and birds arose (evolutionarily speaking). Not using any, even for the development of the grand "outer container", no longer seems possible. They (some emotions) are there, and, if they give direction and impetus, why wouldn't they be used in the key unfoldings of cognitive stages (and in making those more precise and reliable)? These few particularly important emotions are THERE basically from birth. For me, now, NOT making use of a small set of basic emotions to aid cognitive development does not seem adaptationally likely OR even plausible (from the point of view of logic and soundness, as well as evolutionarily). The set of such basic emotions for cognition and cognitive ontogeny (throughout), i.e. for all major cognitive developments, can likely be understood as interest-excitement-anticipation, surprise, and joy. (The combination in the first 'hyphenated term' is at least in part(s) present in all modern theories of the basic emotions, while the last two are IN ALL such systems of understanding.) In short, such emotions ARE THERE to provide major motivations to dwell on aspects of things, circumstances, and situations -- even situations, in later ontogeny, very much spanning instances (situations/circumstances) across time and space -- AND they also facilitate the basic associative learnings -- so things "carry on".
Some present proposals, which put forth metaphors and/or analogies as doing the bridging for "generalization" or "transfer", just do not work for me. That brings in irrelevant, distracting elements and does not give you the needed precision or focus on new things or things seen anew. Analogies and metaphors WITHIN a single stage may be helpful, to the degree workable and appropriate, in more minor learning regards.
If we cannot come to actually see ourselves as a species among other similarly biological-behavioral species, can we really accomplish anything? I say NO -- not anything significant involving ourselves AS A TOPIC OR AGENT. And I am talking about seeing our OVERT behavior patterns and overt observable foundational behavior PATTERNS as BIOLOGICAL FUNCTIONING -- that is to say: behavior PATTERNS [" 'just' behaviors", in common parlance] AS biological patterning ITSELF, PER SE. Though many already realize this must be true -- behavior having to be a true aspect of biological/organismic functioning even unto itself -- YET BECAUSE we DO NOT KNOW HOW TO SEE THIS, we are "sunk". Only recently have I come to realize how important my guide to Psychological (behavioral) science is.
If we cannot reach this better point (indicated), we will not see anything involving our responsibilities in any complete or sustained way AS IT REALLY IS: needless questions and needless superstitions will necessarily and irreparably confuse us. AND: we may not know this because, very largely unbeknownst to us, Psychology as a science has not yet started -- though in ways this is easy to see, if you look for any true and meaningful talk of strictly empirically established behavior PATTERNS (actual, discovered-through-key-observations, REAL OVERT PATTERNS (and patterning of patterns, etc.)). AGAIN, only recently have I come to realize how important my guide to Psychological (behavioral) science is; I used to say "let's worry about climate change foremost", but now I realize that OUR thinking well about most anything very important (or behaving in any significantly, continuously disciplined manner) IS VERY, VERY LIKELY CONTINGENT ON OUR BEING ABLE TO PUT OURSELVES IN PROPER PERSPECTIVE AND CONTEXT; without true knowledge of true foundations we are terribly weak of mind (the nature of the problems just indicated).
[ In line with the views above, I have sought to UN-FOLLOW many poor and useless Questions -- ones that, nonetheless, go on and on (even for years). I do not wish to in any way encourage these. ]
We have been witnessing a lot of discussion of outcome-based education, with its emphasis on cognitive development and on assessment of learning as per Bloom's taxonomy.
We just wonder about the excellent teachers we had, many of whose students excelled in their professions without going through this type of education and testing process.
Do we need to change our past system through the more rigorous and demanding exercise of outcome-based education?
I'm looking to expand my sources on the topic
The question with which I really wanted to "kick off" this thread:
Why would local (times/spaces) experiences -- any number, considered singly (or reflected on afterward and/or considered together in ways, but still as they were, singly) -- ever be thought to show what we ARE in terms of the Biology of Behavior?
One should not have such poorly contextualized thoughts; but, as I will indicate, this is the nature of a lot of recognized and long-standing philosophy -- typical philosophy, not thoroughly guided by science.
I shall try to indicate why such normal experience could/should NOT be expected to reveal the most-key behavioral development -- the core biological functioning of behavior.
[ FOR THIS ESSAY: think in the terms philosophers most often think in, about a major and central kind of behavior psychologists think about: thinking itself; and think of that specifically AS IT ADVANCES IN MAJOR WAYS, and thus especially in qualitative shifts leading to significant new ways to imagine and conceptualize. ]
The beginning question (at the top of the body of this essay) basically asks: can we conjure up the very nature of a major biological system, THAT BEING THE BIOLOGICAL SYSTEM OF OUR OVERT BEHAVIOR PATTERNS (as it unfolds with ontogeny)? Can we do this just by "force of will" or strong intent, finding exactly that which is key in experience (during ontogeny/development) as it emerges? I say no. That would not be well-adaptive, for one thing; we don't want to rely on OUR precision, but rather on our "body's" ability to HAVE precision: somehow "in" developing some CORE (key aspects) of behavior patterns which, specifically, are the core of new qualitative ways of thinking. Such important new aspects are likely possible because of some added precision (true discriminativeness and realized similarities) "reflected" in some memory capacities, as knowledge develops (or, more accurately, HAS developed). AND THEN, as we, with our capacities, are exposed to "more" in key important situations/circumstances, those faculties 'see' more (we would say, in today's psychology terms: "more enters working memory").
How have Western philosophers done on such matters? How have they addressed this?
Western philosophy: how could one criticize this? Here's a major, general way: a major topic and abiding concern in that field is thought, especially thought about thought; but this, and the other matters pondered, are characterized by precisely the LIMITED phenomenology of OUR thinking (and just what-all that does), AS DONE, IN EFFECT, "LOCALLY".
But what's the problem? What else do we have? Oh, the woe of those who do not know:
We have good knowledge of the nature of, AND limitations of, some central faculties (the Memories) -- good science data here; considering THAT, we have the ability to compare situations/responses, looking for cross-situational/circumstantial differences and similarities, WITH THAT KNOWLEDGE AND PERSPECTIVE GUIDING US. This is NOT the phenomenology of raw experience, though it is clearly related to such experience -- and MUST be related to such experiences -- but it now "tracks" or goes "beyond" the phenomenology of local (times/spaces) experience. This gives us a way, and a legitimate way if we are fully empirically grounded (and know how to stay that way), to detect changes: NOT JUST those DUE TO regular ("local") experiences, but others related to, or due to, other behavior-pattern changes, indicated by "clues" through/by/with our knowledge.
Why might this be important? Because what we ARE, in/with our behavior patterns, may well be beyond any particular experiences AS WE ACTUALLY EXPERIENCE THEM -- beyond the regular (ordinary, usual, normal) PARTICULAR local experiences. Sound strange? It's not. Ask yourself:
Is there any reason we should expect that we are so smart that we can actually see or detect the ultimate mechanisms of the biology of behavior? I think NOT. But, with our abstracting, reflective abilities and good knowledge of major faculties/capacities (and of changes in the content, and in the organization, that occur there), we can get an idea of what species-typical or species-specific qualitative changes might well occur over ontogeny AT KEY POINTS.
That way, we can ask: what sort of changes in behavior patterns (think of: changes in thinking) are in accord with biological principles and consistent with the way biology is (or may be), AS IT COULD OPERATE -- including changes contributing to aspects of behavior that WE, AS SENTIENT BEINGS, CANNOT DIRECTLY (wholly, as-it-is-relevant) "fully" experience in our normal ways? YET I assert also that the biology of behavior CAN be realized INDIRECTLY, by making differentiations and comparisons across key circumstances (of thought -- when the topic is cognitive development, as it is here), SOMEHOW using what we do already know (from behavioral science, and often NOT from normal experience). If all is done in a correct way, we will generate the testable empirical hypotheses.
Though the whole phenomenon (that is, all aspects) of qualitative change may not be something we experience explicitly (or, at least, as something that seems at all notable in thought), we could hypothesize mechanisms of the qualitative change in some of these very aspects of overt behavior. Again, these are not obvious for what they are, because some key aspects of the qualitative developments of thinking are not directly obvious that way (in regular experience): these are likely exactly some of (or some aspects of) those behavior patterns AT THE INCEPTION of the "new" which is central to, and results in, NEW developments and new cognitive abilities. THEN the question should be: what aspects of behavior patterns could be involved which may well be sufficient but not disruptive? Are any of these not only overt, but detectable and in some way measurable, given our present technological prowess? I say yes, yes. Specifically here, I assert: "perceptual shifts", BEING the innate guidance, as aspects of important learning-related experiences (but not typical learning), may be there and may suffice. [ These "perceptual shifts" could well be the development of "time-space-capacity availability" (i.e. basically "GAPS" of a kind in visual-spatial memory due to development, i.e. with the integrations and consolidations THAT come with development and HAVE ALREADY OCCURRED). ]
This would result in "looking" at key aspects/parts and CONTEXTS in new ways (new real concrete 'parts' of situations, or combinations of 'parts' of real concrete situations). BUT: "looking at" does not likely or necessarily REQUIRE that this immediately result in "seeing more"; it just sets up an orientation, used again (and again) in similar circumstances to see "the more", when there is "the more" to see and we are not too much otherwise occupied to see it. [ Here, the "looking at" I am talking about may seem to be that of the scientist doing the studying. Though this may be, in some senses, similar, this paragraph is describing the developing Subject, at major points in ontogeny. ]
About one engaged in good developmental psychology science: while our new way of thinking about things can now be, in a sense, of a "non-local" nature, the relevant aspects of the environment (circumstances) never are; rather they are that which is with us (the Subject) and before us (the Subject) in the concrete real world: either as important context OR as that important context with newly important content.
[ Do not be surprised to see edits to this essay for a while.]
P.S. The above is what I am all about. If you want large papers and hundreds of pages of essays related to this, see:
and
I am doing research on the optimum age for learning a foreign language, based on cognitive development. I analyzed several relevant articles and now I need to merge them, but the findings are complex and differ in style and length. How could I merge them to answer my research questions? Is there any strategy for doing so?
Looking for original documents from work regarding cognitive development
Must be 2 different original primary sources.
If anyone could possibly help with this, I would greatly appreciate it, as I am having difficulty locating any originally written material regarding this matter.
With so many permutations of so many diverse "things", the only way to provide a better general alternative view AND APPROACH will be WITH a full-fledged paradigm shift:
What is offered must have a host of better characteristics and better ways, all related clearly to a better empiricism. [ SPECIFICALLY: I am speaking of/for PSYCHOLOGY -- the number of characters allowed in a title didn't allow for the inclusion of that full phrase (though the same type of thing may at times be required by other sciences). ]
A full-fledged PARADIGM CHANGE: better assumptions; a stricter, well-established/agreeable, actual empiricism, well defined, with a definition true for ALL sciences; better KEY BEHAVIORAL foundations/clear grounding (in terms of behavior patterns) for all cognitive processes; clear NEW observations sought (i.e. major discoveries sought) VIA NEW observation methods; and clear, better-empirical, verifiable/falsifiable HYPOTHESES. This is what I seek to offer with:
https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory-A-Full-Fledged-Paradigm-Shift-for-PSYCHOLOGY (see its major References and the Project Log (Updates) for this Project; the major References, hundreds of pages long, will provide you with a perspective and approach -- a "how-to" FOR all of that. Given its better empiricism, a concrete basis is also provided for General Artificial Intelligence -- all that is found and seen can be "mechanized", is programmable.)
[ This all is VERY serious "business"; it really is an all-or-nothing proposition. If you see major problems with large portions of Psychology throughout its history, you had better "go with" what I present; otherwise the long-standing situation WILL remain the same; I think you may well be able to imagine how and why that could be true (all the various myths of how things [otherwise] could/will come together NOTWITHSTANDING -- these are true myths, not based on any empiricism). ]
Hello, I am a first-year post-graduate student at the University of Bolton. I want to do my dissertation on cognitive development in early adulthood. Can you please help me by giving me some info about your project? I will not share it with anyone. I just want to get some insights on this topic. Thank you for your help.
Basic Psychology research preparation: isn't the time/space (place) to look for specifics when one knows the time/space one is in? It is proper contextualization (with a correct process and order of exploration/DISCOVERY) that gives one the proper contexts. OTHERWISE: disparate elements very well may not "put themselves together" (nor will they clearly or decisively indicate their own important fuller context(s)), and guessing will likely not work; all this is very clear to me; I hope it is clear for you. Think about it: do you really only want to be finding "pieces of the elephant"? It surely is incorrect not to have fully considered the possibilities (actually, probabilities) just indicated: in fact, without proper contextualization, what you do either verges on superstition OR IS superstition (NO MATTER HOW CAREFULLY and seemingly "systematically" YOU DO THINGS -- it's just too mechanical, after your presumptions). YET SUCH superstition is what your presumptions and poorly founded/poorly grounded "assumptions" and general perspectives now give you -- and that is not a good base from which to operate, unless you work for Descartes and cannot make a living without working for him and would starve if you didn't -- but then science would not be the cause, would it?
As I have said before, right now (at this time; at present) you certainly, in no agreeable or reliable or valid way, recognize behavior PATTERNS (and this is easy to see: you think "divinely" in terms of "behaviors"[/"stimuli"], and the word 'PATTERN' either does not appear (which is by far the usual case) or does not have the needed meaning, agreeability, or certainty of definition) -- which IS NOT OK.
My work (as was Piaget's hope) provides a likely major outer "container" (context) -- and YET this 'broad-strokes' "thing", my theory, perspective, and approach, connects with you (in/at the kind of place YOU DO YOUR STUDIES, the "lab") and does so with concrete well-defined, specific testable hypotheses (with all terms strictly empirically grounded, AS IN ALL REAL SCIENCE).
You need to be able to face this; if need be, challenge yourselves to "see" it. For the organism, in reality, the actually used/meaningful/fully involved "environment" IS NOT RIGHT BEFORE YOU (i.e. "before your eyes", as you just happen to look); AND what you DO look at is NOT READY TO BE EXPLORED successfully based merely and crudely on INTUITION and/or a priori models, supposedly to find clear connections and systems (somehow fitting your a priori models).
At this point I will wait for a sign that you can "handle" it. I have provided sufficient guidance to the 1000 pages of essays, doing all I can. You may put questions here (under this Discussion), and IFF I feel there is some clear indication that you have tried as you should and as you might, I may try (or try again) to provide guidance. BUT, at the very least: FINISH EVERYTHING FIRST. Lastly, for now, the key essential HINTS:
(1) DEFINE NOTHING FOR YOURSELF and LET NO ONE ELSE/NOTHING ELSE DEFINE ANYTHING FOR YOU (with the "exception" noted next). In short, INSTEAD, COMPLETELY:
Count on the SUBJECT MATTER (JUST count ON all observations OR QUITE POSSIBLE OBSERVATIONS) for ALL PERSPECTIVE, ETC., AND FOR any further perspective or understandings needed, at this point, and for ALL fundamental understandings -- including FOR THE DEFINITIONS OF EVERYTHING: terms, perspective, concept-terms, or ANY hint of a "model".
(2) Be VERY OPEN-MINDED: JUST look to observations AND possible observations to "see" (you can only imagine that there will be patterns therein -- and I indicate the most likely). And count on NOTHING ELSE (anything and everything else you need should follow from that). ACCEPT EVERYTHING POSSIBLE AND/OR INDEED-POSSIBLE FROM THAT REALM, aka from Reality (sequential phenomenology).
The Method noted under my Profile (and under Research) will not hurt and may help.
Why is there a bias against inductive reasoning and in favor of deductive reasoning in the social sciences?
First, to establish there IS a bias:
It is OFTEN said (really as if it were a defining [damning] condition) that induction or inductive inference is "made under conditions of uncertainty". Then, in describing deductive reasoning/inference, there is typically NO SUCH mention of uncertainty. What? Just because one (or one and one's associates) comes up with a hypothetico-deductive system of thought, _THAT_ SOMEHOW REMOVES UNCERTAINTY??? This is NONSENSE -- YET this [at least] is a very real AND DESTRUCTIVE "Western" bias: the belief that when you develop some system to think with/in, from WHATEVER actual data, then, simply because you are now thinking in/with that internally consistent system, you will develop clear hypotheses _AND_ (as the bias goes) THESE WILL LIKELY BE TRUE (as shown via their "testing" -- no matter what standard of "testing" you have come up with). (Descartes would have loved this.)
Now look at some of the TRUTH showing this is VERY, VERY likely a VERY unwarranted bias, and that it is quite conceivable the opposite is true: decent induction shows more clarity, reliability, predictability, and inter-observer agreement THAN almost all deductive systems.
If in certain circumstances/situations there is a behavior PATTERN which can be specified and has a directly observable basis, then induction can show GREAT inter-observer agreement, _and_ this is sure-as-hell just as strong (actually, likely stronger) a result (a reliable, agreeable result/finding (discovery)) as most any p<.05 result found when testing hypotheses that come out of a hypothetico-deductive system. All you jackasses who cannot think that way should establish a re-education camp FOR YOURSELVES, or have nothing to do with science (other [real] scientists rightfully shun and ignore psychologists at any conference on science for scientists in general: they sense OR know what I am saying).
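To make the inter-observer-agreement claim operational: the standard chance-corrected index of agreement between two coders is Cohen's kappa. The sketch below is generic; the two code sequences are invented for the example.

```python
# Cohen's kappa: chance-corrected agreement between two coders who have
# independently coded the same episodes. The data here are invented.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two observers coding 10 episodes as 'shift' (a perceptual shift seen) or 'none':
a = ['shift', 'none', 'shift', 'shift', 'none', 'none', 'shift', 'none', 'shift', 'shift']
b = ['shift', 'none', 'shift', 'shift', 'none', 'shift', 'shift', 'none', 'shift', 'shift']
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.78; 0.61-0.80 is "substantial"
```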
Yet, indeed, this very ridiculous bias leads people to come up with models where the concepts are NOT all clearly rooted in (beginning in) directly observable overt behavior [PATTERNS]. (I have even read one group of researchers, who wrote a paper on the difficulties of understanding ABSTRACT CONCEPTS, trying to "define" abstract concepts (and thinking) by saying: "I think we should develop a thorough MODEL FIRST" -- meaning NOT after findings/data, but develop the model FIRST and only then look for the "behaviors". This is empirically unacceptable to an extreme. I believe such thinking would make Galileo throw up.) I have argued that a model cannot be good unless ALL concepts ARE rooted in/founded on/based on/stemming from directly observable overt behavior (again, actually: behavior PATTERNS). The fact that so very, very little research is discussed, during the conception of a MODEL (OR afterward), in terms of behavior PATTERNS indicates an absolutely fatal problem (fatal to any hope for a science of Psychology). Still, today, Psychology is Medieval.
This "Western" society is presently (STILL) so sick (crazy -- like Descartes would likely be considered today) TO HAVE ANY POSSIBILITY TO HAVE A SCIENCE OF PSYCHOLOGY.. "Mere" BUT ABSOLUTELY ESSENTIAL OBSERVATIONS (and some associated discoveries) ARE NOT SOUGHT. (I believe if Galileo were here, he would say we have not yet made a decent start for a science of Psychology.)
What is true is that, without proper bases and procedures, we will never, EVER understand important behavior patterns (and what aspects of circumstance(s) are related to them). (I shall not elaborate here, since so many want short answers (and ones damned close to those they have heard/"learned").)
Like other parts of my perspective and prescribed approach, this view is UNASSAILABLE!
Let my other thousand, or so, essays reinforce and trumpet what I have said here (they are all consistent with all my points and with each other, and these essays are here on RG).
P.S. Behavior patterns PER SE are an aspect of Biology, and very likely the recognition and discovery of behavior patterns can by itself (alone) provide a full science. If you always think of "Biology" as something else, then recall the re-education I have suggested.
Wouldn't experimental psychology (the "lab" setting) have a necessary bias AGAINST the existence and availability of some SKILLS, and against any thinking across/about multiple circumstances?
I contend: there are some skills, developed (or discriminated) across or between circumstances, that develop over more time and/or more circumstances (usually both) than can be detected or manipulated in the "lab" (using presently used procedures, at least). AND there may well be thinking with concepts FORMED (naturalistically) ABOUT existing or not-existing "things" AND/OR relationships (relatedness, or NOT), which involves mentally comparing [representations] between situations/circumstances -- very important in REAL, ACTUAL conceptualizations and thinking (in real "internal" phenomenology, though based on ACTUAL EXTERNAL SITUATIONS/CIRCUMSTANCES that could be seen if OBSERVATIONS were more realistic __and__ [relatedly] imagination about imagination were more reasonably thorough). WE CANNOT SEE THIS (presently); we may NOT MANIPULATE THIS action by the organism IN THE LAB.
There is no doubt that we (including AT LEAST older children) must, can, and do these things, BUT WE CANNOT DETECT (measure) (yet, at present) any KEY behavior patterns related to such activities, AND we cannot, and will not be able to, fundamentally manipulate such activities.
It is quite possible (if not likely) that MOST HUMAN THOUGHT, realistically OR naturalistically considered, IS THEREFORE NOT THUS CONSIDERED (at all, or at all realistically) IN THE "LAB". (Thus the existence of the homunculus (or homunculi) of executive control and all the "meta"-this-es and "meta"-thats -- NEITHER strange type of concept being NECESSARY IN ETHOGRAM THEORY.)
This IS NOT A LIMITATION OF SCIENCE or OBSERVATION, but a limitation of the lab and of typical experimental psychology.
Based on testable particular hypotheses from Ethogram Theory:
I should add that [still], based on the nature of the Memories, at least THE INCEPTION of each new qualitatively different level/stage of cognition would occur at some KEY times and "places", "locally" in circumstances; i.e., it could be seen within the time/space frame of the lab AS DIRECTLY OBSERVABLE OVERT BEHAVIOR PATTERNS -- and these discoveries could be made by using new, sophisticated eye-tracking (and, perhaps, computer-assisted analysis) technologies (these basically being our "microscope"). BUT you would have to know what to look for, in what sorts of settings, _AND_ (at the same time) be able to recognize the KEY junctures in ontogeny and in the development of learnings at which THESE shifts (starting as very basic and essential "perceptual shifts", then becoming perceptual/attentional shifts) WOULD OCCUR.
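As one purely hypothetical operationalization of "knowing what to look for": compare a subject's distribution of dwell time over areas of interest (AOIs) across sessions, and flag sharp redistributions as candidate perceptual/attentional shifts. The AOI names and the threshold below are invented for this sketch.

```python
# Flag candidate attentional shifts as sharp changes in the dwell-time
# distribution over areas of interest (AOIs) between sessions. Illustrative only.

def dwell_proportions(dwells):
    """dwells: dict AOI -> total dwell seconds. Returns proportions."""
    total = sum(dwells.values())
    return {aoi: t / total for aoi, t in dwells.items()}

def shift_score(before, after):
    """Total-variation distance between two dwell distributions, in [0, 1]."""
    p, q = dwell_proportions(before), dwell_proportions(after)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in set(p) | set(q))

session_1 = {'object_parts': 5.0, 'whole_object': 1.0, 'background': 2.0}
session_2 = {'object_parts': 1.5, 'whole_object': 5.5, 'background': 1.0}
score = shift_score(session_1, session_2)
print(f"shift score = {score:.2f}")   # 0.56 for these invented sessions
if score > 0.3:                       # arbitrary illustrative threshold
    print("candidate perceptual/attentional shift -- inspect this juncture")
```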
Yes, less reading -- if students of Psychology simply require that ALL they bother to read (or read about) in this "discipline" be that in which behaviors are discussed clearly, and in clear terms OF BEHAVIOR PATTERNS -- basically just THAT (such patternings) BEING [substantially] ALL. (History will show that the rest -- unless you are an advertiser, or maybe a "social psychologist" -- will go down the toilet.)
P.S. "Models" not integrally based-on and/or directly facilitating the seeing-of the ACTUAL BEHAVIOR PATTERNS DO "NOT COUNT". "Mechanisms" not integrally of a biological (and behavior patterned) nature DO "NOT COUNT". You may skip those.
Hello, Researchers,
Is anyone studying the costs and benefits of "meme trading" on social media? Is the effect more negative? That is, causing fewer neural connections to be formed because passing on "junk memes" is such a low cognitive-level activity. OR is there some burst of neural development that comes from passing on many slogans and posters?
I wonder if the cognitive-developmental effects of Facebook and other social media "memes" are being studied.
Older adults tend to use "boilerplate" language when they tell stories. Key words will trigger a story told with identical phrases. These older adults are not creating new neural connections, or very few, when they repeat the same phrases. Do memes function this way and what are their effects on much younger people?
If you use anything from the Memes Rule the World dataset slide show attached below, I would appreciate a citation. The citation below is in MLA format.
McMillan, Gloria. "Effects of Facebook Memes on the Brain." ResearchGate. Accessed 28 July 2018. https://www.researchgate.net/post/Effects_of_Facebook_Memes_on_the_brain
Isn't grounding all interactions (and our understanding of particular interactions) best done by better understanding the Memories AS (being) EXPERIENCE ITSELF? I see this as one of the 2 consistent common groundings for properly coming to an understanding of the concepts we come to have as beings; this includes the development not just of bare simple concepts but of contingent SETS of such concepts, AND it includes that which comes of the developed and developing Memories and allows for abstract thinking -- abstract concepts and abstract processing. Let me elaborate on this first type of thing:
First, realize: by the definitions of the Memories (our basic types of memory, all rather well defined by EXISTING research already), there is no way not to see EXPERIENCE as the operation of the Memories themselves (and THAT is EXPERIENCE ITSELF, literally true BY THE DEFINITIONS in modern perspectives and research). AND CONCEPTS MUST ALL BE BASED ON THIS. Thus, as experiences "grow" and as applications of our concepts (defined by interaction with environments: social and/or otherwise, linguistic and/or otherwise) become (to the extent that they can) more widely seen as relevant and more widely applied, this simply occurs by way of the simple forms of associative learning (the definition of such FORMS being something that can be well agreed on). NOTE: all this will eventually suffice only WITH the second set of required groundings "emerging" to prompt MAJOR developments in ontogeny (see below) -- those influencing attention and learnings A LOT. Yet simple associative learnings do seem to partly work (for a lot of the more bit-by-bit development), given evidence OF the existence of concepts/representations/ways-of-looking in the first place (such evidence is there, at least at later levels of child development). _AND_ these very simple associative learnings are ALL that would be needed at the major points in development, in addition to the base perceptual/attentional shifts (described below). In a sense, yet still, they will be THEN AND THERE all that's needed -- those simple learnings STILL being ALL that is necessary to "put things together", even WHEN THE SECOND SET/TYPE OF MAJOR FACTOR IS FOUND AND SEEN (and as and when such shifts are occurring). Yet the above alone would not provide a complete picture of human learning and development. AT BEST, the Memories as they are at any point, plus associative learnings, are still just "half" the picture (as already indicated). BUT: what's the other "half", at least more specifically/functionally?:
These other major necessary factors are basically capacities (or capacities within capacities, if you like) developing with very subtle innate guidances (which are not unlikely, and certainly possibly, at least for a time, quite situation-dependent); these, of course, lead to some of the most major developments of the Memories and, HERE, to qualitatively new learnings (still combining with "the knowns" and with each other JUST THROUGH THE SIMPLE ASSOCIATIVE LEARNINGS). These innate guidances are at first just sensing more: sensing _THAT_ which is THERE IN any given concretely definable situation (where more adaptation is needed). This is reliant upon, and given by, the way our Memories have already developed (given our past learning and earlier innate guidances, the products of which have become well applied and consolidated (etc.), all of which yields "the time(s)" for some new types of learning). And now (from the good processing and consolidation, and from discriminations here, perhaps just associative learning as dis-associations) this gives us, in a sense, a new or greater capacity in working memory (through more efficient "chunks" and/or some situation-specific "trimming" of the old chunks, both WITH CHANGES IN OUR _WAY_ OF CHUNKING; and realize: this may not preclude other adaptive reasons for an adaptive increase in the effective capacity of working memory (WM)). The details of the nature of the periodic innate guidances:
What is newly, or at least now truly, sensed -- sensed as "the-more", sensed (and at least glanced at, if not gazed upon) in a situation or situations -- will lead to new perception of at least something more in the scope of "what's there". This will rather quickly go to perceiving more, and then to perceptual/attentional shifts (applying some of our past-developed categories and processing to the new "material" -- AND, at such also-adaptive points, offering more "material" to refine or moderate one's responses/interactions). Here there will be more in WM, and thus more that can be "associated with" via the simple forms of associative learnings (now with some new content: new parts and likely new wholes). These developments might be quite situation-specific, at least at first, but they may develop into concepts of rather great scope -- observations, and other research which may well be possible, are the ONLY things that will clarify all this. All we can say is that these will be some sort of BASIC, KEY, species-typical cognitive developments (with their inceptions, as indicated) during ontogeny [(birth to 18 years old; minimally 5 MAJOR hierarchical levels or stages are historically seen (with several modern theorists hypothesizing phases within each level); all this can be seen in the overviews of the great classic theories, still the most prominent in textbooks of General and Developmental Psychology)]. This very outline of this sort of process has NO limits (except human limits), and it includes the abilities to know, have, and use abstractions, INCLUDING contingent abstractions (holding true in only some sets of apparently similar circumstances; AND, eventually, with ontogeny and the development of sufficient abstract abilities, ALSO enabling the ability to think and classify across previously differently-seen [i.e. seen-as-different] circumstances -- putting such complexes together in a concept -- this sort of thing including the most sophisticated abstract concepts and processing there is). In some ultimate ("final", "rock bottom") analysis, this all is possible because of demonstrable development and changes in the Memories, WHICH CAN BE RESEARCHED (as other characteristics of the Memories HAVE BEEN researched to date); AND the inceptions of new MAJOR LEVELS (those being with the "perceptual shifts" ...) can also be directly observed and researched, using the new eye-tracking technology (and ancillary technologies) -- and this will greatly guide one to fruitful research on the Memories.
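Since the paragraph above leans on chunking as a route to greater effective working-memory capacity, here is a toy illustration (my construction, not from the text) of that one mechanical point: re-encoding a stream with learned multi-item chunks leaves slots free under a fixed limit.

```python
# Toy demonstration: learned "chunks" raise effective working-memory capacity
# under a fixed slot limit. All names and the slot limit are illustrative.

WM_SLOTS = 4  # an illustrative fixed slot limit

def encode(stream, chunks):
    """Greedily re-encode a sequence using known chunks (longest match first)."""
    out, i = [], 0
    ordered = sorted(chunks, key=len, reverse=True)
    while i < len(stream):
        for c in ordered:
            if stream[i:i + len(c)] == list(c):
                out.append(''.join(c))  # one known chunk occupies one slot
                i += len(c)
                break
        else:
            out.append(stream[i])       # unknown item: one slot each
            i += 1
    return out

stream = list('FBICIA')
print(encode(stream, chunks=[]))        # 6 slots needed: exceeds WM_SLOTS
print(encode(stream, chunks=[('F', 'B', 'I'), ('C', 'I', 'A')]))  # 2 slots: fits
```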
The reasons, likelihood, justifications, and better assumptions involved in having this viewpoint and understanding, AND the qualitative changes that are developed this way (basically starting with key, adaptive "perceptual shifts"), are what I spend much of my 800 pages of writing on: 200 pages written some decades ago, and some 600 pages written just in the last three years -- a lot of the latter being the job I did not finish back in the late '80s (and really had no reason to pursue until new technologies, esp. eye tracking and related technologies, came into existence to allow for testing my hypotheses). I have also taken great pains in these latter writings to contrast this perspective and approach as thoroughly and completely as I could with the status quo perspectives and approaches in General Psychology and Developmental Psychology, and to show all the ways this [what I have dubbed] Ethogram Theory is better in so many, many ways, including in its basic foundations -- clearly more empirical (as directly as possible) than any perspective and approach heretofore.
I both show in detail what is wrong with the "old" and show what is much more likely correct and useful -- and more than plausible (and biologically consistent and plausible) -- through this new general view. (Again, I provide related testable hypotheses -- verifiable/falsifiable.)
You will be able to see this new approach as empirically better than any other. Related to this is the great benefit that the FIELD of study is ALL clearly and firmly based (grounded/founded) on just 2 "things": (1) directly observable KEY overt phenomena (behavior PATTERNS, here in Psychology) and (2) certain clear, directly observable, and present aspects of circumstances/situations (aka "the environment") active in KEY past developments and/or present now. This is simply the return to the original and intended definition of Psychology _AND_, frankly, is THE ONLY WAY TO BE BEST-EMPIRICAL. (Think about it: NO MISSING CONNECTIONS.)
READ:
and
and
(see the Project Log of this Project to see many important Updates)
ALSO (not among the 200 pages of major papers and the 512 pages of essays in my "BOOK", to which you have already been directed): the following link gets you to 100 more pages of worthwhile essays composed after the 512 pages:
https://www.researchgate.net/publication/331907621_paradigmShiftFinalpdf
Sincerely, with respect,
Brad Jesness
What is the nature of the fundamental requirements for one to conceive and develop Artificial General Intelligence (AGI)?
I would think that, at the core, there would be a modeling of the great adaptations of the actual human itself: its body, its senses, its responsiveness-es, its abilities, AND the abilities in cognition (representation and processing) that develop progressively and over EACH stage of ontogeny (largely: ages 0-18) -- with the latter influencing not only thought but other responsiveness-es (e.g. the emotions; in other words: emotional development).
Now, you can start and stay with a good understanding of the human OR you can see the human as you see it and actualize your own hypothetico-deductive systems to have it progress in all relevant behavioral abilities.
Let's say you pick the first of these options (which I think is wise). Then what you need is a basic understanding of the human: the human body, human senses, human responsiveness-es, and human abilities (INCLUDING those making "qualitative leaps" in their development during ontogeny -- these latter making up much of the "cognitive system", which can be conceived of as the most-major AND central organizing system (as Piaget did) for all significant behavior patterns). Given all this, what else is needed?:
It should be clear that ALL faculties/basic abilities and responsiveness-es AND representation-and-thought abilities (including those "higher abilities": which "emerge" , unfold and develop with ontogeny) MUST BE GROUNDED CONCRETELY, specifically: clearly related to directly observable overt phenomena (behavior patterns). ALL OF IT.
Unless so concretely "seen" (seen as at least related to key, clear, overt or overt-and-developing behavior patterns), it will not be possible to "mechanize" this (here: program a machine) without being one with god-like insights. In other words, there will be NO AGI WITHOUT at least a basic (TESTABLE) understanding of ALL these fundamental behavior patterns and their concrete "anchors": BOTH THOSE patterns continuing or now presently active, AND those key behaviors-and-circumstances active _AT_ the KEY POINTS of KEY DEVELOPMENTAL HAPPENINGS -- these creating new unfolding, lasting, and expanding representations and abilities. (These latter are also understood as clearly relating to some most-important, directly observable concrete phenomena (behavior patterns, with corresponding situational aspects) and are thus also "anchors"; by virtue of some clear, significant ongoing/continuing effects, they CONTINUE to be "anchors".) A sketch of what such stage-wise unfolding might look like in code follows.
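As one speculative sketch (mine, not a specification from the author's Projects) of "stage-wise unfolding gated by overt anchors" in a program: an agent whose ability repertoire expands only when a detectable trigger event, standing in for a "perceptual shift" at a stage inception, is observed. Stage names, abilities, and the trigger rule are all invented.

```python
# Speculative skeleton: abilities unlock stage by stage, each stage gated by an
# overt, detectable trigger event ("perceptual_shift"). Entirely illustrative.
from dataclasses import dataclass, field

@dataclass
class StagedAgent:
    stage: int = 0
    stage_abilities: tuple = (
        ('track_objects',),                                        # stage 0
        ('track_objects', 'compare_parts'),                        # stage 1
        ('track_objects', 'compare_parts', 'relate_situations'),   # stage 2
    )
    history: list = field(default_factory=list)

    def abilities(self):
        return self.stage_abilities[self.stage]

    def observe(self, event):
        """Log overt events; advance one stage when a shift trigger is seen."""
        self.history.append(event)
        if event == 'perceptual_shift' and self.stage + 1 < len(self.stage_abilities):
            self.stage += 1

agent = StagedAgent()
for e in ['fixation', 'fixation', 'perceptual_shift', 'fixation']:
    agent.observe(e)
print(agent.stage, agent.abilities())  # 1 ('track_objects', 'compare_parts')
```

The point of the toy is only the gating structure: everything the agent can do is tied to stages whose transitions are, in principle, observable events rather than hidden parameters.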
Now, does this mean the AGI developer needs no good thinking of his own? NO. Unrefined inductive understandings (guesses) may be tested. And proximate causal-type relationships can be hypothesized between THIS behavior pattern and THAT (even using some good hypothetico-deductive system, BUT this system must be AT LEAST PRINCIPLED, IN TERMS OF LIFE (BIOLOGICAL) PRINCIPLES (a basic one, e.g.: homeostasis)). [(I also suggest using the terminology of classical ethology, presented as-to-be-used in my earliest long paper.)]
The great news, of course, is that AGI people can test their overall system, major aspect by major aspect, over and over, and thus be much facilitated in making corrections.
Now, for what may be your final question: where does one find such a wholly empirically based, concretely based understanding of behavior patterns/responses TO BEGIN WITH? Answer: I do my best to offer such a system through my "Developing a Usable Empirically-Based Outline of Human Behavior for FULL Artificial Intelligence (and for Psychology)" PROJECT and my "Human Ethology and Development (Ethogram Theory): A Full-Fledged Paradigm for PSYCHOLOGY" PROJECT. And I believe that, considered in the most-empirical and most-concrete terms, the writings associated with these Projects are the best offered today. **
Start at my Profile, Brad Jesness and then look for those 2 just-named Projects (and see all Log Entries, aka Updates under them). Also see and read:
and
and, when reading
, also see the Project Log of this Project to see many important Updates.
P.S. Plus, for a final 100 pages of recent essays (not among the 512-page collection of recent essays to which you have already been directed), also very worthwhile and composed after that 512-page Collection, see this pdf:
https://www.researchgate.net/publication/331907621_paradigmShiftFinalpdf
** FOOTNOTE: ** IMPORTANT NEWS **: I recently presented summaries of my "system" that clearly indicate 2 types of basic and likely needed (and real) CONSTANTS: some constancy of our Memories faculties (on the more purely endogenous "side") AND some constancy OF THE PROCESSES always involved in learning and development (most clearly and presently involving aspects of the external world), the latter simply based on the fact that the simple, well-defined FORMS of associative learning are intimately and always involved in all behavior-pattern change (NOTE, here, that what is constant IS THE FORM, which otherwise differs enough in content to be seen (or have to be seen) as "different"). (Also NOTE: the constancies of the Memories seem also at least mostly a matter of "forms", though some clear abiding numerical limits (properly delimited) may [always] apply here and there (everywhere?) -- and thank goodness for the latter: we likely need, somewhere in the system, some complete certainty, i.e. to some numerical degree.)
For one reason, and maybe a more direct one, it has to do with issues of the nature of visual working memory and visual long-term memory (very important, general issues). For a great Article on this, see:
Now, in order to use my writing to best effect, let me basically quote a letter to the author (quoting myself):
First, the letter's Title: " [From where] do some top-level discriminations (familiar/recollection) [come]"; now continuing:
"Dear Professor Mark W. Schurgin
I am a "top down" guy (the most top-down there is) and a complete empiricist and guy that defines Psychology (or at least his Psychology) in terms of behavior patterning and environmental/circumstances aspects ONLY -- i.e. these environmental.../behavior patterns aspects IS ALL . I am a neo-Piagetian and believe that, with new technologies (e.g. eye-tracking and ancillary machine processing), we can literally discover the concrete bases (i.e. directly observable overt behavior patterns in situ), AT LEAST at the inception of each KEY new set of significant behavior patterns related to major cognition and major cognitive processes developments. I believe thus we can actually identify the bases of qualitative shifts in levels/stages [(i.e completing Piaget's theory (basically, his Equilibration TYPE 2 -- the "balance" between stages) by finding the primary bases of stage/levels qualitative changes -- and all most empirically: in the end, I provide PIVOTAL concrete testable (verifiable/falsifiable) specific hypotheses TO PROVIDE THE real FOUNDATION of THIS NEW THEORY)].
To put it in other words, Ethogram Theory tells and shows a way to find the concrete grounding (foundations) of abstraction and abstract thought itself -- these major cognition and cognitive-processing phenomena.
This, indeed, would be one "place" (quite literally) where some major bases of familiarity and recollection BEGIN. To come to an understanding of my view/approach, a rather substantial amount of reading is involved and necessary (a LOT of it with respect to its foundational differences from some modern baseless assumptions (replaced in Ethogram Theory) and, correspondingly, to contrast it with modern approaches to research; the rest of the writing is to contextualize, as clearly as possible, where/how these KEY changes occur IN BEHAVIOR PATTERNS (the nature and development of the Memories are also always involved), AND I OUTLINE THE NEAR-SPECIFIC NATURE OF TESTABLE HYPOTHESES (which finally comes up in my writings, where I most clearly "channel" biology)). 800 pages: two hundred of the pages come from the original 1985 treatise AND from two other major old papers; the other 600 pages are recent essays written in the last 2-3 years (necessary to put the Theory in context, as indicated, and then to get to rather specific hypotheses).
Anyway, here is how to get to my writings [(someone's reading, understanding, and "belief in" this system may be essential for real progress in Psychology, and for it finally becoming a true science (as empirical as any); it is "at your feet", and those of just several select others, that I place this Theory and all the related writings, for a chance of beginning the seeking of much more clarity and of major advances in Psychology; THAT IS IMPORTANT)] :
See, AND READ, the linked writings (and see the Project Log of this Project for its many important Updates).
Sincerely, with great respect,
Brad Jesness
P.S. The main reason for this P.S. is to direct you to the final 100 pages of recent essays (not among the 512 pages to which you have already been directed); these are very worthwhile essays composed after those 512 pages:
https://www.researchgate.net/publication/331907621_paradigmShiftFinalpdf "
(end of quote of myself)
Do you now understand some major reasons WHY Psychology should CARE about Ethogram Theory?
Dear colleagues, which one is more significant?
best
Since a Generalized AI has no human brain, it must be aware of all pertinent "external" behavior patterns and behavior-pattern markers AND effective environmental aspects: Ethogram Theory, with its body of 500+ pages of recent supporting essays (following some early, coarser, yet must-read, foundational papers), provides just this, focusing only on clear behavior patterns and environmental aspects AND AS THEY UNFOLD WITH ONTOGENY -- ALL with "external" (directly observable overt) aspects AND environmental contingencies (including sophisticated Memories, for context; YET: ALL aspects, in good part, at-least-one-time-seen or clearly indicated OVERTLY).
This is why AI people should look at my Ethogram Theory, etc., AND the related General AI Project: https://www.researchgate.net/project/Developing-a-Usable-Empirically-Based-Outline-of-Human-Behavior-for-FULL-Artificial-Intelligence-and-for-Psychology
But also see the Ethogram Theory Project and all its References and Updates (in the Project Log): https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory
For General AI to use Psychology, this is the only choice. It is also a clear and parsimonious choice, fully empirically based/founded/grounded (and complete, for having/providing the full basic foundational "containing" cognitive-developmental hierarchical system).
ALSO: This is completely good for Psychology as well -- for a good perspective, a good approach, and good hypotheses -- BETTER THAN THIS FIELD HAS NOW. I now turn to AI because Psychology is not sufficiently empirically based or "driven" to be this way. (I turn to others who must understand and "see" behavior patterns correctly and have good empirical, testable hypotheses, such as I provide; perhaps, again, Psychology will find itself FOLLOWING information-processing.)
By the definitions *, the Memories comprise EXPERIENCE ITSELF: why not more research?
I believe it is because of the prevalence of DUALISM in our societies: this leads to "it" (the Memories) being considered a "separate faculty" !
* FOOTNOTE: The definitions are quite well-established already, through empirical research -- certainly some of the best in all Psychology.
I made this a Discussion, rather than a Question, because I am not really asking a Question; I am just looking for your thoughts (philosophers need not apply). I seek to help change this disgraceful and irrational problem situation. In my view it would be stupid and ridiculous, if it were not so very neglectful and damaging (of our understanding(s)).
If I had/have to make a Question, I guess it would be to ask for further information on what is wrong with Psychology people. Dualism is a silly philosophy, not a life-style.
It seems like, to these "academics", etc., LIFE IS LITERALLY JUST A GAME.
I'll start by repeating the title, above: What psychologists have not yet realized is that eye-tracking technology, etc. ALLOWS FOR AN _OVERALL_ MORE EMPIRICAL APPROACH !!
The new technologies are not just a tool for the "empiricism" they already practice!
I have described, and formalized into concrete, now-testable hypotheses, that which would establish the most empirical grounding for "abstract" concepts. More empirically grounded and founded than anything heretofore, without a doubt -- and the view/approach is biologically LIKELY, and this approach to research (on the new CONTENT it is good for) has not yet been tried. It involves "believing" nothing (actually, believing MUCH less "on faith"); it really involves simply more empiricism, more direct observation [specifically: discovering the DIRECTLY OBSERVABLE OVERT behavioral foundations of the qualitatively different levels/stages of cognitive development -- and HOW __LEARNING__ ITSELF (presently often ill-defined) CHANGES WITH THIS NEWLY OBSERVABLE PHENOMENON, and the consequences, ALSO].
I have tried to clearly outline (ending with most-empirical, testable hypotheses) the inception of abstract concepts with "perceptual shifts" (thus providing them a concrete, in-the-world foundation).
Again, the theory has to do with "perceptual shifts", NOW -- presently (at this point in history) -- able to be SEEN with new technologies: SEEING what subtle overt behaviors likely occur at the inception of each higher level of thinking during ontogeny. The outlook and approach is a cognitive-developmental theory -- i.e. of human child development -- and for finding more major parts of who we all are.
You might well want to take a look:
The perspective and approach especially and specifically has to do with: perception and, quickly/soon after that, attentional and gazing changes which likely occur at the inception of new qualitative cognitive developments (with ontogeny) (and which literally, sensibly, set them off).
The following theory, with its most-empirical and testable hypotheses, indicates (clearly, with as good and totally empirical "guesses" as are now possible) the nature of these perceptual/attentional shifts accompanying (actually: "starting off") major qualitative changes in cognition:
Here it is. Minimally, read both of the major writings: not only "A Human Ethogram ..." itself, BUT ALSO the much, much more recent essays (these much later essays filling in some of the aspects of the treatise not originally provided, as stated directly in "A Human Ethogram ..." itself).
This theory does a LOT else correctly (unlike other theories), abiding by necessarily applicable principles and seriously trying to have a perspective and approach with ALL the features and dimensions a science theory should have. It is parsimonious. It uses the well-developed vocabulary of CLASSIC ethology (not the bastardized 'ethology' of today).
Psychologists may ignore this, but that would be just ignoring a most-empirical way of study (and ignoring some of the most-empirical, most-testable hypotheses). In short, it is scientifically WRONGFUL for psychologists to ignore this.
P.S. ALSO: Because all of this is so much more concrete, this theory of development and changes in learning should be THE theory of most interest to those trying general artificial intelligence.
I am thinking of Psychology researchers and theorists. Is it their duty to science to investigate the possibilities of important new tools and possible discoveries that involve empiricism at its best: attempting direct observation of possible/likely important overt behaviors, heretofore not seen?
For example, IN PARTICULAR:
Hello
Can anyone suggest any labs or places that are working in the area of culture and cognition (effects of culture on cognitive processes / culture and social cognition / cultural differences in cognitive development) in India, or in association with India?
I am Rahul (Masters in Neuropsychology) from India, actively looking to associate with projects working in this area.
thanks in advance for suggestions.
Hello RG community,
I was wondering if there is a test to assess children's working memory in a group setting (classroom). The sample is composed of 3rd-to-5th-grade Italian children.
Thank you in advance for your help,
Antonio
Models and [ non-concrete * ] Mechanisms: Don't they seem to have the same problems with respect to actual phenomenology and what is real?
Maybe they are temporarily necessary, but should be avoided and should be bettered (AND REPLACED) as good research progresses. If this betterment does not happen, you are not doing at least some of the essential research (likely observational). PERIOD.
Isn't it possible that the best understanding is just the knowledge of, and understanding of, SEQUENCES? (Of course these can be "made sense" of, within the "whole picture", i.e. the greater overall understanding -- and there is "purpose" or direction to each behavior pattern [in the sequences].)
ALL this increases the key role (and sufficiency) of all the simple [basically known] sorts of associative learning ALONG WITH OUR SEVERAL SORTS OF MEMORIES. "Outside" of innate guidance WITH PERCEPTION/ATTENTION (including innate guidance in later stages/periods of development, with behavioral ontogeny) (this innate guidance being WITH the simple learnings and Memories) AND their consequences for behavior patterns: the well-understood simple learnings may ultimately provide "the 'glue' for 'the whole story'" otherwise -- i.e. other than the key "driven", directly observable sequences **.
AND NOTE: NO need whatsoever for special sorts of theorist/researcher-defined types of learning, e.g. "social learning", etc. NO need for ANY of the "metas", presently a major homunculus.
This perspective "conveniently" has the advantage of being conceptualizable and of being able to be clearly communicated -- requirements of ANY good science. It is within our abilities (as adults, at least at particular times) to actually 'see', i.e. to have and to provide REAL UNDERSTANDINGS. In my view, the other "choices" seem not to have these distinct characteristics (so, the perspective above is either true OR we all may well be "screwed").
* FOOTNOTE: "Concrete" meaning: with clear, completely observable correspondents; AND, likewise for models, with any promise (of progress and replacement).
** FOOTNOTE: "Directly observable" meaning: can be seen (and agreed upon AS SEEN) IN OVERT BEHAVIOR PATTERNS (AT LEAST AT KEY TIMES, e.g. with the inception of new significant behavior patterns).
--------------------------
P.S. This (above essay) may seem "self-serving", since I have a theory putting all of the positions/views above TOGETHER cogently and with clear, testable/verifiable (refutable) HYPOTHESES (using modern technologies: eye-tracking and computer-assisted analysis). See especially:
https://www.researchgate.net/publication/286920820_A_Human_Ethogram_Its_Scientific_Acceptability_and_Importance_now_NEW_because_new_technology_allows_investigation_of_the_hypotheses_an_early_MUST_READ
and
https://www.researchgate.net/publication/322818578_NOW_the_nearly_complete_collection_of_essays_RIGHT_HERE_BUT_STILL_ALSO_SEE_THE_Comments_1_for_a_copy_of_some_important_more_recent_posts_not_in_the_Collection_include_reading_the_2_Replies_to_the_Comm
AND
the Comments under the second-to-the-newest Update on the Project page: https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory (for EVERYTHING)
That is the question, lover of life, lover of others, empiricist or scientist; thus finding the actual sequences which are the causation(s) (aka the proximate causes). Better and better 'seeing', less ignorance ..., less confusion. Said also to be with less wanting and/or greed, and with less suffering, as well. And as more is found, more opens up. Could anything else be the case? [Such conclusions can come from checking the research on the Memories which, as they are (by definition), must be experience itself.]
Let me give an example of what I speak of above (an example in my field: the very important and most vital field of developmental psychology (very much 'including' ontogeny)). In Psychology, what I am talking about is: proper perspective, properly viewing Psychology ("psychologizing" one's psychology, in a proper way, if you will) and THUS 'seeing' the ways there are of realistically (and rationally) AND thus actually having/doing conceptualizing and thinking (<-- those very things) as they really are (and of getting one's own and one's Subjects' real limits and abilities defined). In attempting this in Psychology (or in any science) one must "believe in" and maximize empirical grounding (all that is possibly there and detectable), showing EVERY SORT OF BEHAVIOR to be related clearly, and in important ways (at least at its inception), TO directly observable particular overt behavior patterns of the Subject *. AND, this is BY DOING IT (for the researchers and the Subjects) in the REAL terms of the basic capacities of their species-typical Memories (also knowing and considering the hierarchical relationship of more-adult concepts and thinking, compared to those of children) -- KNOWING ALL THAT, and using ALL THAT, is required before doing decent psychology that will lead to real, lasting, and progressive discoveries on the development of cognition (that being central to the other major behavior patterns that develop). [It may be hard, but you will get used to it; and it is necessary; AND, actually, it is likely less hard to do than the 'theoretical', unjustified "contortions" presently done today (which inevitably "dead-end").]
If you can but only agree, please read my writings (almost all -- 1000 pages' worth -- available through ResearchGate). [NOTE: My writings include specific hypotheses for the direct observation of the overt behaviors central to thinking and concept development -- each of the major inceptions -- all found/put into the proper contexts (and "spelled out" as different from, and as alternatives to, today's perspectives/'procedures' -- these latter also "spelled out", and shown in detail, as lacking and incorrect).]
* FOOTNOTE: This perspective and rightful attempt (approach) AT/for DISCOVERIES is exactly what I outline as clearly as possible in my writings [ "as clearly AS POSSIBLE", that is, before the new, CLEARLY-PRESCRIBED, needed research, with clear testable hypotheses, is done (i.e. before having those hypotheses indeed tested) ].
Dear Friend, I would like to be associated with the Indonesian AJT Cognitive Assessment Development Project, as one of my Ph.D. students is doing research related to Media and Cognitive Development. I am his research guide. My Ph.D. was on "Impact of Micronutrient Malnutrition on Child Development - A Diagnostic Study", which I completed in 2007, and I am now working as a Research Officer in a college. Kindly let me know. Thank you.
I'm currently running a study looking at cognitive development during adolescence. We're recruiting two age ranges with modest sample sizes (N=25/group). I've posed the question to several colleagues, but no one seems to have a good answer. Is it acceptable to include two siblings in a cross-sectional study that's not intended as a sibling-controlled study? Or is it common practice to include only one child per family? One set of siblings might not have a huge effect on the results, but I can see where a group-level imbalance of related participants might have unintended consequences for the variability of the sample.
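(One practical way to probe this worry is to fit the group comparison twice -- once ignoring family structure, once with family as a random effect -- and see whether the estimates move. A minimal sketch follows, assuming hypothetical column names (score, age_group, family_id) and the statsmodels library; it is an illustration, not a prescription.)

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child, with a cognitive
# score, an age-group factor, and a family identifier (these column
# names are assumptions, not from the original study).
df = pd.read_csv("adolescent_study.csv")

# Naive model: treats all children as independent, ignoring that
# siblings share a family.
ols_fit = smf.ols("score ~ age_group", data=df).fit()

# Mixed model: family as a random intercept, so sibling pairs no
# longer count as fully independent observations.
mixed_fit = smf.mixedlm("score ~ age_group", data=df,
                        groups=df["family_id"]).fit()

print(ols_fit.params)
print(mixed_fit.summary())
```

If the fixed-effect estimates and standard errors barely change between the two fits, a sibling pair or two is unlikely to distort a cross-sectional comparison at N=25/group.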
I was thinking of attachment, new technologies in early years, outdoor provision, or cognitive development in early years.
Hello, dear friends.
I have a patient (a boy, 6 years old) with a severe epileptic syndrome, which leads to seizures with a tendency to status epilepticus. I have observed him for a long time, and I've noticed one interesting phenomenon. After the severest seizures, which end in emergency service, his physical and mental development goes up exponentially.
For example: a week ago he tried to learn to write the Cyrillic letter D (see image 1; the upper inscriptions are the examples). Very poor. Then came a severe seizure, and he didn't try the exercises at all for a week; today he has taken the paper and written the letter on his own initiative. Now it looks like the very good inscription of a preschooler (see image 2).
Another example: a week ago he tried to read, but couldn't understand how to pronounce syllables. Today he amazed his parents as he started to pronounce good syllables of the Consonant-Vowel pattern. It looks as if a new grammatical rule arose in his neural nets.
And there are lots of other examples of such behaviour from the beginning of his illness, and not only in this particular case. What could be the reason for this? Could you please give me some clues, or a link on what to read on this theme? Thank you.
Isn't it pure psychoticism to have the most fundamental unit of analysis of a presumed foundational behavior pattern of AN organism INCLUDE MORE THAN ONE ORGANISM'S BEHAVIOR necessarily (or really AT ALL (ever), FOR THAT MATTER)? Yes, yes, yes. YET the following recent papers INSIST ON such an explanation NECESSARILY (as necessary -- i.e. no other "reasonable" way):
Enactive Mechanistic Explanation of Social Cognition
and
Mechanistic explanation for enactive sociality
They claim 25 years of such just-pure-speculative (and by-now obviously useless) "conceptualizations".
This embarrassing nonsense is what can happen when you do not know, or do not examine or analyze, your true base/foundational assumptions, YET THOSE ARE very poor, baseless, UNPROVEN, AND MOST LIKELY _NOT_ TRUE (because of inconsistencies with BIOLOGY, as I have clearly indicated in my essays). [It is desperation for progress with a basic view and approach THAT CANNOT MAKE PROGRESS rationally -- it is desperation in science/"science".]
How can you take or recommend a view or approach that will NEVER have any direct evidence?
Embodiment has NO direct evidence for it (OR any direct evidence even clearly related to it) **, and never will: it is worse than bad science: it is not even science: see: https://www.researchgate.net/publication/303890892_The_poverty_of_embodied_cognition
Article: The poverty of embodied cognition (full text at link.springer.com/article/10.3758/s13423-015-0860-1 -- add the https:// yourself, so RG does not hijack the link AND DIRECT YOU TO JUST THE ABSTRACT)
See also my Comments below the Project "declaration" (seen in the very top of this post).
** FOOTNOTE: This is to such an extent, that "embodiment 'theory'" or "enactivism" will technically NEVER be able to present an acceptable [scientific] hypothesis. Good approaches do a LOT of clear hypothesizing.
Re: cognitive-developmental psychology: Is it a bad sign if one has only done ONE thing in her/his entire lifetime?
This is basically, in part, a confession. If you knew how true the "one thing" was in my life, you would likely consider me lazy and privileged. I can accept both labels and can clearly see it that way (at least from the standpoint of some very good people). Moreover, I have had the ability to have anything and everything I thought I needed -- essentially at all times.
But, perhaps as is the only interpretation imaginable, you suspect I am making such admissions just to further the exposure of my perspective and approach. That is completely true. And, I do contend that (with having all resources), I lived virtually all the years of my life looking for a complete and the best thoroughly empirical perspective. Even in my decades of college teaching (more like 1.5 decades), my courses and presentations had coherence most certainly as a function of my views. THUS, indeed, in fact: I have never done anything else in my life other than that needed to produce the papers, book, essays, etc. that I present here on RG (or make readily available through RG). To have a picture of my life, one should imagine about 30 years of it operating much as a hermit (for all that can be good for -- and I do believe it can be good for something).
I started with a core and moved carefully in adopting any aspect of my perspective (basically starting from the position of just what is possibly at-the-very-least needed, and maintaining extreme parsimony). And, again, I am a most thorough-going empiricist, believing that EVERYTHING has a core foundation of some behavior which, at least at some key point, is both overt (though maybe quite subtle) AND directly observable (and now practically so, via eye-tracking). My entire perspective and approach relies pivotally and mainly on such foundations and otherwise only on the best findings and extremely widely-affirmed processes IN ALL OF PSYCHOLOGY (things showing the very best inter-observer agreement). All this is not any kind of abstract or wide set of things. The other prime objective ("directive") has been to NOT [just] link but PUT behavior (behavior patterns) clearly IN a biological framework -- showing as much as possible the "biology of behavior"; this had the rewarding result of eliminating critical and serious dualisms, esp. nature/nurture.
Assumptions, or presumptions (pseudo-assumptions), in Psychology had to be exposed as both unproven and not well-founded. A half dozen central "assumptions" have been replaced in my system BY BASICALLY THEIR OPPOSITES -- these replacements being fully consistent with biological principles and more likely true. I also show in my work how to use all the terms of classical ethology, this also allowing or furthering the "biology of behavior".
In short, though this should be to some degree a shameful confession (and many would have to believe that is part of it), my work is MINE (compromising nothing; adhering to principles) -- and it is good **. Please take some time to explore it, starting at: https://www.researchgate.net/profile/Brad_Jesness2 Thank you.
** FOOTNOTE: The perspective and approach is explicit and clear enough for artificial intelligence also -- a good test. BUT: For the great advancements needed in Psychology and major practical utility in AI, we need DISCOVERIES, the nature of which are indicated in testable (verifiable) hypotheses, clear in my writings -- MUCH awaits those discoveries. The same discoveries are involved for either field.
P.S. For 20 years of my hermitage I did have the strong "hobby" (avocation) of JavaScript programming; I never made any money from this. I tell you this just to make sure the portrayal is accurate -- and to in no way mislead. (See http://mynichecomputing.org , if you are curious.)
There have been several learning theorists now that speak of non-associative influences on learning. Here are some quotes from a few:
(My important Comments follow the quotes, below.)
QUOTES from "Three Ways That Non-associative Knowledge May Affect Associative Learning Processes" by Thorwart and Livesey:
"While Mitchell et al. (2012) favored an explanation purely based on conscious reasoning processes, where participants deliberately attend to the cues they believe are important, a viable alternative is that attentional processes are brought under conscious control and thus let non-associative knowledge influence the course of subsequent learning."
"In some circumstances, associative activation of the outcome may form the strongest available evidence about what is going to happen when a cue is presented, or the strongest indicator of how the individual should behave. But under other circumstances, for instance where it is very clear that a deductive reasoning process should be used, associative memory retrieval may play a relatively minor role "
"a viable alternative is that attentional processes are brought under conscious control and thus let non-associative knowledge influence the course of subsequent learning. This source of influence does not necessitate that non- associative expectations fundamentally change the operations of the associative network itself, merely what it receives"
"In addition, if non-associative knowledge can affect the way stimuli are represented then this knowledge may also change the manner in which associative retrieval generalizes from A to AB"
---------------------------
QUOTES From Mackintosh Lecture: Association and Cognition: Two Processes, One System. I.P.L. McLaren et al:
" ... does not shy away from placing associative processes at the very centre of our dual process account, and postulates that propositional processing is built upon associative foundations"
"... we are propositional entities constructed from an associative substrate."
----
QUOTE from "Moving Beyond the Distinction Between Concrete and Abstract Concepts", Barsalou et al:
"Conversely, when people generate features of abstract-LIT concepts, they typically generate external elements of the situations to which they apply. "
-----------------------
My IMPORTANT COMMENTS:
The problem for these theorists/researchers is that their "new propositions", "non-associative factors", and "new generalizations" ARE INTRACTABLE. Such phenomena seem to be inferable, indeed, but these theorists do not have a way to find the source (any empirical grounding). Thus, these theories at present have no empirical referents at the major points needed to "get to go where they want to go".
Well, I actually address the same things: in EFFECT providing for new propositions (used in deductions), new generalizations, and what appear to be non-associative factors. BUT my theory sees the origin of these effects IN QUALITATIVELY DIFFERENT cognitive stages, and as due to "perceptual shifts". And here is the REALLY GOOD NEWS: I indicate an empirical way to discover the "perceptual shifts", using new eye-tracking technology and computer-assisted analysis. I describe what to look for in enough detail to do the eye-tracking studies, during ontogeny -- at key points. Thus, my theory, which provides for the same kinds of shifts in learning, HAS TESTABLE HYPOTHESES. If the hypotheses of my ethogram theory are verified (and they can be, if the theory is correct), we will at least have found the concrete, directly observable overt behavior patterns associated WITH THE INCEPTION of that which yields the new abilities/phenomena.
One other thing: because the proximate cause (alongside outside environmental factors and contextualization from the Memories -- which both can be seen as the other simultaneous proximate causes) IS "perceptual shifts", nothing is divorced from ASSOCIATIVE LEARNING. This is also the end of the nature/nurture false dualisms. All still involves associative learning -- and no strange "non-associative" stuff.
See my linked writings.
I understand that a longitudinal study assesses change in the same variable among the same respondents over time -- for example, changes in cognitive development over time. A time-lagged design involves the study of different cohorts over different period(s) of time.
My question is: if I have three variables -- IV, Mediator, DV -- (not attempting to assess change in the same variable over time) and I intend to see how IV --> Mediator and how Mediator --> DV, that would require the same sample of respondents and different time periods, viz. T1, T2, T3.
For my study, does it constitute a longitudinal or a time-lagged design?
Is there a specific time period for the IV --> Mediator --> DV effect to take place? Hence, is there a specific time difference required for data collection?
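(For concreteness, here is one hedged sketch of how such a T1/T2/T3 layout is often analyzed: estimate path a from the T1 predictor to the T2 mediator, path b from the T2 mediator to the T3 outcome controlling for the predictor, and take their product as the indirect effect. All column names below are my own assumptions for illustration; in practice the indirect effect's confidence interval would be bootstrapped rather than read off a point estimate.)

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wide-format data: one row per respondent, with the
# independent variable measured at T1, the mediator at T2, and the
# dependent variable at T3 (column names are assumptions).
df = pd.read_csv("panel_data.csv")  # columns: iv_t1, mediator_t2, dv_t3

# Path a: IV (T1) -> Mediator (T2)
path_a = smf.ols("mediator_t2 ~ iv_t1", data=df).fit()

# Path b (and direct effect c'): Mediator (T2) -> DV (T3),
# controlling for the T1 predictor.
path_b = smf.ols("dv_t3 ~ mediator_t2 + iv_t1", data=df).fit()

# Crude point estimate of the indirect effect (a * b); bootstrap the
# confidence interval for inference in a real analysis.
indirect = path_a.params["iv_t1"] * path_b.params["mediator_t2"]
print(indirect)
```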
I can assure you my way is empirical and all major hypotheses are directly testable (via direct observation of overt behavior patterns). It is a viable approach, with all testable hypotheses, and with explicit, well-founded and biologically-consistent assumptions behind it all. Eye-tracking technology will be needed and perhaps computer-assisted analysis. FIRST, See:
then you must see the recent LARGE Collection of Essays explicating and fully justifying my approach and clearly indicating the positive consequences and ramifications. HERE'S the BOOK:
* PLUS * : YOU MUST SEE THE COMMENT _AND_ THE 2 REPLIES TO THAT COMMENT (below the BOOK's shown text), to have all the needed specifics.
EYE-TRACKERS: If you do not want to read as much as I ask people to above, you should be able to get a pretty good idea of what would be involved, and whether you could do it, by just reading the COMMENT _AND_ THE 2 REPLIES TO THAT COMMENT on the same page as the BOOK. (This is less than 10 pages.)
--> Can modern eye-trackers do what I clearly indicate needs to be done? <--
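(For a sense of what such an analysis minimally involves: the raw output of an eye-tracker is a stream of timestamped gaze coordinates, and "prolonged gaze" on an area of interest reduces to summing sample durations inside that region. A toy sketch follows; the function names, the rectangular-AOI simplification, and the 1-second threshold are my own assumptions, not a specification from the theory or from any particular eye-tracker's API.)

```python
import numpy as np

def dwell_time_on_aoi(t, x, y, aoi):
    """Total time the gaze point falls inside a rectangular area of
    interest (x0, y0, x1, y1), summed over consecutive samples."""
    x0, y0, x1, y1 = aoi
    inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    dt = np.diff(t, prepend=t[0])          # per-sample durations
    return float(np.sum(dt[inside]))

def prolonged_gaze_episodes(t, x, y, aoi, min_dur=1.0):
    """Return (start, end) times of uninterrupted runs of gaze inside
    the AOI lasting at least min_dur seconds -- the kind of 'prolonged
    gaze' episode the hypotheses say should mark a perceptual shift."""
    x0, y0, x1, y1 = aoi
    inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    episodes, start = [], None
    for i, flag in enumerate(inside):
        if flag and start is None:
            start = t[i]
        elif not flag and start is not None:
            if t[i] - start >= min_dur:
                episodes.append((start, t[i]))
            start = None
    if start is not None and t[-1] - start >= min_dur:
        episodes.append((start, t[-1]))
    return episodes
```

Modern research eye-trackers sample far faster than this kind of computation requires, so the open question is less the hardware than the longitudinal study design around it.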
Is the following list the characteristics of the things which are the bases of psychological understandings for General Artificial Intelligence?
The material below is from https://www.researchgate.net/project/Developing-a-Usable-Empirically-Based-Outline-of-Human-Behavior-for-FULL-Artificial-Intelligence-and-for-Psychology -- "Project Goals (for General Artificial Intelligence and psychological science)" (slightly elaborated here). (Also, this Project is where you can find additional information and "specs".)
Project Goals (for General Artificial Intelligence and psychological science)
Project strives to be:
* nothing more than needed, while WELL-ESTABLISHED, BEING ALWAYS clearly-related to the most reliable, strongest scientific findings in psychology (this is, in particular: facts and findings on the Memories)
* enough to embrace a good part of everything, providing a very likely main overall "container" -- with EVERYTHING addressed, founded on, grounded on, OR clearly "stemming" from: discovery of, and direct observation of, overt behavior patterns (done by providing clear and likely ways to discover the specific, direct, explicit, observable empirical foundations of qualitative cognitive stages -- something completely lacking in modern psychology otherwise). All hypotheses related to all positions (in THIS LIST and in any References) ARE testable/verifiable (at least now, with eye-tracking technologies and computer-assisted analysis).
* having ALL that is needed AND which is all-concrete (explicit, specified, FULLY defined-as-used, or thusly definable), at the same time: so as to provide for Generalized Artificial Intelligence and good science otherwise. [There may be one seeming exception to elements being "clearly specified": the "episodic buffer". That can be defined "relationally", simply having a state plausibly/possibly inferred from all the [other] more concretely defined elements (with their characteristics and processes); see the sketch after this list.]
* providing for self-correction and for continuous progress as science (actual psychology) (as real and good science, and good thinking, is). And, not coincidentally, providing for continuous development of the AI "robot" itself (by itself; of course, experience is needed).
* consistent with current major theories to the full extent justified, but contrasted by having a better well-established set of assumptions, thoroughly justified and explicated. An integrative perspective, equally good for appropriate shifts in all theoretical perspectives (in the end, each theory allowing MORE, and being more empirical)
* proving (by amassing related evidence) the inadequacy of current perspectives on, and approaches to, behavioral studies (addressing current psychology-wide pseudo-'assumptions')
* an approach which ends obviously senseless dualisms, e.g. nature/nurture, continuous/discontinuous, which just impede understanding, discovery, and progress. This is inherent in the "definitions" of elements and processes (all from observations or most-excellent research, and largely inductively inferred).
It is good for psychology (it IS psychology) and General Artificial Intelligence, as well.
NOTE: (1) Nothing above should be seen as merely descriptive (this implies too much tied to certain situation(s) and/or to abstraction(s), always lacking true details; it also probably implies too much related to human judgment).
(2) Nothing -- no element or constellation of elements -- is, operationally (as they actually come together and 'work'), as envisioned only by, or in any way (at all) mainly by, human conceptualization OR human imagination.
(3) The Subject is ALL and shall be seen just as it is (at least eventually), and should always be THE guide phenomenologically at all times to move toward that goal.
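(As promised above, here is one way the "relationally defined" episodic buffer could be rendered concrete enough to mechanize: concretely defined Memory stores, plus a buffer that holds no independent content and only derives a view over the others. This is only a toy sketch under my own assumptions -- the class names and the particular stores chosen are illustrative, not the Project's actual specification.)

```python
from dataclasses import dataclass, field

# Each Memory store is concretely defined; the episodic buffer alone
# is defined "relationally", i.e. its state is inferred from the
# other, more concretely defined elements.
@dataclass
class DeclarativeMemory:
    facts: dict = field(default_factory=dict)        # semantic content

@dataclass
class ProceduralMemory:
    routines: dict = field(default_factory=dict)     # automatized responses

@dataclass
class VisuoSpatialMemory:
    episodes: list = field(default_factory=list)     # recorded incident traces

class EpisodicBuffer:
    """Relationally defined: holds no content of its own, only a view
    binding the currently relevant pieces of the other stores."""
    def __init__(self, declarative, procedural, visuospatial):
        self.declarative = declarative
        self.procedural = procedural
        self.visuospatial = visuospatial

    def current_state(self, cue):
        # The buffer's "state" is derived, on demand, from the stores.
        return {
            "facts": self.declarative.facts.get(cue),
            "routine": self.procedural.routines.get(cue),
            "episode": next((e for e in self.visuospatial.episodes
                             if cue in e), None),
        }
```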
I believe this is the only way our algorithms will correspond to biology and that AI will really simulate US.
[ P.S. I have tried to much more specifically direct people to answers to Questions such as above, FOR BEHAVIORAL SCIENCES in general, in my major papers here on RG (esp. "A Human Ethogram ... ") AND in my many, many essays, now most in a 328-page BOOK, Collected Essays (also on RG). General Artificial Intelligence is, in effect, a behavioral science itself. ]
The following link is a good place to start (and it provides links to other writings):
https://www.researchgate.net/post/A_Beginning_of_a_Human_Ethogram_seeing_the_inception_of_cognitive-developmental_stages_as_involving_a_couple_of_phases_of_non-conscious_perception
One could argue that a much more empirical set of data, based on concrete and directly observable overt behavior patterns, detectable with eye-tracking technology, at key times, yet in "real time" (i.e. in then-current behavior patterns), could be used, AND HYPOTHESES DIRECTLY TESTED, as explanations for concept development. Start at the following Question:
The "sensori-motor" explanations have turned out to be not well-founded and based on VERY indirect evidence, at best, and have been seen, IN PEER REVIEW, as having "no future":
Hello scientific world,
I am looking for this as a PDF; I have some issues with the university library's off-campus system and cannot get it ....
Piaget, J. (1964). Part I: Cognitive development in children: Piaget development and learning. Journal of research in science teaching, 2(3), 176-186.
Thank you
Witold
Indeed. And, this is what I have tried to provide in the Project, https://www.researchgate.net/project/Developing-a-Usable-Empirically-Based-Outline-of-Human-Behavior-for-FULL-Artificial-Intelligence-and-for-Psychology .
This is really good for psychology, too -- where things also need to be clear and specified. Theory that is good for AI is simply good theory. Thus, the Project above completely "fits with" my other Project, on a human ethology (Ethogram Theory) (and the theory and hypotheses there): https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory (The theory is presented in a much more organized way in this latter Project, plus the full justification of the theory and its ramifications are made clear -- but it is a lot more to read.)
No.
This is a notion or belief, and THAT is all it is, no matter what BIG impacts it has on thinking, and no matter what big effects such beliefs have in creating firm limitations on thinking (not even allowing people to think of certain phenomena). [In effect, such false closures in thinking (and they are there) are a clear sign that something is wrong.] This all-innate-at-birth-or-in-infancy notion of THE innate factors -- resulting in no real innate guidance being thought to come up later in childhood -- and the related beliefs (used as "assumptions") are from philosophy, and not from ANY good observation or any related good understanding. 'Learning' explanations are given which have NO clearly related direct evidence at all, yet researchers and theorists are satisfied with what they basically just make up (and then attribute to such "self"-functioning of the organism), e.g. the fictions of 'executive' functions and all the "meta's" (a "man" within "the man"), OR wild (unsupported and unsupportable) ideas about 'social learning', AND/OR the fictions of a literal supposed "EMBODIMENT" of 'action' giving us our thought -- such pure garbage being a big part of 'explanations'.
[ Apparently, for higher learning, logic can just pop up and pop out when the time/circumstances are right (when earlier learnings have been well-processed); this is apparently where developmental maturation factors ORIGINATE INTERNALLY (!!???), no matter how non-environmentally-based the POP-UP logic seems to be in its origin, i.e. NON-EMBEDDED. It is basically hocus-pocus. ]
Old-time philosophers can't "cut it" nowadays.
Because of these 'garbage' beliefs, we cannot differentiate different [levels of] learning -- this resulting in not defining or understanding learning well at all.
So many things work better and are seen in more understandable ways IFF one can see fundamental qualitative shifts in behavioral [response] patterns occurring (even if the beginnings of such behavior-pattern changes are rather simple and caused by seemingly simple CHANGES in VERY basic behavior patterns -- that works!). I am at the point where I basically do not need to listen much to people who think learnings are all basically the same and completely ubiquitous, operating in an "uninterrupted" way. (And don't talk to me about "social" and "cultural" factors, BECAUSE the individual organism clearly remains the "unit of analysis" and the center of ALL true understanding -- if there is no account with the individual, there is NO accounting at all.)
Hey, graduate students: if you buy all the "crap", you are "tools".
[ P.S. Note how "innate action patterns" (or anything meaning that) are not even topics here on ResearchGate. Come on, people. ]
AWAY! Unfortunately.
Nowadays, proximate explanations are, at least almost always, in terms that are neurobiological, endocrinological, or molecular-genetic. There usually appears to be absolutely no concept of a behavioral pattern, or a change in a behavioral pattern (either, of course, in response to aspects of the current environment), AS itself a proximate cause of a new behavior pattern [change] -- I.E. a true observable behavior-pattern phenomenon preceding, and needed for, the key subsequent behavior-pattern change. I believe there is a BIAS here, due to our traditional philosophical cultural beliefs.
And, this is a problem.
THIS PROBLEM HAS NOT ALWAYS BEEN THE CASE, and certainly has not always been the case in ethology. The ethology that Tinbergen and Lorenz were given a Nobel prize for often did have one behavior pattern as a proximate cause of certain behavior pattern(s) that followed. This is what needs to be re-learned and abided by, or real ethology may be lost. Such a relationship between behavior patterns was a hallmark of classical ethology.
Modern ethologists failed to have the "backbone" to maintain that which was most distinctive and best about ETHOLOGY. They basically "caved in" to how others characterized them. (Now, the field is indistinguishable from comparative psychology and/or evolutionary psychology.)
Listen up, International Society for Human Ethology !
Real science, real biological science, the real biology of behavior DEPENDS on behavioral pattern(s), themselves, being seen as a major proximate cause of new behavior patterning [and of behavior-pattern change]. Ethology must return to what it uniquely was, OR THERE IS NO CHANCE OF BEHAVIORAL SCIENCE. I am sure, if I were an analytic philosopher, I could argue this. It really is logically and scientifically irrefutable. Behavioral sciences, of all "stripes", have been becoming more and more stupid -- there is no better word (since they defy biology and defy science). (Simply look for the lack of the words "behavior pattern" and you are on the way to seeing the whole problem.)
P.S. Consider this a big "kiss ...." to our philosophical cultural heritage; certainly the stupidity is a "love letter" to those arm-chair thinkers.
I want to present you with a possible particular concrete example (instance) of a perceptual shift, i.e. the inception of a stage shift (in 'seeing' and [at first, very vaguely,] in some sense IN cognition), showing all 4 phases of a perceptual shift for the overall process of the beginning of a qualitative stage shift in the development of cognition -- before purely associative learning "holds sway" by itself again.
This hypothetical example comes from the ape (gorilla) social "world", from which our abilities to have progressively developing levels of concepts and thinking likely first evolved. Well, HERE IT IS:
Think of a child ape -- not an infant, but perhaps a mid-age-child individual. He has, from his previous development, a conceptual idea of the dominant (adult) male gorilla (and of his behavior patterns, relating to this).
But then he "notices" that this dominant male at times rushes towards other adults, seemingly to show other ways of expressing his dominance (or other aspects of that dominance) which he has not shown before (or which the young ape has not clearly seen, noticed, or processed before).
This is the kind of thing indicating [in him, this child] innate guidance, given that he has good, refined earlier knowledge: AT FIRST there is some gap in the child ape's conceptual understanding of the OVERALL structure of this adult dominance behavior. That "gap" -- phase 1 of the now first-emerging NEW perceptual shift -- may show itself in a situation (or early situations) as just something involving automatically, vaguely orienting TOWARD the key situation and behaviors (and would be shown behaviorally simply in prolonged gaze when/after this dominance phenomenon shows itself).
Soon (perhaps VERY SOON) he will better see such dominance events WHEN THEY OCCUR (because of the specific "gap" existing in his understanding); this second phase (of the perceptual shift) will show clearly: orienting to the aspects of this new-to-understand type of dominance expression (still, for the most part, not conscious).
In the third phase of the shift, he will reliably have seen regularities as he continues good orientation needed to observe things associated with this dominance event. HERE he can be said to be expressly and explicitly and consciously ATTENDING to occurrences of this event.
Finally (in the fourth phase of the shift) he will integrate the essentials into memory: facts-for-occurrence, key aspects of this dominant male's behavior (with respect to dominance behavior patterns), and key spatial and temporal aspects ("in the world") associated with these dominance behavior patterns' key content in visual-spatial memory (which he will be able to play back in his mind when NOT present in the situation where the adult male dominance behavior occurs; i.e. he can "reflect"). BUT, TO DO ALL THIS:
This fourth phase shows the development of some fact/declarative memory (basically the main static features of the dominance act and their relationships to each other, defined) -- this is the declarative/"semantic" aspect of long-term memory he has developed and is developing. Also, some procedural knowledge develops (at the same time) about how to act in response to this dominance expression (especially if it has something "to do" with him, himself): this thoroughly developed, active, automatized response (or set of responses) is the procedural aspect of long-term memory he has gained -- the aspect known as procedural memory.
Also, in the fourth phase FOR THE MOST PART, he has a record-of-incident (episode) memory, which is most prominently in visual-spatial memory and which is, in an indirect way, the actual thing he is able to play key portions of back in his mind, just as he sits and thinks about this dominance phenomenon -- given the EPISODIC BUFFER. (Other key aspects [mentioned above] of the long-term Memories are also determining the nature of the BUFFER and are "there".) So, the ability to do this out-of-the-situation reflection, just described, relies on (and is delimited by) the content that will be a notable part of his EPISODIC BUFFER, doing some major contextualization of his working memory (entering into it), where further, now more-simple associative learning may continue to occur, until all the Memories (each and together) are thoroughly refined.
He no doubt will also, through cued thinking (and likely some observation), relate this aspect of his concept of dominance to other aspects at the same conceptual level (and to/with earlier conceptual levels) that are related to shows of dominance. When ALL this (all of the 4 phases, plus the associative learning needed for refinements and concept integration) has occurred (perhaps taking a year), he will be ready to notice other, greater patterns BY HAVING a new perceptual shift (that, too, with 4 similar phases) -- these are the core foundational happenings in ontogeny (aka THE proximate, directly observable causes of the development of behavior patterns via perceptual shifts) and that which AGAIN allows qualitatively NEW learning in new ways (using a qualitatively different kind of learning, and also using well-refined aspects from earlier stages): to AGAIN further develop his representation system(s) (aka concept structure), this being related to all major aspects of the Memories, likely mostly connected through visual-spatial memories, with all the other Memories connected to that AND USED (in the final step of cognizance) BY THE EPISODIC BUFFER; then working memory can work on new "things".
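(To make the four-phase structure easier to hold in mind, here is a toy state-machine rendering of it. Everything here -- class names, event counts, thresholds -- is my own illustrative assumption; on the theory's own terms the phase transitions are to be discovered empirically via eye-tracking, not stipulated in advance.)

```python
from enum import Enum, auto

class ShiftPhase(Enum):
    GAP_ORIENTING = auto()       # phase 1: vague, automatic orienting / prolonged gaze
    SPECIFIC_ORIENTING = auto()  # phase 2: reliable orienting to the key aspects
    CONSCIOUS_ATTENDING = auto() # phase 3: explicit, conscious attending to occurrences
    MEMORY_INTEGRATION = auto()  # phase 4: declarative/procedural/episodic consolidation

class PerceptualShift:
    """Toy model: advances one phase each time enough qualifying
    gaze/attention events accumulate (thresholds are assumptions)."""
    def __init__(self, thresholds=(3, 5, 8, 12)):
        self.phases = list(ShiftPhase)
        self.thresholds = thresholds
        self.index = 0
        self.events = 0

    @property
    def phase(self):
        return self.phases[self.index]

    def observe(self):
        """Register one qualifying event (e.g. a prolonged gaze on the
        dominance display); move to the next phase at threshold."""
        self.events += 1
        if (self.index < len(self.phases) - 1
                and self.events >= self.thresholds[self.index]):
            self.index += 1
            self.events = 0
        return self.phase
```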
[ Full explication and justification for this approach (and its implications) can be found via my linked Projects and writings. ]
Shouldn't I provide a shorter (or graduated) way to see what I am all about?
Well, here is an attempt at that :
Though I (myself) avoid (eschew?) defining anything, I have viewed attention as an aspect of working memory and/or an aspect of the episodic buffer (usually or always both). Both change a lot and frequently (both are very "dynamic"). It is hard to see how attention would not be similarly dynamic (as well as a guiding factor for those 2 memory aspects or types of memory). That being the case, it seems to me it would be well-nigh impossible to factor "attention" out. (And "we" should define nothing; the Subject should define all -- as it was with the classical ethologists of the 60s and 70s -- AND AS IS THE CASE WITH ALL TRUE SCIENCES.)
An easy (shorter) way to see my outlook is to read the outline and guidelines I provide for AI people (about 35 pages long) -- and THAT is also what I believe should be, roughly, the as-of-yet outline of good behavioral science: cognitive-developmental human ethology, with (always) an eye to contributing towards an ethogram via that which is ALWAYS founded in the sometime-present directly observables (as true proximate causes, along with aspects of the present environment, and the simultaneous "innate direction" provided). (This is basically a type of classical ethology, which unfortunately even today's "ethologists" do not know, recall, or respect.)
Anyhow: if it is good enough to "mechanize" in AI, AND IS NOT A MODEL OR ANALOGY, but a fair and likely necessary outline of our rather well-defined memory faculties (and capacities) (AKA our different sorts of Memory) -- all based on the best research -- _AND_ the key "containing system" is seen as innately guided qualitative shifts IN/by gaze changes, then things 'noticed' (though often unconscious, and thus better termed "patterned-gazes-noticed"), then defined (conscious) attention, and then new processing (for new representations and, soon, new types/hierarchical levels OF THINKING (with all the connected cognition there)) -- then the latter is where BOTH psychology and AI need to make discoveries, to progress empirically and systematically (and as any kind of decent science). Anyhow, for a short version of my view,
see:
AND also read the COMMENTS below the item,
the item everythinga.doc (Read "A Human Ethogram ..." sometime rig... ):
AND then read my major Project description:
--------------------------
AND, finally, for MUCH more (for "everything"), if so desired: see the linked treatise (160 pages), and read my 326-page collection of essays, everythinga.doc_0B.pdf, by clicking the link to that collection (and, again, read the new additions, as Comments, under that).
Maybe I am wrong, but I give a clear, completely empirical approach for seeing whether I am correct or not. It has been correctly said that I am -- as much as a cognitive-developmental ethologist could possibly be -- a "methodological behaviorist"; and all else cited, except such behaviorism (<-- as usually understood), ALSO has clear, empirically directly observable foundations, at least at the inception of any major new behavior patterns OR qualitative changes thereof.
Let me try to provide an answer by sharing a relevant essay I wrote to a friend. (This contains that "shortest description".)
Let me answer "What is your definition of 'innate guidance'? " in the only way I ever will answer anything when it comes to a scientific study of human behavior (aka ethology). My answer is I do not define; I never define anything. All is discovered and the Subject (the human) will define what, in any given type of case/circumstance, the innate guidance IS (and what that is like). ("Ditto" for 'learning'.)
This is the only way other ethologists should have things 'defined'. IN FACT: real and good scientists (in any science) NEVER 'define' anything just with their imagination; no guessing EVER, except just "where to look" -- THEN they find that which is important and worth noting FROM THEIR SUBJECT MATTER.
Every time (literally) I hear the word "define", I cringe.
NOW: This may not be easy to understand, or understand as I intend, but I have written 500 pages explicating, elaborating, and justifying the following view:
From what I said before: I can only tell you where I would look, and hope for the discovery of what is at the INCEPTION of 'seeing' new things (and seeing differently) -- that which then eventually leads to new representation, then to new thinking. IN PARTICULAR: this (coming up) is how I will look for the proximate causes OF the behavioral shifts, BOTH in directly observable overt behavior patterns AND in the associated directly observable aspects of the current environment (and WITH the special sort of associative/discriminative learning that THEN OCCURS; and THAT along with other behaviors -- some developed in just this same type of way in the past, which now function in some way similar to when the behavior was overt, though now covert). I hypothesize, and it is now testable and verifiable (yes or no) with new eye-tracking technology and computer-assisted analysis:
That "perceptual shifts" are the overt behavioral-pattern aspect(s), WITH the innate guidance, present at the inception of a transition starting a qualitatively different level/stage of representation. Such an inception, of course, includes (for contextualization) what is brought forward from our Memories -- to have the new environmental aspect(s) meaningfully seen. The perceptual shifts will result in finding and using "things" thus discovered (by the organism), BEGINNING with the perceptual shift(s) FOR new elements processed from the environment which allow the key new/additional "ingredients" that need to be added to existing cognitive abilities' contents (the latter existing already, at a lower level of the hierarchy), to begin to move to the next higher hierarchical level/stage-type behavior (behavior including not only necessary overt aspects, but also existing cognition <-- understood, in important part, by seeing the similar perceptual shifts that began earlier stages; THUS: you have to do investigations longitudinally, beginning just after infancy; you must track the relevant ontogeny).
You will note I use the word WITH very intentionally: that is because the innate guidance (which, in a sense, can be seen as manifested in the perceptual shift) IS ALSO OCCURRING SIMULTANEOUSLY WITH new LEARNING, IMMEDIATELY (or, in effect, immediately) ALSO INVOLVED at the same time as the perceptual shift occurs. (In short, 'innate' and 'learned' occur literally (OR, IN EFFECT) SIMULTANEOUSLY, TOGETHER -- there is no dualism,