Questions related to Neurophilosophy
Are there studies comparing the results of former and current addicts (even nicotine addicts) in the Iowa Gambling Task? I've debated the topic with my peers and we're searching for any research of that kind. Thanks.
It seems to me that the power of the mechanistic account of explanation (Craver; Bechtel; Glennan) lies in taking apart individual components and seeing how each contributes to a given behaviour. In my opinion, the main focus is still on the activities of individual components. The concept of mechanistic organization should make mechanistic models more holistic. However, in my view, mechanistic organization merely covers the spatial (i.e. proximity and distance) and temporal (i.e. different times of activation) co-ordination of mechanistic components. If this is the case, I do not see why mechanistic organization should imply that mechanisms, for example in neuroscience, are holistic. Mechanistic organization does not include a concept such as a "way of working" (Bergeron, 2007), which captures the comprehensive way a set of components cooperates while abstracting away from the activity of individual components. For instance, Burnston (2019) suggests that to study how a set of brain regions (i.e. a brain network) underlies a specific cognitive function, we may look at the "brain frequency" (delta: 0.5-3.5 Hz; theta: 3.5-7 Hz; alpha: 8-13 Hz; beta: 18-25 Hz; gamma: 30-70 Hz) of the whole network. That is holistic! Is it my impression, or is there no account of mechanistic organization (in neo-mechanist philosophy) that takes into consideration the "ways of working" together of all mechanistic components, both intra-level and inter-level?
Please let me know if you think I am wrong, and where I might read a substantially holistic account of mechanistic organization.
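As an aside on the frequency bands mentioned above: the network-level "brain frequency" is typically estimated from the power spectrum of the recorded signal. Here is a minimal sketch of that idea (my own illustration using a synthetic signal; it is not Burnston's actual method, and band boundaries vary by author):

```python
import numpy as np

# Frequency bands as listed in the question (Hz).
BANDS = {"delta": (0.5, 3.5), "theta": (3.5, 7), "alpha": (8, 13),
         "beta": (18, 25), "gamma": (30, 70)}

def band_power(signal, fs, lo, hi):
    """Fraction of the signal's spectral power falling in [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() / psd.sum()

# A synthetic "network-level" signal dominated by a 10 Hz (alpha) rhythm.
np.random.seed(0)
fs = 250                          # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)       # 4 seconds of samples
net = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))

dominant = max(BANDS, key=lambda b: band_power(net, fs, *BANDS[b]))
print(dominant)  # alpha
```

The point of the sketch is that the dominant band is a property of the whole signal, not of any single component, which is what makes the frequency-based description holistic.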
What happens in our brains when we prejudge someone or something? Our prejudices also affect our decision-making mechanisms; how does this occur among neurons?
If you know of research that used fMRI, please share it with us.
If we focus too much on neurogenetic diseases (instead of on more positive conditions), we risk increasing discrimination against some groups instead of valorizing their potential: the same neural and genetic circuits that generate disease and unwanted behaviors can, in certain circumstances, be tuned to generate precious abilities and positive behaviors.
Should neurogenetic/EEG/neuroimaging research be more balanced in this regard, to avoid increasing discrimination against some conditions and instead stimulate opportunities?
A linear summate-and-fire default model of post-synaptic integration would not require incoming axons to synapse at any particular place within the dendritic tree of the receiving neuron. However, if post-synaptic integration were non-linear, and possibly pattern-sensitive, it might be essential that the site of each synapse bear a relation to the meaning or significance of a specific input signal. Thus, for a multimodal high-level sensory neuron, signals originating from different primary cortices might arrive at different domains of the dendritic tree.
I would be very interested to know whether there is already useful information on this topic or whether anyone hopes to collect such information.
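To make the contrast above concrete, here is a toy sketch (my own illustration; the sigmoid shape and thresholds are arbitrary assumptions, not a biophysical model). A linear neuron fires on the total input alone, while a neuron whose dendritic branches saturate locally responds differently to the same four inputs depending on whether they are spread across branches or clustered on one:

```python
import numpy as np

def linear_neuron(inputs, threshold=1.5):
    """Linear summate-and-fire: only the total input matters,
    so synapse location within the dendritic tree is irrelevant."""
    return float(np.sum(inputs)) >= threshold

def branch_nonlinear_neuron(branch_inputs, threshold=1.5):
    """Each dendritic branch passes its local sum through a saturating
    sigmoid before the soma adds the branch outputs; now *which branch*
    an input lands on changes the somatic drive."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 1.5)))
    return sum(sigmoid(np.sum(b)) for b in branch_inputs) >= threshold

# The same four unit inputs, placed differently on two branches:
spread = [[1, 1], [1, 1]]        # two inputs on each branch
clustered = [[1, 1, 1, 1], []]   # all four on one branch

print(linear_neuron([1, 1, 1, 1]))         # True (placement cannot matter)
print(branch_nonlinear_neuron(spread))     # True  (~0.88 + ~0.88 >= 1.5)
print(branch_nonlinear_neuron(clustered))  # False (~1.00 + ~0.00 < 1.5)
```

In this toy case, clustering all inputs on one branch wastes drive in the saturated branch, so placement alone determines whether the cell fires; that is the kind of pattern sensitivity the question describes.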
Even better if these address the issue in special sciences, and especially in neuroscience and psychology.
There are anecdotes about people with a special talent who can move a slide rule (a slipstick) in their head and obtain a result they did not have before.
I doubt this for a number of reasons:
Our imagination is not an objective counterpart to us. In other words, the imaginary slide rule might behave in a strange way: it could be "made of rubber", and we could not be sure it is behaving correctly.
There is a difference from imaginary chess games: in chess, there are rules that determine which constellations of the pieces are sensible constellations. No rules of this kind are available in the case of a slide rule.
Special talent: I do believe that there are people who have stored an immense number of snapshots of constellations and situations. But calling up these images is not activating a process in the head that will reveal new constellations, is it?
What about thought experiments? They can be very creative indeed. But I think in this case we apply many laws of nature that limit the number of possible outcomes.
For non-historians: here’s a picture of a slide rule: http://en.wikipedia.org/wiki/Slide_rule
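For readers who never used one: a slide rule multiplies two numbers by adding lengths proportional to their logarithms, and the result is read off the scale to a few significant figures. A minimal sketch of the principle (the three-digit reading precision is my assumption about a typical slipstick):

```python
import math

def slide_rule_multiply(a, b, digits=3):
    """Multiply by adding logarithmic scale lengths, as a slide rule does,
    then round to the few significant figures a real scale can resolve."""
    length = math.log10(a) + math.log10(b)  # slide the two log scales together
    product = 10 ** length                  # read the result off the scale
    return float(f"{product:.{digits}g}")   # limited reading precision

print(slide_rule_multiply(2, 3))      # 6.0
print(slide_rule_multiply(3.1, 4.7))  # 14.6 (exact product is 14.57)
```

The mental-imagery question is then whether an imagined scale would perform this log-length addition reliably enough to yield a genuinely new result.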
While having a concept of self as opposed to others or to the environment seems good for focusing the organism's functions on survival and on spreading its DNA, is there any evidence that consciousness has an evolutionary advantage?
To elaborate further, here I'm talking about consciousness as first-person experience. And by "first-person experience" I do not mean "experience OF a first person": conversely, I'm specifically addressing "experience IN first-person MODALITY" (as a corollary to this question, I'm proposing that the word "consciousness" refers to too many concepts). In this view, I consider self-consciousness "experience of the first person in first-person modality".
If we embrace the assumption that consciousness is always consciousness of something, we still lack an explanation of the nature and the purpose ("what it is / what it's for" rather than "how it is") of first-person experience, and thus of why evolution favored it.
In a lot of other Q/A threads about self and consciousness, people are talking about constructs that may function even without consciousness. Two examples:
-self: a neural network comprising semantic concepts about the world could very well include the concept of self as non-other or non-environment, or even a concept of self as an independent organism with such-and-such features; why do we need consciousness to conceptualize it? Would a machine decoding all the concepts connected to the node of (or the distributed knowledge about) the self be considered conscious? We do not have to attribute consciousness to the machine to explain the machine processing its concept of self.
-thinking: processing is certainly different from consciously elaborating something, as all the studies on automatic and subconscious processing show. On the other hand, this point addresses the free-will problem: when we consciously elaborate something, does it mean we are voluntarily doing so? Or are we just experiencing a first-person "show" of something that has already happened subconsciously (as Libet's studies suggest)? Without touching upon the ad infinitum regression problems, this poses the question of whether consciousness is useful without free will: if conscious experience is just a screen on which things are projected, no free will is needed, and then what is the whole point of consciousness? As such, do we also need free will in order to accept consciousness? If we are working with the fewest possible assumptions, it seems unlikely that we can accept consciousness.
It seems to me that the general attitude of cognitive theories within a biological information-processing/computational theory of mind framework is to try to explain everything without putting consciousness in the equation. And indeed it seems to me that no one actually puts consciousness in the equation when explaining cognition or behaviour (at least in modern times).
All in all, it seems to me that all the above reasoning suggests that consciousness is not needed and has no evolutionary advantage over automatic, non-conscious entities. Or that we must make more and more assumptions (such as accepting free will) to make sense of consciousness.
I think that asking why we have consciousness could lead us to understand it better.