Actually, after I made my comment I reread your comment and thought that you might be agreeing with me more than I originally thought.

JustinG wrote: ↑Mon Sep 13, 2021 2:08 am
I thought I was more or less in agreement with your prior post Jim (though maybe I didn't read it closely enough), so it will be interesting to disentangle our positions and see where they diverge.

Jim Cross wrote: ↑Sun Sep 12, 2021 11:56 am
I thought I was pretty clear exactly how pain on touching a stove creates a memory and learning. Surely you are not suggesting memory and learning have no evolutionary value. In the same way, pleasurable experiences generate positive memories and reinforce the behaviors that generate them. This is basic behaviorism, directly tied to neurotransmitters.

JustinG wrote: ↑Sun Sep 12, 2021 3:25 am
Hi Steve and Jim, thanks for the comments. My response:
The paper aimed to have a primarily biological focus, so I didn't want to consume too many words discussing philosophical issues related to causality or the definition of consciousness, important as these issues are. The reference to feelings is to phenomenal experience throughout, and not to Chalmers's notion of psychological consciousness.
From the perspective of Darwinian evolutionary biology, if feelings do not do anything (i.e. if they have no causal efficacy), then there is no evolutionary reason why, for example, touching a stove is associated with feelings of pain or eating is associated with feelings of pleasure. From a Darwinian perspective, if feelings have no causal efficacy then it is equally likely that touching a stove would be associated with ecstatic pleasure, or eating with excruciating pain. As the latter is not the case, Darwinian theory implies that feelings have physical effects in the world (i.e. they have causal efficacy).
It isn't really hard to see how this evolved.
Simple reflex circuits - touch a stove, pull back - are of only limited value. A reflex addresses the immediate damage to the hand but not future damage; it wouldn't stop an organism from touching a stove again and again. To handle the more general case, the organism must create a memory of the stove - or, even better, of the abstract case of hot objects in general - that it can draw on to avoid touching stoves in the future. Consciousness is directly tied to this learning because the learning requires integrating the cognitive recognition of a hot stove with the memory of the pain generated by touching it.
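The reflex-versus-memory distinction above can be put in a few lines of toy code. Everything here - the agent classes, the feature sets, and the action names - is purely illustrative, a sketch of the idea in the post rather than a model from the paper:

```python
# Toy contrast between a fixed reflex and a memory-based learner.
# All names and features here are invented for illustration.

class ReflexAgent:
    """Withdraws on pain but never learns: it will touch the stove again."""
    def act(self, stimulus):
        return "withdraw" if stimulus.get("pain") else "touch"

class LearningAgent:
    """Stores a memory of pain-associated features and avoids them in advance."""
    def __init__(self):
        self.painful_features = set()        # e.g. {"hot"} after one burn

    def act(self, stimulus):
        if self.painful_features & set(stimulus["features"]):
            return "avoid"                   # generalizes to any hot object
        return "touch"

    def feel_pain(self, stimulus):
        # Pairing the pain signal with the stimulus creates the memory.
        self.painful_features |= set(stimulus["features"])

stove = {"features": {"hot", "metal"}}
iron  = {"features": {"hot", "flat"}}

agent = LearningAgent()
agent.act(stove)          # "touch" - no memory yet
agent.feel_pain(stove)    # the burn creates the memory of "hot"
print(agent.act(iron))    # -> avoid (generalizes to other hot objects)
```

The point of the sketch is that the reflex agent's behaviour never changes, while a single painful pairing lets the learning agent avoid any object that shares a feature with the remembered stove.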
I think the divergence may be in how we each conceive of reductive explanation. My discussion of this in the paper (pp. 363-364) is as follows:
A hallmark of physical science has been the explanation of higher-level processes in terms of lower-level processes. As David Chalmers puts it, in reductive explanation an appropriate account of lower-level processes results in the explanation of the higher-level phenomenon falling out. Or, in more technical terms, a natural phenomenon is reductively explainable in terms of some low-level properties when it is logically supervenient on those properties. In terms of the physiology of the human body, this means that, in principle, an explanation at the level of physical and chemical processes in the body as determined by the laws of physics and chemistry would explain all movements of the body. There is no need to infer any contribution from higher-level subjective mental states.

So, in terms of your stove example, in your explanation the actual subjective sensation of pain does not seem to have any relevance to the creation of the memory and the learning. As I am reading it, the subjective sensation is not actually doing any work, because all of the behaviour can be accounted for in terms of lower-level nonconscious processes.
JustinG wrote: ↑ …can be accounted for in terms of lower-level nonconscious processes.

In my view, no. At least not in living organisms.
First, there is empirical evidence linking behaviors associated with consciousness to the capacity for learning. This is the Jablonka-Ginsburg thesis that learning is a marker for consciousness.
There are the decades of operant conditioning research on learning and on positive and negative reinforcement.
There is also the somewhat trivial observation that no complex learning takes place from unconscious inputs.
https://broadspeculations.com/2020/01/1 ... certainty/

Bernard Baars, originator of the global workspace theory of consciousness, notes that "there appears to be no robust evidence so far for long-term learning of unconscious input" and that the "evidence for learning of conscious episodes is very strong." He also writes: "Consciousness is also involved with skill acquisition. As predicted by the hypothesis, novel skills, which are typically more conscious, activate large regions of cortex, but after automaticity due to practice, the identical task tends to activate only restricted regions."
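The operant conditioning point can be illustrated with a minimal reinforcement-learning toy: action values are nudged toward received reward, so rewarded behaviour becomes more likely. The action names, reward scheme, and learning rate below are all invented for illustration:

```python
# Minimal sketch of operant conditioning as value learning.
# Illustrative only - not a model of any experiment cited above.
import random

values = {"press_lever": 0.0, "ignore_lever": 0.0}
alpha = 0.1                                          # learning rate

def reward(action):
    return 1.0 if action == "press_lever" else 0.0   # food pellet

random.seed(0)
for _ in range(200):
    # epsilon-greedy choice: mostly exploit the best-valued action,
    # occasionally explore at random
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # reinforcement shifts the stored value, and so future behaviour
    values[action] += alpha * (reward(action) - values[action])

print(max(values, key=values.get))   # -> press_lever
```

After a few rewarded trials the lever-pressing value dominates and the "organism" presses reliably - the behaviourist reinforcement loop in its simplest form.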
There is, in my view, a very good evolutionary explanation for this.
It gets precisely at the question of whether the brain is a computer or a simulator.
https://broadspeculations.com/2021/04/2 ... ypothesis/

Compared to actual computers, the brain and nervous system must make do with a relatively small amount of energy and a relatively slow computational speed. In simple organisms those limitations may not be fatal. However, the evolution of greater adaptive capability, the integration of more sensory data, and the development of a broader repertoire of behaviors would eventually hit a computational barrier: the brain could not compute quickly enough to provide a selection advantage if it relied solely on a computational approach. The evolutionary response would be the development of a simulation on top of a computational base. Unsurprisingly, our consciousness occasionally feels exactly like a simulation, although for the most part we take the simulation to be real.
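One way to make the compute-versus-simulate trade-off concrete is a toy sketch in which an expensive "full computation" from raw inputs is replaced by a cheap learned internal model that can be queried instead. The function names and numbers are illustrative assumptions, not taken from the linked post:

```python
# Sketch of the compute-vs-simulate trade-off. "slow_physics" stands in
# for full computation from raw sensory data; the internal model is a
# cheap learned approximation built from past experience.
import time

def slow_physics(throw_angle):
    time.sleep(0.01)                  # stands in for costly computation
    return 2.0 * throw_angle          # "true" outcome in the world

# Build the internal model once, by sampling the world (experience).
model = {a: slow_physics(a) for a in range(10)}

def simulate(throw_angle):
    # Fast lookup in the learned model instead of recomputing.
    return model[throw_angle]

print(simulate(3))   # -> 6.0, without paying the computation cost again
```

The design point mirrors the argument above: once the model exists, predictions come from running the cheap simulation rather than redoing the expensive computation, which is what buys the speed an energy-limited brain would need.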
In other words, consciousness - the generation of the simulation - is the only way organisms can do complex learning.