The issue is that when we speak about VR, the term "veridical" is not directly applicable. In a VR we manipulate abstract objects that behave according to certain patterns and regularities determined by the underlying algorithms. You can launch a helicopter in Microsoft Flight Simulator and control it quite well by following its control rules and regularities; however, nothing you see and do in the VR directly represents any base reality at the processor level. Yet there is still a functional correspondence between these two layers of reality - the "base" ("processor") layer and the "screen" (VR) layer - and this correspondence is defined by the algorithm. The regularities we see in the VR (the behavior of the helicopter) are not the same as the regularities of the base reality, because the regularities of the base reality are the rules of bit-level processing in the processor, which have nothing to do with the behavior of the helicopter. Yet the regularities in the VR are defined by, and are consequences of, the bit-level processing rules and the algorithm.
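To make this two-layer picture concrete, here is a toy sketch in Python (my own construction, not anything from DH's papers). The base layer is a row of bits updated by a neighbour-XOR rule; the "screen" layer is a coarse observable, the position of the densest cluster of 1s, which an observer might read as a moving "object". All names and parameters are illustrative assumptions:

```python
# Toy sketch: a bit-level "base reality" whose update rule says nothing
# about the higher-level regularity an observer sees.

WIDTH = 256

def step(bits):
    # Base-level rule: each cell becomes the XOR of its two neighbours
    # (periodic edges). This is elementary cellular automaton Rule 90.
    return [bits[(i - 1) % WIDTH] ^ bits[(i + 1) % WIDTH] for i in range(WIDTH)]

def screen_observable(bits, window=16):
    # "Screen-level" view: the start of the densest window of 1s,
    # i.e. where the "object" appears to be.
    return max(range(WIDTH),
               key=lambda i: sum(bits[(i + j) % WIDTH] for j in range(window)))

bits = [0] * WIDTH
bits[WIDTH // 2] = 1  # a single seed bit

positions = []
for t in range(20):
    positions.append(screen_observable(bits))
    bits = step(bits)

# The observer sees a trajectory of "object positions"; the base rule
# (neighbour XOR) contains no notion of object or position at all.
print(positions)
```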
Jim Cross wrote: ↑Thu Aug 19, 2021 5:43 pm
Nobody is arguing that consciousness is not a representation of external reality. Hoffman's argument is that it is not a veridical representation. My argument is that it is not that simple. There are aspects that are veridical and aspects that are not. Especially in the aspects that science is most concerned about - measurement and relationships - it is likely quite veridical; otherwise, as the author argues, we would never be able to launch a rocket and put a rover on Mars. We would never be able to adapt to prism glasses that turn everything upside down. That there are regularities in the world that we can perceive at some level is a requirement for being able to interact with the world.

This is again a misunderstanding. DH's model of a CA (conscious agents) network is currently very primitive and of course does not include those higher-level learning and memory mechanisms. He is not even trying to explain human consciousness with his current model; that is not his intent. Currently he is trying to create a model of the CA network that would result in a "virtual reality" matching the lowest, quantum-level regularities of observable phenomena. In other words, he is trying to show that the laws of physics (QM, SR/GR) can be modeled as the higher-level behavior of the network of agents. There is a long way to go from that to a model of any kind of even primitive, beetle-level consciousness.
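Just to illustrate what kind of claim that is, here is a toy of my own (not DH's actual formalism, which defines agents as Markovian kernels): a ring of trivially simple agents, each running a local perceive-decide-act round, where the network-level pattern is something no individual agent's rule mentions:

```python
import random

# Toy sketch, not DH's model. All parameters below are arbitrary choices.
random.seed(1)
N, STEPS, NOISE = 100, 300, 0.02
states = [random.choice([0, 1]) for _ in range(N)]

def tick(states):
    # One round per agent: perceive the two neighbours, decide by local
    # majority, act by adopting that state (with a little noise).
    new = []
    for i in range(N):
        votes = states[i - 1] + states[i] + states[(i + 1) % N]
        s = 1 if votes >= 2 else 0
        if random.random() < NOISE:
            s = 1 - s
        new.append(s)
    return new

for t in range(STEPS):
    states = tick(states)

# Network-level regularity: the agents settle into stable blocks of
# agreement, a structure absent from any single agent's rule.
print("".join(map(str, states)))
```

The point of such toys is only that lawful higher-level behavior can emerge from networks of very simple interacting units; whether DH can get QM and SR/GR out of his particular dynamics is exactly what his project still has to demonstrate.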
Jim Cross wrote:
Hoffman's network of conscious agents is derived from his PDA (perceive-decide-act) loop, so we need to ask first whether that is a useful way of picturing consciousness. Well, it turns out it probably isn't.
I made my own arguments against it in my own posts. The problem is that it omits learning and memory from the picture. As I wrote, we are not like a beetle that sees brown and decides to mate: we can provide context to any external-world stimulus through memory and learning. Memory and learning occur at the individual level in real time, not at the species level in evolutionary time.
A closely related criticism comes from this article, which I linked in the other thread.
http://philsci-archive.pitt.edu/15846/1/article.pdf
The cue-driven agent is like the beetle that sees brown and decides to mate. More complex organisms have a more complex decision-making process, one that takes into account memory and learning, that can test the environment through the organism's own actions, and that can correct or override perceptions. But what are they testing against? They are testing against the agent-independent, objective world that Hoffman tries to replace with his network of conscious agents, which is derived from his PDA loop and which the article shows to reflect "an extremely impoverished picture of the informational connections that hold between agent and world." From the abstract:

Here I examine the game-theoretic version of this skeptical line of argument developed by Donald Hoffman and his colleagues. I show that their argument only works under an extremely impoverished picture of the informational connections that hold between agent and world. In particular, it only works for cue-driven agents, in Kim Sterelny's sense. In cases in which the agent's understanding of what is useful results from combining pieces of information that reach them in different ways, and that complement one another (i.e., that are synergistic), maximizing usefulness involves construing first a picture of agent-independent, objective matters of fact.
But the point is that even if he included learning and memory in his evolutionary simulations, there is no way an observer with primitive conscious capacities (memory and simple learning abilities) could figure out the regularities of the underlying hidden CA network just by observing the regularities in its sense-perception data. There is still an epistemological gap that prevents primitive conscious organisms from veridically accessing the regularities, qualities and quantities of the "base reality". Even the most intelligent monkey would never be able to figure out the Schrödinger equation. The breakthrough into the base reality can happen only when conscious beings achieve a level of cognition that allows them to apply metaphysical and/or mathematical models and to understand that the rules of the base reality can be radically different from the rules and regularities of the reality directly perceived by the senses.
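Here is a toy illustration of that gap (again my own construction, assuming nothing beyond elementary arithmetic): two different bit-level "base realities" whose percept streams are identical, so no amount of observation at the percept level can recover which rule runs underneath:

```python
# Toy sketch of the epistemological gap: two different base-level rules,
# one lossy "percept" projection, identical observations.

def rule_a(bits):
    # Base rule A: rotate the bit string one position to the left.
    return bits[1:] + bits[:1]

def rule_b(bits):
    # Base rule B: reverse the bit string.
    return bits[::-1]

def percept(bits):
    # All the observer ever receives: the number of 1s (a lossy projection).
    return sum(bits)

seed = [1, 0, 1, 1, 0, 0, 1, 0]
a, b = seed, seed
stream_a, stream_b = [], []
for _ in range(10):
    stream_a.append(percept(a))
    stream_b.append(percept(b))
    a, b = rule_a(a), rule_b(b)

# The two percept streams match step for step, yet the base dynamics differ:
# percept-level regularities underdetermine the base-level rules.
print(stream_a == stream_b, stream_a)
```

A real observer is of course in a far richer situation, but the asymmetry is the same in kind: the map from base states to percepts is many-to-one, so inverting it takes models that go beyond the percepts themselves.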