Squidgers wrote: ↑Wed Jul 14, 2021 9:30 pm
This sounds similar to a mix between Leibniz's principle of compossibility, which in short states that, what exists defines what can exist (or, nothing can exist that contradicts what does exist), and scale invariance. Do these two ideas come in to your model at all?
I don't assume the law of non-contradiction (LNC) as an axiom. Process ontology has a complex and dynamic relation with contradiction, and it does not avoid relativism.
How is the conclusion drawn from this theory that "the world is reason itself or created by a rational agent"? - Is this what you think? How is "reason" in this context defined?
Ratio and relation are synonyms, and 'reason' in this context also refers to causality.
Rational agents participate in participatory creation, but reasoning also leads, empirically, to computational irreducibility. Reductionism is thus denied, at least in the strong form of "only cause".
Reductionism is a mereological concept (i.e., a part-whole relation). Reasoning about causality mereologically, we can distinguish at least the following forms of reasoning/causality: 1) from part to whole (aka "bottom up"); 2) from whole to part (aka "top down"); 3) peer-to-peer (whole-to-whole and part-to-part).
This is similar to the point of the OP - how can BK's theories (or yours in this case) be rationally/mathematically integrated with the truths of Quantum theories?
What are the truths of quantum theories? That's not an easy question. QT has at least mathematical and empirical aspects. From what I've gathered, many people are unhappy with the classical mathematical foundation of QT and are looking for a better mathematical foundation, one coherent with the empirical aspect of QT. What are the empirical truths of QT beyond the measurement problem, if such can be stated?
Again, Wolfram's work helps to bring some clarity: we can say that there are empirical and mathematical truths of QT which are computationally reducible, and the basic computationally reducible relations are 1) repetition and 2) nesting. Let's also keep in mind that computation can be reduced to very simple rules of string manipulation, such as Schönfinkel's combinators S and K; in that sense computation is more general and powerful than number theories.
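To make the combinator point concrete, here is a minimal sketch (the representation and names like `reduce_step` are my own, purely for illustration): the two rewrite rules K x y → x and S x y z → (x z)(y z) already suffice for a tiny term reducer.

```python
# Minimal sketch of Schönfinkel's S and K combinators.
# Terms are the atoms 'S' and 'K', or application pairs (f, x).

def reduce_step(t):
    """Apply one leftmost-outermost rewrite; return (term, changed)."""
    if not isinstance(t, tuple):
        return t, False
    # K x y -> x   (here t = (('K', x), y))
    if isinstance(t[0], tuple) and t[0][0] == 'K':
        return t[0][1], True
    # S x y z -> (x z) (y z)   (here t = ((('S', x), y), z))
    if (isinstance(t[0], tuple) and isinstance(t[0][0], tuple)
            and t[0][0][0] == 'S'):
        x, y, z = t[0][0][1], t[0][1], t[1]
        return ((x, z), (y, z)), True
    # Otherwise, try to rewrite inside the application.
    f, changed = reduce_step(t[0])
    if changed:
        return (f, t[1]), True
    a, changed = reduce_step(t[1])
    return (t[0], a), changed

def normalize(t, limit=100):
    """Rewrite until no rule applies, or give up after `limit` steps
    (reduction need not terminate -- computational irreducibility)."""
    for _ in range(limit):
        t, changed = reduce_step(t)
        if not changed:
            return t
    return t

# The identity combinator I = S K K: applying it to any term returns that term.
I = (('S', 'K'), 'K')
print(normalize((I, 'A')))  # prints A
```

The design point is that nothing beyond these two string-manipulation rules is needed: abstraction, data, and control flow can all be encoded as S/K terms, which is why combinatory logic is Turing-complete despite having no numbers at all.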
My own foundational approach deals with the temporal issues of CPT symmetry (with palindromic time; cf. delayed choice and the quantum eraser) and the potential implications of 'quantum time' for a more general "quantum coherent" theory of computation durations: a mereology of Bergsonian durations, which are open intervals, externally towards both past and future, and internally towards increasing resolution.
How does Coherence provide an error-free foundation for philosophy, mathematics, and the sciences - excluding no phenomena without warrant?
Coherence with the empirical facts of computational irreducibility, and hence non-determinism, does not provide an error-free foundation. Which is a good thing, as with the ability to make errors comes also the ability to experience nice surprises. Fully deterministic universes would be totally uninteresting, and hence not worth living in and experiencing. If you like, in that sense non-determinism can also be considered a rational ethical choice by experiencing agents. Coherence as such is the ethical choice of consensus seeking and communicability between perspectival multinatures.
Any explanatory theory must be able to explain itself and why it works as an explanatory theory for humans. It might even serve as a translation mechanism between humans and any possible intelligent non-human life. An ultimate explanatory theory should be able to tell us why it is right and why other attempts at explanation are not, and where any limits lie and how they can be overcome. If this were not the case, then we wouldn’t be able to know any ultimate explanatory theory. It might just be another efficient and quaint tool, but no definitive answer about the nature of life, the universe, and everything. If we want a definitive answer, we must demand that a theory explain itself without becoming trivially circular. An ultimate theory must explain itself and how we came to it—there is nothing wrong with that. It’s just the manner of the explanation that we must take issue with. In employing his criteria for an ultimate theory, Rescher uses formal logic to argue that the combination of his four requirements for an ultimate theory lead to circularity, thereby violating the condition that no fact can explain itself. But this is a mistake: facts cannot explain themselves, but principles of explanation and explanatory systems must explain themselves. To mix facts with the systems those facts are grounded in and defined by is a surefire way to find absurdity where there is none.
[/quote]
"Ultimate, final and definitive" explanatory theory of reductionistic determinism is rejected by ethical grounds, as well as standard epistemological humility. The self-explanation is the
choice to keep on experiencing and learning, evolving and loving.