rezam06 wrote: ↑Mon Apr 24, 2023 7:21 am
Hello Federica
May I know where you depart from BK's idealism? (Or what's your comment on BK's idealism?).
Thank you
Hello Rezam, sure. Thanks for your question!
In general terms, where I depart from AI - which stands for analytic idealism in this case
- is in its methodological approach to the understanding of reality, to start with.
Basically, our common goal is to understand experience,
our conscious experience. Still, AI tells us that we should
not start from a consideration of just that: what happens when we experience the world. Instead, it tells us that we should start by positing an ontological prime, namely, that reality is of ideal nature.
Incidentally, I know reality is, indeed, of ideal nature, but setting this hypothesis
as the starting point already sounds like we are asked to accept a modus operandi that somebody else seems to have arbitrarily decided for us, although it's
our direct experience we want to grasp. Why should we start with ontology? “Because one ontic prime is better than two, it’s more parsimonious” BK says. “Because we need to define an abstract rock bottom hypothesis, in order to have a determined starting point for the reasoning”.
But our goal is to enquire into
how we cognize reality, something quite concrete. Can we not consider that experience directly? “No - AI tells us - we need
to pin down the reasoning somewhere, so we pick a piece of the reasoning, and we make it our postulate of choice, start there, and see if it stands up reasonably. That’s how philosophy works.”
In other words, we are told that the reasoning itself is anchored to nothing; it would gladly fly around in the world of abstractions, if it were up to it. Therefore, we need to pin it down. We act
as if one link in the chain of reasoning were real/true. We don’t know whether that’s the case, because the whole thing is not our direct experience, it flies in the air (
it’s abstract), but we postulate it anyway. In this way we have a “rock bottom” to “rely on”.
To me, this does not sound like a very good start. It sounds like we are compromising a lot before we even get started. So I prefer to depart from such an approach.
For me, it’s as if you were an investor, let's say. You have funds. You are ready to go out there and make your choices, to place your stakes in various markets. But you are told: “Please sit down, now let’s play Monopoly. Here's your game piece, here are the game rules, let’s play.” And you reply: “But I have real money, I don’t want to play in the abstract, according to the fictional rules of this table game. I want to get in touch with the real world, and make real placements, in the real market, with real money, not with these colorful tokens!" In the same way, AI plays a game of "let’s postulate this, then let’s submit that, and let’s treat this possible objection like so" etc. etc. (which is probably true for other philosophical approaches as well).
But the thing is, we are interested in
our own process of cognition of reality, and we have a sense that matter can't explain consciousness. Therefore reality should be of ideal nature; ideas should be the backbone of reality. But let's notice at the same time that ideas are also "inside" our process of experience, obviously, not only "outside". In other words, we are considering a nature of reality that seems to exist both inside and outside our human perspective. We are evidently entangled in it in a complex way. Yet AI tells us: no worries, sit back, no need to look at things from a first-person perspective, you can conveniently take yourself out of the equation and play around with hypotheses and statements, as with colorful tokens. You can try various configurations, from various hypothetical starting points, and see what “makes most sense”.
As if it was possible to treat the whole question like someone else’s question, like a table game. But how can we try to grasp experience (the connection of perceptions with ideas) by looking at the question in the hypothetical capacity of an external observer, like a game we can run in various ways? It seems clear that as soon as
we think about it, we are having an experience, we are changing our flow of experience from within. We are activating and moving that same 'thing' that we are trying to explain. So it's impossible to treat it as a fidget toy that we can manipulate at will, without being ourselves changed in the process. Clearly our experience changes in the process of trying to explain experience, and so we can't 'sit back' and play around with a 'fidget-model' of it.
This is just an initial idea of where I depart from AI. If you were looking for some more specific objections, my very first posts on this forum were in fact
objections to BK’s model. How do you depart from analytic idealism?