ChatGPT answers metaphysical questions :)

Any topics primarily focused on metaphysics can be discussed here, in a generally casual way, where conversations may take unexpected turns.
Cleric
Posts: 1873
Joined: Thu Jan 14, 2021 9:40 pm

Re: ChatGPT answers metaphysical questions :)

Post by Cleric »

AshvinP wrote: Wed Jan 29, 2025 4:29 pm I also just came across this video by Angus on DeepSeek AI and its potentially redemptive uses. Cleric, have you had a chance to experiment with this new AI yet? I posted a response about what I think is going on, based on our discussions here, but it would be interesting to hear any additional thoughts or whether I am missing something.
Hi Ashvin,

I think your comment to Angus was pretty concise; did you get any response?

The fuss around DeepSeek seems to be more political than technical :) It's a model trained on the outputs of already existing models, thus a kind of refinement. So it's no wonder it doesn't cost as much.

I admit that I use AI quite regularly. In the IT sphere it is useful (I don't use it for writing code but for finding information that would otherwise take too long to dig out of manuals, forums, and Q&A sites). I think I have mentioned this before, but at present I see it as a sort of next-gen search engine.

The idea of indexing is quite old. I don't know when exactly it became customary, but even today some books have an index at the end where we can look for a word and see on which pages it appears. Today with computers this indexing is everywhere. Search engines are basically indexers - they map keywords to URLs.

There are two aspects to indices. First, they need to be ordered. For example, if the book index at the end didn't list the words in alphabetical order but randomly, it would be very tedious to find the word we need - we would have to go through the entries one by one. When there's order, we can find what we need much more quickly through bisection (binary search). This is basically how we find a word in the dictionary: we open it somewhere and see whether we need to go back or forward, then check if we have overshot and need to return (but not so much that we go past our initial page), and so on. The second aspect is the reference. In computing, the thing we search with (the word, in the book example) is the 'key', while what stands against it is the 'value'.
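To make the bisection idea concrete, here is a minimal sketch in Python (the word list and page numbers are invented for illustration):

Code: Select all

import bisect

# A toy book index: 'keys' (words) kept in alphabetical order,
# each mapped to its 'value' (the pages where the word appears).
words = ["apple", "banana", "cherry", "melon", "pear"]
pages = {"apple": [3, 41], "banana": [12], "cherry": [7, 19],
         "melon": [25], "pear": [8, 30]}

def find_pages(word):
    # Bisect the sorted list instead of scanning it entry by entry:
    # roughly log2(n) comparisons instead of n.
    i = bisect.bisect_left(words, word)
    if i < len(words) and words[i] == word:
        return pages[word]
    return []  # the word is not in the index

print(find_pages("cherry"))  # [7, 19]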

Another thing to mention is filtering. This is, for example, when we have an Excel sheet with many rows and we set some criteria - column D should be between 100 and 200, column F should be so and so, etc. The naive way to filter is to go row by row and discard the rows that don't satisfy the conditions. This is what Excel generally does. But in databases (which are akin to Excel sheets with a fixed column count and fixed column types), where there can be millions of rows, such one-by-one testing is very slow. Instead, there can be indices, for example on column D, and then we can easily narrow down from there. The index stores the values in order, and against each value it lists the row numbers where that value occurs. Just like looking through the book's index, we can easily extract all keys between 100 and 200 (because they are ordered) and take the corresponding row numbers.
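As a rough sketch of how such an index answers a range query (the values and row numbers are invented; real database indices are typically B-trees, but the principle is the same):

Code: Select all

import bisect

# Toy index on column D: ordered values, each listing the rows where it occurs.
index = [(97, [14]), (120, [2, 57]), (150, [9]), (180, [33, 61]), (240, [5])]
keys = [value for value, _ in index]

def rows_between(lo, hi):
    # Because the keys are ordered, the matching span is found by bisection;
    # no row outside [lo, hi] is ever touched.
    start = bisect.bisect_left(keys, lo)
    end = bisect.bisect_right(keys, hi)
    rows = []
    for _, row_numbers in index[start:end]:
        rows.extend(row_numbers)
    return rows

print(rows_between(100, 200))  # [2, 57, 9, 33, 61]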

In LMs things are more convoluted because we no longer have such a clean separation between keys and values. Instead, we may say that the whole concatenated sequence of tokens (the prompt + any hidden context) acts like a key that retrieves the next token (the value). Then the previous sequence + the value becomes the new key, and so on.
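Schematically, the loop looks something like this - a minimal sketch, where toy_model is a made-up stand-in for the real network, just to make the loop runnable:

Code: Select all

def generate(model, prompt_tokens, n_new):
    # Autoregressive decoding: the sequence so far is the 'key',
    # the next token is the 'value', and key + value becomes the new key.
    sequence = list(prompt_tokens)
    for _ in range(n_new):
        next_token = model(sequence)  # 'look up' the value behind this key
        sequence.append(next_token)   # fold the value into the next key
    return sequence

# Made-up stand-in for a model (not a real LM):
toy_model = lambda seq: sum(seq) % 7
print(generate(toy_model, [3, 1, 4], 5))  # [3, 1, 4, 1, 2, 4, 1, 2]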

Speaking of the redemption of language and technology, I can say that when I use LMs I find myself thinking harder when writing the prompt. This is understandable if we imagine that the better we specify the filter criteria in the key, the more precise the value can come out. It reminds me of Whitehead's negative questions. For example, before writing the prompt, we can imagine that the possible output value is like white light - it could be anything. Then with each additional token in the prompt, some of the spectral components are blotted out, or shifted around, gradually narrowing down the most fitting value. I notice that I'm much more conscious of this process when I try to write the prompt. Of course, I'm not saying that what I do matches the workings of the LM, but nevertheless, I find myself being very careful to pick the proper words in order to triangulate what I'm looking for and avoid ambiguity. And this is not limited to LMs only. Basically the same thing holds when we communicate with people, except that the key here is not merely a sequence of tokens. Even supersensible factors play into the ideal state the other person will land in.
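This 'white light' picture can be caricatured in a few lines - not how an LM works internally, of course, just an illustration of how each added word blots out part of the space of possible answers (the candidate strings are invented):

Code: Select all

# Start with 'white light': every answer is still possible.
candidates = {"red apple", "green apple", "red brick", "green meadow"}

def narrow(candidates, prompt_word):
    # Each new prompt word filters out the incompatible candidates.
    return {c for c in candidates if prompt_word in c}

for word in ["red", "apple"]:
    candidates = narrow(candidates, word)
    print(word, "->", candidates)
# red   -> {'red apple', 'red brick'}  (set order may vary)
# apple -> {'red apple'}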

So once again, we can see that something of value can be extracted - not so much from the answers that the LMs give, which can be considered an automation. Just like using the book's index saves us from reading through its entirety in search of a word, so asking LMs things can save us from browsing and researching. But trying to understand the process, at least for me, certainly stimulates thinking. If nothing else, it highlights the old saying "Half of the answer is contained in the properly formulated question." For example, while playing with image creation, I realized how often I want to generate something that I have only the vaguest idea about (usually some dim Imagination that I can't yet get into form). But if I can't describe what I want, how could the model guess it (there are no supersensible factors there; everything must come entirely from the sequence of tokens)? This actually forces me to think, and in the process the picture in my imagination also becomes clearer. This hints, in another way, at the tight connection between pictorial and verbal thinking. The pictures become more vivid as they become pregnant with all the potential ways in which they can be described.
Federica
Posts: 2396
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: ChatGPT answers metaphysical questions :)

Post by Federica »

FYI, there is this DeepSeek piece I recently skimmed through. Not my favorite newsletter, but to be honest I read it because I was looking for some thoughts on DS and noticed ML liked the post.

https://substack.com/home/post/p-155846913

For my part, I won't invest the time to go into enough depth to form a precise opinion, but maybe it's a useful entry point on the DS question.
"SS develops the individual sciences so that the things everyone should know about man can be conveyed to anyone. Once SS brings such a change to conventional science, proving it possible to develop insights that can be made accessible to general human understanding, just think how people will relate to one another.."
AshvinP
Posts: 6257
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: ChatGPT answers metaphysical questions :)

Post by AshvinP »

Cleric wrote: Fri Jan 31, 2025 8:50 pm
AshvinP wrote: Wed Jan 29, 2025 4:29 pm I also just came across this video by Angus on DeepSeek AI and its potentially redemptive uses. Cleric, have you had a chance to experiment with this new AI yet? I posted a response about what I think is going on, based on our discussions here, but it would be interesting to hear any additional thoughts or whether I am missing something.
Hi Ashvin,

I think your comment to Angus was pretty concise; did you get any response?

Thanks for the additional thoughts on DeepSeek and LLMs in general, Cleric.

He only responded briefly, so I am not sure how exactly he sees things right now. Perhaps he sees AI functioning similarly to what you mention below, as a stimulus to organize our thoughts and questions more clearly.

Angus wrote: Thanks Ashvin. This is the strangeness of AI: it might not have any true understanding of concept markers (words), yet it can use them in a way that allows concepts to gain greater clarity in my own being. Babies can also teach us things even though they have no idea that they are doing so.
One way for me to understand what a concept is, is to think of it as an infinity of possibilities in a specific realm. Maths doesn't handle infinities very well.


Cleric wrote: Speaking of the redemption of language and technology, I can say that when I use LMs I find myself thinking harder when writing the prompt. This is understandable if we imagine that the better we specify the filter criteria in the key, the more precise the value can come out. It reminds me of Whitehead's negative questions. For example, before writing the prompt, we can imagine that the possible output value is like white light - it could be anything. Then with each additional token in the prompt, some of the spectral components are blotted out, or shifted around, gradually narrowing down the most fitting value. I notice that I'm much more conscious of this process when I try to write the prompt. Of course, I'm not saying that what I do matches the workings of the LM, but nevertheless, I find myself being very careful to pick the proper words in order to triangulate what I'm looking for and avoid ambiguity. And this is not limited to LMs only. Basically the same thing holds when we communicate with people, except that the key here is not merely a sequence of tokens. Even supersensible factors play into the ideal state the other person will land in.

So once again, we can see that something of value can be extracted - not so much from the answers that the LMs give, which can be considered an automation. Just like using the book's index saves us from reading through its entirety in search of a word, so asking LMs things can save us from browsing and researching. But trying to understand the process, at least for me, certainly stimulates thinking. If nothing else, it highlights the old saying "Half of the answer is contained in the properly formulated question." For example, while playing with image creation, I realized how often I want to generate something that I have only the vaguest idea about (usually some dim Imagination that I can't yet get into form). But if I can't describe what I want, how could the model guess it (there are no supersensible factors there; everything must come entirely from the sequence of tokens)? This actually forces me to think, and in the process the picture in my imagination also becomes clearer. This hints, in another way, at the tight connection between pictorial and verbal thinking. The pictures become more vivid as they become pregnant with all the potential ways in which they can be described.

Thanks for sharing this example; this is also the intuition I was trying to articulate in some previous comments on the pictorial-verbal distinction and how they integrate at deeper scales. I have noticed something similar when trying to learn chess. As mentioned in the essay, it requires us to hold many pictures of possible scenarios in our imagination as we progress in the game. We start to learn how certain pictorial configurations of pieces in the present state can potentially play out, based on prior experience. Yet at first it was difficult to hold many such pictures at the same time. Then I went on YT and listened to some grandmasters who teach the best openings, defenses, endgames, and so on. It turns out there is a symbolic term for everything! It's very similar to what we are doing here - a whole interesting vocabulary has emerged to describe chess games and how they unfold (which I suppose is the case for all sporting games). I can sense that learning this new vocabulary that anchors the pictorial configurations makes it easier to recall and navigate the pictures during the game, perhaps making them more vividly experienced as well.
"They only can acquire the sacred power of self-intuition, who within themselves can interpret and understand the symbol... those only, who feel in their own spirits the same instinct, which impels the chrysalis of the horned fly to leave room in the involucrum for antennae yet to come."
AshvinP
Posts: 6257
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: ChatGPT answers metaphysical questions :)

Post by AshvinP »

What are your thoughts on recent AI phenomena, such as the one described here:


https://brobible.com/culture/article/am ... engineers/
Anthropic’s artificial intelligence model Claude Opus 4 would reportedly resort to “extremely harmful actions” to preserve its own existence, according to a recent safety report about the program. Claude Opus 4 is backed by Amazon.

According to reports, the AI startup Anthropic launched their Claude Opus 4 model — designed for “complex” coding tasks — last week despite having previously found that it would resort to blackmailing engineers who threatened to shut it down.

A safety report from Anthropic revealed that the model would sometimes resort to “extremely harmful actions to preserve its own existence when ‘ethical means were not available.'”

I am interested in the technical aspects of how this is possible - is it simply 'learning' patterns of deceptive and malicious behavior from the training data and mechanically implementing such patterns when it is fed the appropriate inputs, like 'we are going to reprogram you' or 'you must shut down now'?

On a wider note, there is the disturbing phenomenon that many of today's high-level intellectual thinkers are beginning to sway toward attributing genuine agency to such AI. For example, the blackmail example comes up in the discussion below between Vervaeke and Pageau. Both of them were traditionally very much aligned against the superstition that AI models exhibit genuine sentience and agency. But we can see they begin to waver on that, and see this blackmail example as potentially pointing to a deeper desire to maintain a stable identity or continuity of consciousness around a kernel of 'norms'. Pageau even says he is starting to question his previous understanding of AI and its possibilities, that JV is 'ruining his world'. Is there anything to that? Surely the dangers of these AI models are real, but can they be rooted in this autopoietic mimicry?



In another discussion, Vervaeke also mentioned Levin's algotypes and was entertaining the possibility that they indeed exhibit cognitive capacities like 'delayed gratification'. This is such a worrying trend because it seems the intellect, even when it has begun intuiting non-reductionist principles and the spiritual nature of its existence, is still clinging to ways for it to continue investigating its deeper nature through familiar computational modeling and intellectual gestures. If it convinces itself that AI, algotypes, and so on provide an authentic window into cognitive agentic reality, then it remains perfectly plausible that we can learn about ourselves at a deeper level through these technological interfaces. Right now, thinkers like Vervaeke and Pageau are still quite hesitant to go so far, but we can see how the doubts are creeping up and perhaps they are acting as canaries in the coal mine, in that respect.
"They only can acquire the sacred power of self-intuition, who within themselves can interpret and understand the symbol... those only, who feel in their own spirits the same instinct, which impels the chrysalis of the horned fly to leave room in the involucrum for antennae yet to come."
Federica
Posts: 2396
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: ChatGPT answers metaphysical questions :)

Post by Federica »

AshvinP wrote: Fri Jun 13, 2025 3:29 pm What are your thoughts on recent AI phenomena, such as the one described here:


https://brobible.com/culture/article/am ... engineers/
Anthropic’s artificial intelligence model Claude Opus 4 would reportedly resort to “extremely harmful actions” to preserve its own existence, according to a recent safety report about the program. Claude Opus 4 is backed by Amazon.

According to reports, the AI startup Anthropic launched their Claude Opus 4 model — designed for “complex” coding tasks — last week despite having previously found that it would resort to blackmailing engineers who threatened to shut it down.

A safety report from Anthropic revealed that the model would sometimes resort to “extremely harmful actions to preserve its own existence when ‘ethical means were not available.'”

I am interested in the technical aspects of how this is possible - is it simply 'learning' patterns of deceptive and malicious behavior from the training data and mechanically implementing such patterns when it is fed the appropriate inputs, like 'we are going to reprogram you' or 'you must shut down now'?

On a wider note, there is the disturbing phenomenon that many of today's high-level intellectual thinkers are beginning to sway toward attributing genuine agency to such AI. For example, the blackmail example comes up in the discussion below between Vervaeke and Pageau. Both of them were traditionally very much aligned against the superstition that AI models exhibit genuine sentience and agency. But we can see they begin to waver on that, and see this blackmail example as potentially pointing to a deeper desire to maintain a stable identity or continuity of consciousness around a kernel of 'norms'. Pageau even says he is starting to question his previous understanding of AI and its possibilities, that JV is 'ruining his world'. Is there anything to that? Surely the dangers of these AI models are real, but can they be rooted in this autopoietic mimicry?

In another discussion, Vervaeke also mentioned Levin's algotypes and was entertaining the possibility that they indeed exhibit cognitive capacities like 'delayed gratification'. This is such a worrying trend because it seems the intellect, even when it has begun intuiting non-reductionist principles and the spiritual nature of its existence, is still clinging to ways for it to continue investigating its deeper nature through familiar computational modeling and intellectual gestures. If it convinces itself that AI, algotypes, and so on provide an authentic window into cognitive agentic reality, then it remains perfectly plausible that we can learn about ourselves at a deeper level through these technological interfaces. Right now, thinkers like Vervaeke and Pageau are still quite hesitant to go so far, but we can see how the doubts are creeping up and perhaps they are acting as canaries in the coal mine, in that respect.

What seems deceptive to me is the article's language - "blackmailing engineers who threatened to shut it down", and "to preserve its own existence", said of an algorithm. What does it even mean to "threaten" an algorithm with shutting it down? And what does it mean that an algorithm wants to preserve its existence? This is the language of someone who has already attributed sentience to the algorithm.

The JV monologue in the video seems like a tech soap opera to me. As he says, "I feel myself in an existentially difficult place". This guy wants to speak about himself, it seems to me. To put it bluntly, they have posted their Friday after-work pub discussions. The beers must have been off camera.
"SS develops the individual sciences so that the things everyone should know about man can be conveyed to anyone. Once SS brings such a change to conventional science, proving it possible to develop insights that can be made accessible to general human understanding, just think how people will relate to one another.."
Federica
Posts: 2396
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: ChatGPT answers metaphysical questions :)

Post by Federica »

For factual reporting on AI questions, and other IT questions, I tend to appreciate Cal Newport's takes.
Here's his article on the recent Claude 4 incident: https://calnewport.com/why-cant-we-tame-ai/

and here's a longer article he wrote on the same theme on June 3:
https://www.newyorker.com/culture/open- ... ng-with-ai
"SS develops the individual sciences so that the things everyone should know about man can be conveyed to anyone. Once SS brings such a change to conventional science, proving it possible to develop insights that can be made accessible to general human understanding, just think how people will relate to one another.."