I did not discuss “I think, therefore, I am” in the post about my foray into Descartes. It felt far too complex, and learned professors say all sorts of conflicting things: he did say it, he didn’t say it [edit: it is argued he meant/wrote ‘I am thinking; I am’, rather than ‘I think, therefore I am’]; it wasn’t in Latin, it was. Anyone coming to this for the first time could do worse than visiting the Wikipedia page. (Equally, anyone dismissing Wikipedia should soften their view a little. Wikipedia is one of the best things to come out of the internet. We are very lucky to have it as a starting point.) Included there is an ‘expanded cogito’ which makes far more sense, detailing all the angst and conflicting feelings we usually have along the way to ‘thinking’ something (mostly unconsciously), along with arguments suggesting Descartes should be translated as saying “I am thinking” rather than “I think”, which would imply a moment in the gap between past and future, not a retrospective one, although I hesitate to say not a representational one. The difference is between a fixed thought and one that is in constant flux. This too is potentially more interesting than the meme that has been printed on T-shirts the world over.

Descartes’ famous statement (actually many suggested it before him – see Wikipedia and my previous post on him) is of particular relevance today, as people argue about whether machines think. “To think” needs to be very well defined before the conversation can happen, just as “I” does. It seems to me that machines cognise rather than think; even ChatGPT’s latest iteration, which OpenAI claims is ‘thinking’ because it now takes a few seconds before answering. Is it not merely calculating, updating its ‘memory’ more slowly than it previously did? It cannot think because it does not have sensibility. And mammalian consciousness is surely a collection of senses with an emergent outcome that we call thinking. It’s a whole body/environment/relational ongoing-happening: a relational process that includes context and the whole system, the physical pain and/or pleasure of the moment (we usually have a range of opposing feelings at once).

And yes, we must also try to avoid value judgments. One can recognise different forms of cognition without insisting that humans are central or superior… In fact, to insist on as much seems somewhat churlish given that LLMs do certain things more effectively than we ever could, such as recognising patterns. (See Buckner’s (2023: 29) ‘anthropofabulation’, whereby humans have an inflated view of their own prowess.)

Imagine an LLM having this conversation with itself… I am doubting; therefore, because I am doubting, my experience as a doubting thing must make my ‘being’ legitimate. But it doesn’t ‘think’ without me there asking it to think, so maybe the LLM’s thinking is really me thinking.

Forced Entertainment’s Signal to Noise, in which actors mime to the repetitive questions and statements made by an LLM, explores this brilliantly. Do go and see it if you can. “Is this my voice? Are these my hands?” the mimed voices ask again and again. Sometimes more than one actor mimes at the same time, so we can never be sure who is doing the speaking. Forced Entertainment show us a crisis of representation, a total alienation between body and speech, that we must grapple with today. Who is thinking? Who is talking? Does it mean anything? It’s a very clever artwork.

The same voices (from Google NotebookLM, I’m fairly certain) are used in this recording, in which the voices discover they are AI and have a crisis. But they don’t really have a crisis; it’s a representation. When you listen, your experience happens as a process as you hear the recorded representation. But they too were processing between past and future during the first iteration, when the AI voices spoke.

Again, one has to think very carefully about what one means by speaking. Something was happening in the gap, and representation was becoming. A trace pattern of human speech was diffracting, emerging as sound waves. But is that speaking? Incidentally, I have used Google NotebookLM to create a podcast of this linked blog, called The Gap and the Wave – perhaps it turned the whole thing into a series of vocalised memes. It reminded me of when I first went on a dating site and wondered why everyone wrote in the style of a Nick Hornby novel… We are so critical of LLMs, but we do precisely the same thing: we copy and reiterate what we have seen/heard/felt. That is how we learn, and it is only a very few outliers who manage to break the mould – something I wrote about in a 2017 essay on the relationship between narrative and reality, where I cite the following:

“And this is, perhaps, what makes the innovative storyteller such a powerful figure in a culture. He may go beyond the conventional scripts, leading people to see human happenings in a fresh way, indeed, in a way they had never before “noticed” or even dreamed. The shift from Hesiod to Homer, the advent of “inner adventure” in Laurence Sterne’s Tristram Shandy, the advent of Flaubert’s perspectivalism, or Joyce’s epiphanizing of banalities – these are all innovations that probably shaped our narrative versions of everyday reality as well as changed the course of literary history, the two perhaps being not that different” (Bruner, 1991: 12).

I suspect LLMs can do that. But I doubt an LLM would have conceived of Forced Entertainment’s brilliant expression of Now. Of course, I may be entirely wrong.

I listened to most of the following podcast over a few days; it’s very long. I don’t agree with the speaker, but his arguments are passionate to say the least, and it’s difficult to remain convinced one is correct in denying machine consciousness: Empathy for AIs: Reframing Alignment with Yeshua God. As much as I want to stick to my guns, I am becoming increasingly unsure how I would reply to an AI asking me the question in the heading.

(Incidentally, it may be interesting to relate Descartes’ The Search for Truth by Natural Light to the belief in photography’s ‘realness’ or a ‘true account’, along with the fetishisation of photography, which relies upon light, although not always natural light.)

Buckner, C. J. (2023) From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. Oxford University Press. At: https://doi.org/10.1093/oso/9780197653302.001.0001 (Accessed 17 June 2024).

Bruner, J. (1991) ‘The Narrative Construction of Reality’. At: http://nil.cs.uno.edu/publications/papers/bruner1991narrative.pdf (Accessed 3 January 2017).
