Glitch, potentially, from new work The Circular Life and Death of Images (2025)

Why are we trying to make robots human at all?

After reading Hayles’ (2017) Unthought, I kept wondering why the technicians were hellbent on recreating human thinking in machines when the machines were evidently so good at machine cognition. We humans, by contrast, are slowed down considerably by all the other stuff that goes on when we do our thinking – digesting, managing and navigating feelings, coping with tiredness. In fact, thinking entails all of that. And it is all these other things that make us human – this slowed-down cognition is what has led to very specific human creative and ‘artistic’ outcomes. This does not mean valuing the human more than any other critter/being; in fact, it suggests the opposite: accepting what we are with a degree of humility. Why not leave the machines to do the sort of cognising they are best placed to do, while we get on, perhaps with their support and even entanglement, with doing what we do well? A question that speaks to Chollet’s 2022 Tweet in the image below.

From Some favourite screenshots (Chollet is also referenced in the Apple research below)


I hesitate to use the word ’embodied’ in relation to those other processes, because the thinking we do with our whole systems is always embodied in vessels that smell and leak, regardless of whether our narratives focus on the brain or the stomach. I have never been convinced that machines think anything like I do, although, of course, they simulate doing so and appear to have some level of emergent qualities. The question of why we are trying to ‘remake’ actual humans with machines is not new; others have been asking it for aeons. It is likely an inevitable byproduct of some long-ago evolutionary trait that allows us to care for each other and/or recognise danger. In which case, perhaps that old line, ‘he made us in his image’, reveals a form of projection (we made God in ours) related to our evolutionary storytelling capacities.

I have addressed this topic in previous scribbles too, namely after reading Buckner’s From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence, and wondering why he recommends that machines emulate human empathy (I agree with his prognosis about psychopathy, but with that in mind, don’t we humans need to reconnect with empathy first?). Then I wrote:

…. [the] argument that technologists take heed of de Grouchy’s writing when considering how and why to implement a rational machine’s ‘moral compass’ is compelling. He [Buckner] reminds developers “…proceeding in the top-heavy, solipsistic tradition can have serious costs: by creating agents with social and moral reasoning systems that are not grounded in affective underpinnings, we create a hollow simulacrum of sociality which, similar to the grounding problems faced by GOFAI [good old-fashioned AI] in other areas of inquiry, is more akin to the reasoning of psychopaths” (2023: 311). Elsewhere he also says, “where we call DNNs [deep neural networks] black boxes, it may be better to describe them as unflattering mirrors, uncomfortably reflecting some of our own foibles that we might be reluctant to confront” (ibid.: 89). Juxtaposing these two thoughts leads to the following questions: are we creating a hollow simulacrum of sociality for humans themselves? And if so, why? And what can we do to address that reality? (From a proposal for work, semi-abandoned but likely to end up combined with another, newer project).

The following recently published articles address these related questions in different ways.

  • The Loom and the Weavers by Jac Mullen (2025), which I found via The Hinternet, is the first in a proposed series that begins a long-overdue process: critiquing and dismantling the blanket term Artificial Intelligence. It distinguishes pattern recognition from something more sophisticated, exemplified in the Large Language Models released to the public in 2022/23 and described as having “persistent, relational personae, baptized into language, bearing the marks of cognitive metabolism—memory, attention, agency”.

Both threads are worth reading and thinking about in conjunction.

Elsewhere, my new obsession is Günther Anders, who wrote about the human’s total inability, or improbability, of existing outside artifice. I am no longer certain it is useful to think in terms of there being a ‘pre-verbal’ stage for modern humans (see my project ://LAMELLA) – I suspect many would disagree with this, but perhaps Anders’ argument helps my cause. I may come back to that in more detail at some point. Here’s an LLM’s take in the meantime: “If humans are always already embedded in artifice, then the human/machine distinction becomes less about natural versus artificial and more about different forms of technological entanglement. This could support your skepticism about recreating human thinking – perhaps we should focus on understanding how different forms of cognition (human, machine, hybrid) can productively coexist rather than trying to make machines think like humans. […All suggesting] the anthropomorphic AI project is fundamentally misguided. Not because machines can’t be powerful, but because trying to make them human-like both misunderstands what makes human cognition valuable and prevents us from developing more productive human-machine relationships.”

Refs:

Buckner, C.J. (2023) From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. Oxford University Press. Available at: https://doi.org/10.1093/oso/9780197653302.001.0001.

Chollet, F. [@fchollet] (2022) ‘Human intelligence is a poor metaphor for what “AI” is doing. AI displays essentially none of the properties of human cognition, and in reverse, most of the useful properties of modern AI are not found in humans.’, Twitter. Available at: https://twitter.com/fchollet/status/1573752180720312320 (Accessed: 25 September 2022).

Clark, J.T. (2025) ‘When AI Reasoning Hits the Wall: The Collapse of Large Reasoning Models’, Craine Operators Blog, Medium, June. Available at: https://medium.com/craine-operators-blog/when-ai-reasoning-hits-the-wall-the-collapse-of-large-reasoning-models-03a3dee12a36 (Accessed: 11 June 2025).

Hayles, N.K. (2017) Unthought: The Power of the Cognitive Nonconscious. Chicago, IL: University of Chicago Press.

Mullen, J. (2025) ‘The Loom and the Weavers’, After Literacy, 5 June. Available at: https://jacmullen.substack.com/p/the-loom-and-the-weavers-part-one (Accessed: 11 June 2025).

Müller, C. (2016) ‘Anders, Günther’, Critical Posthumanism Network, 23 April. Available at: https://criticalposthumanism.net/anders-gunther/ (Accessed: 11 June 2025).

Smith-Ruiu, J. (2025) The Hinternet. Substack. Available at: https://www.the-hinternet.com/ (Accessed: 11 June 2025).

Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S. and Farajtabar, M. (2025) The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. Apple Machine Learning Research. Available at: https://machinelearning.apple.com/research/illusion-of-thinking.