
Continuing with the project I have been calling Made with AI – Colour Assignment: The Circular Life and Death of Images (2015–Now), first mentioned here, I have begun to explore empathy, paying particular attention to its place in our highly techno-scientific society. This post focuses on Sophie de Grouchy, who wrote about empathy in the eighteenth century, during the French Revolution; her work has been linked to the future development of AI in a book by Cameron J. Buckner. As the project aims to explore reductionism and technology, the subject of empathy becomes key.

After being buried by history, de Grouchy’s work has recently been excavated by Dr Kathleen McCrudden Illert and Dr Sandrine Bergès. In his book From Deep Learning to Rational Machines, Dr Cameron J. Buckner (2023) links her ideas to the evolution of deep learning architectures, suggesting that technologists incorporate what de Grouchy has to teach us into AI. De Grouchy’s response to Adam Smith’s (2022 [1759]) The Theory of Moral Sentiments seems to prefigure Freud’s pleasure principle, Bowlby’s attachment theory, Ettinger’s I/non-I matrixial relation, and Blaffer Hrdy’s alloparents. Moreover, her emphasis on the relational nature of human development and moral reasoning lends itself to exploration through a twenty-first-century materialist lens, as both she and that lens challenge traditional humanist notions of the autonomous individual and instead emphasise the interconnectedness of beings and systems.

Made with AI – Colour Assignment: The Circular Life and Death of Images

Sophie de Grouchy

  • I was introduced to Sophie de Grouchy (1764-1822) via Cameron J. Buckner’s From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. De Grouchy was an eighteenth-century writer and the author of Letters on Sympathy (1798), written as a response to Adam Smith’s (1759) The Theory of Moral Sentiments, which she also translated.
  • Edit 24/09/24: This podcast may be useful for anyone concerned about the distinction between ‘empathy’ and ‘sympathy’.
  • Buckner aims to bring computer science and philosophy together[1]. He does this by focusing on the work of several philosophers, mostly white, male, Enlightenment rationalists, suggesting that computer scientists might learn from them and apply that knowledge to future development. His central thesis revolves around learning, which he couches within a nativist/empiricist continuum while applying it to AI. Of de Grouchy, he writes, “De Grouchy had a particularly interesting empiricist take on the origins of empathy in the earliest experiences of infancy” (2023; 415), which posits that empathy begins to form early, during the mother/infant dyad stage, and emerges before a sense of self does[2].
  • Buckner asks: should AI models begin life with heavily fleshed-out, inbuilt ‘faculty models’, or should they have a more general start-up package that allows the machine to develop faculties for specific tasks?
  • Aside from discovering Sophie de Grouchy, one of the key questions I’ve had while reading From Deep Learning to Rational Machines is why he nowhere asks whether we should be trying to re-create ourselves as robots or ‘rational models’ in the first place. Perhaps he sees this as a pointless objection, since the technologists are hellbent on making duplicate beings in our image regardless. (Edit 22/09/24 – the transhumanists would probably have an answer to my nagging question.)
  • I cited artificial intelligence researcher François Chollet (2022) in a previous project after he said, “Human Intelligence Is a Poor Metaphor for What “AI” Is Doing. AI Displays Essentially None of the Properties of Human Cognition, and in Reverse, Most of the Useful Properties of Modern AI Are Not Found in Humans”.
  • With that in mind, we should ask ourselves why we are so determined to make duplicates, rather than how we should go about doing so.
  • This is particularly puzzling when we consider the cost of consciousness. It slows down cognitive processes considerably and seemingly encourages us, to a greater or lesser degree, to think we are the centre of everything, which may, some argue, be a reason for so much anti-adaptive behaviour (Hayles 2017; 26-29, 40-41). Being human is a costly endeavour in terms of energy, but it also seems to make us think things that are, on closer inspection, not always exactly true, or even remotely true. In other words, we make things up, smooth over gaps for the sake of (our assumption of) verisimilitude, miss information, and sense only what we need, all to suit the narrative the self constructs for itself out of a multiplicity of interrelating images (some of which we notice, some not), affect and action.
  • Buckner is clear that today’s rational machines are models that simulate human intelligence rather than fully-fledged AGI. But if humans are to be duplicated eventually, rather than merely simulated, he believes we should take a few steps back to make corrections.
  • “We already know that proceeding in the top-heavy, solipsistic tradition can have serious costs: by creating agents with social and moral reasoning systems that are not grounded in affective underpinnings, we create a hollow simulacrum of sociality which, similar to the grounding problems faced by GOFAI [good old-fashioned AI] in other areas of inquiry, is more akin to the reasoning of psychopaths” (2023, 311) (Edit 22/09/24 – surely we are living in a hollow simulacrum of sociality already? Before AI came along??).
  • While reading Buckner, I have a recurring thought that he often may as well be referring to human development rather than machines. Of course, this may be because he’s drawing on a wealth of literature which was historically focused on human development.
  • He also suggests that “where we call DNNs black boxes, it may be better to describe them as unflattering mirrors, uncomfortably reflecting some of our own foibles that we might be reluctant to confront” (Ibid; 89). And “… more akin to the reasoning of psychopaths” is a case in point.
  • It prompted me to consider Will Black’s (2022) Psychopathic Cultures and Toxic Empires, in which he says, “a relationship, family, workplace, political movement, institution, or media outlet (to list a few examples [and we should add technological entities]), can become warped by and mirror characteristics of psychopaths” (Ibid; 6).
  • Shannon Vallor’s (2024) The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking extends Buckner’s metaphor of the mirror into an entire book’s worth of argument. She also compares AI processes to psychopathy, but writes, “AI chatbot is not a sociopath—they have no more inner life than an Excel spreadsheet, and thus share none of the sociopath’s impulsivity, ambition, or narcissistic self-regard. But they are built by companies to exploit the same kind of asymmetrical advantage the sociopath does, and they leverage the same predictive talent for anticipating emotional responses that the sociopath enjoys” (Ibid, 147).[3]
  • Edit 24/09/24: I will need to write a post about questions concerning the mirror metaphor, seen in Barad (2007) and Giles and Joy (2008).
  • It becomes apparent that Sophie de Grouchy’s focus on sympathy – or empathy, as we would describe it today (see the second bullet point) – has much to offer. That de Grouchy was a woman is not something to bypass, although I am reluctant to make it the focus of any project[4]. That her voice was limited is, without doubt, critical to any research-creation or epistemological archaeology in favour of or against her reading of Smith (both positions exist). But her sex and gender, together with her focus on empathy, make it too easy to fall into the trap of essentialism, which I would want to avoid.
  • Nevertheless, when I first read about de Grouchy and then looked through her letters, I was struck and amazed by notions of hers that foreshadow, with remarkable accuracy, several modern writers – Freud, Richard Wrangham and Sarah Blaffer Hrdy, for instance. But of course they do! Even though she had been edited out, and even though her voice was not afforded the same opportunities as male writers’ were, she is a definitive node in a diffractive web of Western epistemology.

In the bibliography below, there are books and podcasts about de Grouchy, along with the other titles I referenced.

Click here to see Tanglegram of connections on a Miro Board (Miro Board Account required to gain access)

Images from various sources of research, including: 1. Self-portrait of Sophie de Grouchy; 2. Generative image (2022); 3. Diagram from the Harlow experiments; 4. From an archival film, Psychogenic Disease in Infancy, about children suffering from the effects of abandonment; 5. Stimuli used to assess the facial preferences of newborns (Simion et al. 2002, in Buckner, 2023); 6. Synthetic skin, which may one day “play a role in the invention of androids with humanlike appearances and abilities” (Wang, 2024).

Further links about this work will be added here or in new posts as and when they are published.

Bibliography

Black, W. (2022) Psychopathic Cultures and Toxic Empires. 2nd revised edn. Glasgow: Books Noir Ltd.

Buckner, C. J. (2023) From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. Oxford University Press. At: https://doi.org/10.1093/oso/9780197653302.001.0001 (Accessed 17/06/2024).

Bergès, S. (2023) ‘Sophie de Grouchy’, The Stanford Encyclopedia of Philosophy (Winter 2023 edn). Available at: https://plato.stanford.edu/archives/win2023/entries/sophie-de-grouchy/ (Accessed: 2 July 2024).

Chollet, F. [@fchollet] (2022) Twitter. Available at: https://twitter.com/fchollet/status/1573752180720312320 (Accessed: 25 September 2022).

Ettinger, B. (2006) The matrixial borderspace. Minneapolis: University of Minnesota Press (Theory out of bounds, v. 28).

Giles, M. and Joy, J. (2008) ‘Mirrors in the British Iron Age’, in Anderson, M. (ed.) The Book of Mirrors: An Interdisciplinary Collection Exploring the Cultural History of the Mirror. Newcastle: Cambridge Scholars Publishing.

Golumbia, D. (2019) ‘The Great White Robot God’.

Hayles, N.K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago, Ill: University of Chicago Press.

Hayles, N.K. (2017) Unthought: the power of the cognitive nonconscious. Chicago (Ill.): University of Chicago press.

‘Sophie de Grouchy: The most interesting French Revolutionary you’ve never heard of, by Dr Kathleen McCrudden Illert’ (no date). (The French History Podcast). Available at: https://podcasts.apple.com/gb/podcast/the-french-history-podcast/id1446181140?i=1000555253112 (Accessed: 8 July 2024).

Smith, A. (2022) The Theory of Moral Sentiments. Project Gutenberg. Available at: https://www.gutenberg.org/ebooks/67363 (Accessed: 8 July 2024).

Vallor, S. (2024) The AI mirror: how to reclaim our humanity in the age of machine thinking. New York, NY: Oxford University Press.

Wang, C.Q. (2024) ‘Japanese scientists graft “living skin” onto robot face’, South China Morning Post. Available at: https://www.scmp.com/video/technology/3270953/japanese-scientists-graft-living-skin-robot-face (Accessed: 20 September 2024).


[1] An OU (UK) based research group called Shifting Power states its aim as being “to surface other ways of thinking about AI and its impacts that do not centre (primarily) White, Western European notions of innovation, morality and justice.” What is very clear in Buckner’s book, however, is that the models he urges technologists to be influenced by are, in the main, Western rationalists – white, male, European: Locke, Hume, William James. It is true that he devotes a chapter to Ibn Sina and his writing on memory, and another to Sophie de Grouchy, who is herself part of the Enlightenment tradition although more materialist, but the overall emphasis is typical. This is hardly surprising, as he states at the beginning that he is tying the history of the empiricist faculty tradition to engineering, since, according to Buckner, engineers intent on building deep learning machines are building ‘models of the rational mind’. The rational, individual, Cartesian unity has been queried so effectively by post-structuralists, and then by neo-materialists and queer theorists, but it feels as if Buckner, despite his concessions, is unconcerned. His framing gives credence to the accusation (Western-centric though this accusation might be) that AI is neo-liberal through and through, or lends itself to the fantasies outlined in David Golumbia’s essay “The Great White Robot God” (2019), cited by Mustafa Ali (2024) in a talk titled Apocalyptic AI Otherwise. Despite Buckner’s relatively narrow European focus, there is much worth taking note of. I like his term “anthropofabulation”, a neologism combining anthropocentrism and confabulation, by which he means the inflated view of human cognition we bring to our comparisons with AI (and much else besides).

[2] Buckner’s interpretation is relatively linear. In addition, I suspect Bracha Ettinger (2006) would argue that connections and the development of empathy are more accurately traced back to when an infant is still in the womb, according to her matrixial borderspace framework.

[3] Vallor likens her own tone to that of a TED Talk and argues that the danger of handing over our humanity to machines warrants her shift away from a more academic register: “For many academics, the more restrained and moderate a fellow researcher’s claim, the more likely we are to take it seriously. We see nuance, caution, and heavy qualification as hallmarks of rigorous scholarship, in contrast with the bold, clear messages more easily packaged as a TED Talk or mainstream news feature. Warnings of previously unthinkable harms, or sea changes in our reality, can sell more books and garner more clicks, but the default habit of serious, sober minds is to regard such assertions as hyperbolic and untrustworthy. In research circles this habit has long been seen as a virtue. However sensible this habit might be on a historical scale, as a shield against fearmongers, demagogues, and charlatans of every political and scientific stripe, this is an unfortunate time to be anchored to it. These are not normal times. We are not okay” (2024; 161).

[4] There are too many pitfalls, not least of which is society’s lingering tendency to see care and empathy as women’s work, and reason and judgement as men’s. I was wary of this before committing to focus on de Grouchy, and included a link to Carol Gilligan’s arguments on my Miro Board, which urge us not to head down this route.
