Chomsky on AI: Part 1
A trip abroad with family brought me to some new but familiar voices speaking on AI.
Greetings from Finland
Bit of a strange week, this. I’m trying to keep up a regular writing routine, just having started one after years of in/consistent in/activity [delete as appropriate], but my usual habits have been thrown into chaos because I’ve come to Finland to see my son, who’s studying games design here for a term on academic exchange. And I’m joined by a fantastic collection of family members, all of whom are brilliant: Uncle Dave, Aunt Lauretta, my brother Robert, my other son, Will.
Having a great time with family isn’t necessarily conducive to good writing habits. (I won’t quite call it a ‘holiday’, though for all intents and purposes… ) But this family is a little strange, so there’s hope.
I’ve not long sat down with Uncle Dave in a Helsinki flat when he’s pouring me wine and quizzing me about AI. We all quiz each other on all sorts of stuff. Mostly politics. Or other stuff - education, theology, hockey - but it always comes back to politics. Because, as we know, everything is political.
So Uncle Dave asks how worried we need to be about AI. As always when he’s asking about stuff on which he isn’t already very knowledgeable, his questions are intelligent, informed, motivated by genuine interest, and probing. We get talking about ChatGPT. Then, having listened to me for probably far too long [he should just read this Substack, right?], he asks:
‘So, when Chomsky talks about ChatGPT as being “Plagiarism Software”, is he right?’
And I’m thinking Oh Christ. He’s pulled out the Big Gun here.
Noam Chomsky: A Family History
My family have a long history with Noam, as we affectionately call him. None of us know him personally; none of us have ever met him. I don’t think. Maybe at a book signing. But back in the late 1980s and 1990s, Noam was the academic voice on the left. Not just my uncle, aunt, brother… lots of people - ok, maybe not ‘lots’ as an adjusted percentage of the population - would travel for hours to see him speak. Like a bunch of Dead Heads trailing around after a very uncool but very smart Jerry Garcia. We compared different talks; we looked for subtle differences in the messaging.
(If you don’t know him, you can do worse than start here. Almost all of his writings can be found here. And if you search YouTube, you’ll find hours and hours of videos. He remains very prolific.)
And back when I was teaching literature and cultural theory, the famous 1971 debate between Noam and Michel Foucault became for me a particular touchstone - or even a sort of Rosetta Stone, maybe - that I discussed time and again, year after year, with students on so many different modules. Chomsky, his crossed leg showing too much sock and a bit of calf, being earnest as ever and completely rational in a way that leaves you nowhere to hide. Foucault, grinning mischievously, the trickster postmodernist feigning ignorance and simplicity: He can’t pretend to have a view on “human nature” as sophisticated as that of Professor Chomsky, he teases. Because Foucault doesn’t believe in ‘human nature’ like Chomsky does; for Foucault, the very idea of ‘human nature’ is just a concept through which power functions, etc…
Here. Have a watch:
I still sometimes imagine Michel and Noam, one sitting on each shoulder, whispering in my ear and egging me on in competing directions, like I’m a Looney Tunes cat debating whether or not I’m going to eat the mouse. (Yeah, the mouse that will escape anyway and hit me in the face with a shovel, drop an anvil on my head and then kick me off a cliff.)
I would say that I leaned closer and closer to Foucault’s view as the years passed, but I definitely started on Noam’s side. But anyway, whose world would you rather live in? If you asked both to remake human civilisation from scratch, surely Noam’s ideas provide a better foundation for society and prescriptions for government.
But my allegiance to Noam has dipped in recent years, for obvious reasons. Now that I’ve stopped playing the Foucault/Chomsky debate to students, the few times I’ve had to dip into Noam’s oeuvre - say, to explain to someone how the government uses media to manufacture consent - it all just seems inadequate. Happy ideas from a different age, before Facebook and Twitter/X, etc., and the broligarchy.
But Noam is still producing stuff, of course, on all the new platforms, and addressing the important issues of the day, so I check in every once in a while. The rest of the family do this regularly, too, to see what Noam says about Palestine, or Trump, or… Ukraine, which is when I stopped checking in.
Obviously Noam’s take on the Russian invasion of Ukraine is demonstrably incorrect on almost every count, which is even more evident since Trump has forced new problems in the region. Chomsky [see what I did there?] sees Russia as acting within reasonable bounds of self-defence; he denies the Ukrainians any sort of agency as a sovereign nation, being stuck in the same ‘spheres of influence’ mentality that Putin and Trump seem to be locked in; he blames the US and the UK for not working harder towards peace…
I get it. The Cold War shaped his thinking. Seeking to rebalance the lack of criticism of American imperialism in the mainstream media means that he instinctively wants to shift blame from the Soviets/Russia to the US. For decades, he was an important voice making those arguments - sometimes the only voice, and certainly the only voice with that level of prominence. But that’s not good enough anymore. It ignores how Russia - whether czarist, Communist or oligarchic - was always an imperial player on the world stage. He’s not the first intelligent human being that’s watched the world pass by and hasn’t kept up. He’s still one of the good guys, even if he’s not as relevant anymore.
Noam on AI
But hearing that Chomsky has weighed in on the question of AI certainly grabbed my attention. So I’ve been doing some research. And here are some initial findings:
Chomsky has given several interviews in which he has been asked about AI. I cite a few of them here, though this one—Noam Chomsky Speaks on What ChatGPT Is Really Good For, an interview conducted by C.J. Polychroniou in May 2023—seems to be one of the more sustained written commentaries I can find.
The short version, so far, is just that Chomsky isn’t all that worried about AI because he’s not at all convinced it’s that great a thing. He claims that AI doesn’t have the capabilities of a 2-year-old human child. Noam calls ChatGPT, as my Uncle Dave said, ‘sophisticated high-tech plagiarism’ and reminds us that Turing framed his famous test as an ‘imitation game’, that machines are only uncreatively, badly, mimicking human language, and that the appearance of doing something isn’t the same as actually doing it.
Chomsky distinguishes - imperfectly, he accepts - between ‘pure engineering’ and ‘science’. He laments that AI is now understood as a question of engineering, an attempt to produce something we can use, while AI’s founders - Alan Turing, Herbert Simon, Marvin Minsky, etc. - thought of research into AI as something closer to the cognitive sciences, an effort to advance human understanding.
It’s a clumsy distinction, but I can see the point he’s making, and we should probably not dismiss it, especially as AI is being co-opted by Big Tech to do their bidding. I know this doesn’t mean that others aren’t using AI for ‘more pure’ scientific research, but we absolutely need to ask not only what is AI? but also what, or who, is AI for? Maybe then we’d be less content to watch Musk and Zuckerberg charge forward with their vision of what AI should be without putting more limits on what they want to do.
If the term ‘radical humanist’ weren’t inherently funny, we might bestow it upon Chomsky (cf. the debate with Foucault, above). So it’s a nice surprise to see Chomsky adopting a posthumanist position here. He accepts not only that AI will surpass human capability but that it has already done so. But he also completely normalises this, explaining that ‘There is no Great Chain of Being with humans at the top,’ while pointing out that the small-brained ants in his backyard are already capable of things that human beings can’t do.
But ultimately, Chomsky falls back on his humanist foundations. He’s too smart to present the usual arguments that AI can never be as smart as humans because it will always lack that nebulous, metaphysical something, which proponents of this view usually ascribe to ‘spirit’ or ‘consciousness’. But it’s there. Chomsky’s version is some idea of the human brain and its infinite capacity for creativity. Hey. He came of (intellectual) age in the 1960s. They all thought like this.
For example, he says,
But even the most massive collection of data is necessarily misleading in crucial ways. It keeps to what is normally produced, not the knowledge of the language coded in the brain
And he says
Each species is unique, and humans are the uniquest of all. If we are interested in understanding what kind of creatures we are–following the injunction of the Delphic Oracle 2,500 years ago–we will be primarily concerned with what makes humans the uniquest of all, primarily language and thought, closely intertwined, as recognized in a rich tradition going back to classical Greece and India. Most behavior is fairly routine, hence to some extent predictable. What provides real insight into what makes us unique is what is not routine, which we do find, sometimes by experiment, sometimes by observation, from normal children to great artists and scientists.
His faith in science as a unique endeavour, like his belief in human nature in the debate with Foucault, also betrays some of this humanism from which he cannot unshackle himself. I say ‘unshackle’ not because it’s a fatal trap, necessarily, but because this humanism will, inevitably, always tie him down to a certain limited position when he is trying to wrestle with the question of AI.
Chomsky understands that what appears to be a ‘thinking’ machine is actually just an illusion. I like his analogy that a computer can no more be said to be ‘thinking’ than a submarine can be said to be ‘swimming’. If you want to call it that, fine, but that’s not what the submarine is doing. What we call ‘AI’, Chomsky correctly identifies, is just a machine trawling through massive amounts of data and finding enough statistical regularities that it can make a reasonable guess at what the next word will be.
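To make that ‘statistical regularities’ point concrete, here’s a deliberately tiny sketch - my own illustration, not anything Chomsky describes - of next-word prediction by counting which words tend to follow which in a toy corpus. Real systems use neural networks trained on vast amounts of text rather than simple bigram counts, but the underlying objective, guessing the next token from observed regularities, is the same.

```python
# A toy next-word predictor: count which word follows which in a tiny corpus,
# then guess the most frequent continuation. (Purely illustrative.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally bigram frequencies: for each word, how often each next word appears.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(guess_next("the"))  # 'cat' - it followed 'the' more often than 'mat' or 'fish'
print(guess_next("cat"))  # 'sat' - 'sat' and 'ate' tie; the first one seen wins
```

Scale that counting up by many orders of magnitude and swap the tallies for a trained model, and you get something close to what Chomsky dismisses, a few paragraphs down, as ‘glorified autofill’.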
So thinking machines aren’t that special because they’re not really thinking, he decides. Which is one way to look at it. The other possibility, of course, is that what humanists imagine ‘thinking’ involves isn’t all that special either. But that’s anathema to humanists, and it’s that sort of suggestion that gives us the nightmares that are the subject of so many scary movies. I’ll have to address this another time.
There’s a bit of strawmannery going on here, too, with Chomsky warning us that AI might be wrong, that it might contradict what science tells us is right, or that it might contradict our own common sense. Chomsky thinks that AI is dangerous, but only because people will take it seriously. This much is probably true. We shouldn’t ask AI questions, and we shouldn’t listen to it when it tells us things.
AI will be a powerful tool, Chomsky correctly says, too, in spreading misinformation and disinformation, including helping those whose messaging seeks to undermine the noble work of science. All of which puts Chomsky on his familiar footing, making a familiar critique. But the petition he sends readers to sign, calling for a pause on AI training and experiments, is more worried about AI as a general threat to human civilisation than about AI as a vehicle for misinformation - unsurprisingly, because this is the petition signed by Elon Musk, who clearly does not mind the spread of misinformation in the slightest.
Chomsky also misses something on the question of AI ‘morality’. He says,
Unless carefully controlled, AI engineering can pose severe threats. Suppose, for example, that care of patients was automated. The inevitable errors that would be overcome by human judgment could produce a horror story.
Now, this strikes me as a bit odd, that a man who has spent seven decades describing in forensic detail the moral failings of human beings should now have so much faith in human beings’ superior moral capacity over machines. As we like to say, natural stupidity is no competition for artificial intelligence. Or vice versa. Perhaps more worryingly, some of the most egregious ‘moral’ failings of human beings (and we know they can only be problematically considered so, thanks to the likes of Foucault) have been committed at precisely those times when human beings thought they were acting in completely rational ways, i.e. most like machines.
Chomsky, we have to remember, began and made his career as a linguist, and a very particular kind of linguist, one who emphasised natural capacities and creativity, and these concepts will always change the angle at which he approaches the question of AI. AI, he says, will never teach us anything about language, learning, intelligence or thought. It’s ‘glorified autofill’, which isn’t entirely wrong, but doesn’t really address the full picture, either.
The next series of questions - something more in-depth that I’ll try to get to in a future post - would look at how Chomsky’s views on AI are influenced by his ideas on language, whether his views on AI can survive critiques of his linguistics, and the relationship between the two. But first I’ll have to talk to some people who are wiser and better informed about Chomsky’s linguistics. And I’ll think more about his politics, too, to see how the whole puzzle fits together.
If you can help, or know someone who can, please comment!
Or share this with someone you think might be able to contribute.