Google’s powerful AI sheds light on a human cognitive problem
When you read a sentence like this, your past experience tells you that it is written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that sound remarkably human are actually generated by artificial intelligence systems trained on massive amounts of human text.
People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural, but potentially misleading, to think that if an AI model can express itself fluently, it must think and feel just as humans do.
Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system, LaMDA, has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, that is, capable of thinking, feeling and experiencing.
The question of what it would mean for an AI model to be sentient is complicated (see, for example, our colleague’s take), and our goal here is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, aware or intelligent.
Using AI to generate human language
Text generated by models like Google’s LaMDA can be difficult to distinguish from text written by humans. This impressive achievement is the result of a decades-long program to build models that generate grammatical, meaningful language.

Early versions dating back to at least the 1950s, known as n-gram models, simply counted occurrences of specific phrases and used them to guess which words were likely to appear in particular contexts. For example, it’s easy to know that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapple”. If you have enough English text, you’ll see the phrase “peanut butter and jelly” over and over again, but you may never see the phrase “peanut butter and pineapple.”
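To make that counting idea concrete, here is a minimal sketch of an n-gram-style predictor in Python. The toy corpus and the predict_next helper are purely illustrative inventions, not part of any real system, but they show how raw phrase counts can be turned into next-word guesses.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "enough English text".
corpus = (
    "peanut butter and jelly is a classic . "
    "she spread peanut butter and jelly on toast . "
    "he bought peanut butter and bananas ."
).split()

# Count how often each word follows each three-word context (a 4-gram model).
counts = defaultdict(Counter)
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])
    counts[context][corpus[i + 3]] += 1

def predict_next(context_words):
    """Return candidate next words for a context, most frequent first."""
    return counts[tuple(context_words)].most_common()

# "jelly" wins because it appears after "peanut butter and" more often.
print(predict_next(["peanut", "butter", "and"]))
# [('jelly', 2), ('bananas', 1)]
```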
Today’s models – sets of data and rules that approximate human language – differ from those early attempts in several important ways. First, they are trained on essentially the whole internet. Second, they can learn relationships between words that are far apart, not just neighboring words. Third, they are tuned by a huge number of internal “knobs” – so many that it is hard even for the engineers who design them to understand why they generate one sequence of words rather than another.
The task of the models, however, remains the same as in the 1950s: to determine which word is likely to come next. Today, they’re so good at it that almost every sentence they generate seems fluent and grammatical.
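For comparison, the sketch below shows how one can peek at a modern neural language model’s next-word guesses. It assumes the Hugging Face transformers library and uses the publicly available GPT-2 as a stand-in, since LaMDA itself is not publicly released, and the prompt is just an illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in here for large models like LaMDA, which is not public.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I had toast with peanut butter and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probabilities the model assigns to the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>10}  {prob.item():.3f}")
# Continuations such as " jelly" or " jam" typically rank near the top.
```

The mechanics differ enormously from the 1950s counting approach, but the output is the same kind of object: a ranked list of plausible next words, with no understanding required for the list to look convincingly human.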