Large Language Models (LLMs) like ChatGPT have shown some amazing capabilities. Critics such as Yann LeCun claim that these models are not the road to Artificial Intelligence and that they are nothing more than stochastic parrots, merely regurgitating patterns found in their training data (billions of text documents). In this metaphor, LLMs have no true understanding of what they are saying and are incapable of creating new knowledge – pretty much like a parrot. I believe that the “stochastic parrot” metaphor not only leads to a distorted understanding of how LLMs work but also blinds us to the nuances of human language and thought.
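To make the metaphor concrete, a literal “stochastic parrot” would be something like a simple Markov chain over bigrams: it can only re-emit word transitions it has already seen. The sketch below is a deliberately crude caricature (the toy corpus and bigram scheme are illustrative only, nothing like how LLMs actually work):

```python
import random
from collections import defaultdict

# Toy corpus standing in for "billions of text documents".
corpus = "the parrot repeats the words the parrot has heard before".split()

# Bigram table: each word maps to the list of words observed after it.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def parrot(start, length):
    """Generate text by sampling only transitions seen in the corpus."""
    out = [start]
    for _ in range(length - 1):
        choices = bigrams.get(out[-1])
        if not choices:
            break  # dead end: the parrot has nothing left to repeat
        out.append(random.choice(choices))
    return " ".join(out)

print(parrot("the", 6))
```

Every adjacent word pair such a model emits already occurs verbatim in its training text; the debate is precisely whether LLMs are, or are not, qualitatively more than this.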
The illusion of explanatory depth
The metaphor leads to an over-simplification of both LLMs and human cognitive processes. It gives the impression that humans have a more profound understanding of these complex systems than we actually do. But do we really understand what we say, or is it just an illusion?
Humans often overestimate their understanding of various topics, leading to the phenomenon known as the “illusion of explanatory depth.” This illusion can be prevalent in a variety of areas, from everyday objects to abstract concepts and theories. When asked to explain these in detail, we often realize that our understanding is shallower than we initially believed – this is particularly obvious when people are asked to explain how the economy works.
The same principle may apply to our language use. While we construct sentences and use language fluently, we may not fully understand the intricacies of grammar, syntax, or semantic rules that govern language. Our perceived understanding — be it of objects, processes, language, or even our own thoughts — can often be superficial, leading us to mistakenly believe we comprehend more deeply than we actually do.
Confabulation (the creation of false or distorted memories, often without the intention to deceive) is another well-reported phenomenon that adds an intriguing layer to our understanding of human cognition. In essence, it is an unconscious process that attempts to maintain a cohesive and consistent narrative for ourselves.
A misdirection of evaluative effort
Furthermore, the parrot analogy leads us to believe that LLMs merely regurgitate information without any understanding. In reality, these systems can create novel outputs not present in the training data – from a new recipe to a new hypothesis for testing an idea or designing an experiment. This capability does not align with the notion of a parrot’s mindless repetition.
This false belief about LLMs as “stochastic parrots” might misdirect evaluative efforts. Under the parrot analogy, researchers may overlook the need to investigate how LLMs can generate novel outputs not present in the training data. This reduces the chance to enhance the LLMs’ capabilities or address possible limitations or risks.
Composability and abstractions
Another element that invalidates this metaphor is the composability capability of LLMs. Composability refers to the capacity to combine simple concepts to form more complex ones, and it is crucial in human cognition and language. LLMs, having been trained on vast datasets, have observed and learned numerous examples of such composability. For instance, they can extract abstract concepts, like “sustainability” and “agriculture,” and generate discussions or arguments around “sustainable agriculture.”
LLMs are incredibly proficient at extracting and combining complex features and abstractions from text. This proficiency stems from the underlying architecture and training process of these models, allowing them to discern intricate patterns in the data and map higher-level abstract concepts. For instance, given a discourse about climate change, an LLM could extract core concepts such as greenhouse gases, global warming, carbon footprint, and renewable energy.
Subsequently, these models can combine these concepts to generate insightful summaries, pose potential solutions, or elaborate on the effects and implications in diverse contexts. For example, they could discuss the impact of reducing carbon footprints on slowing global warming, or suggest how renewable energy can contribute to climate change mitigation.
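One loose way to picture composition is as arithmetic in an embedding space, where combining two concept vectors lands near the representation of the combined concept. The 3-dimensional vectors below are hand-crafted toys for illustration only; real LLM representations are high-dimensional and learned, not designed:

```python
import math

# Hand-crafted toy "embeddings" (illustrative only; not real model vectors).
emb = {
    "sustainability": [1.0, 0.0, 0.2],
    "agriculture": [0.0, 1.0, 0.2],
    "sustainable agriculture": [0.6, 0.6, 0.25],
    "rocket": [0.0, 0.0, 1.0],  # an unrelated concept, for contrast
}

def compose(u, v):
    """Naive composition: average the two concept vectors."""
    return [(a + b) / 2 for a, b in zip(u, v)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

combo = compose(emb["sustainability"], emb["agriculture"])
# The composed vector sits much closer to "sustainable agriculture"
# than to the unrelated concept.
print(cosine(combo, emb["sustainable agriculture"]))
print(cosine(combo, emb["rocket"]))
```

The point of the toy is only that composition is a geometric operation on representations, not a lookup of memorized strings – which is exactly what the parrot metaphor leaves out.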
A nice way to test these ideas is to ask an LLM to generate a poem on any topic in the style of any public figure, or to rewrite song lyrics in a completely new context (see examples).
A misdirection of discussion about risks and harms
The parrot metaphor hampers meaningful discussion about the potential risks and harms of LLMs. If we understand these models as parrots, we might not see how they can create outputs that aren’t a part of their training data. As a result, we may underestimate the potential for misuse or other harms.
The claim that LLMs lack “situationally embedded meaning” should not be considered an absolute restriction of their capabilities or potential risks. Such a notion disregards the empirical facts and the actual performance of these systems, undermining a more thorough and nuanced understanding of their functionalities.
Moreover, there are risks associated with considering LLMs’ lack of situationally embedded meaning as the main determinant of their future performance. There is empirical evidence that shows rapid performance improvement in language models with increasing scale, suggesting that future performance cannot be simply predicted by current characteristics.
While user expectations and understanding of a system are indeed important safety considerations, it is debatable whether the perceived anthropomorphisation of LLMs justifies the effort to avoid expressions like “hallucinating”. This, again, is an empirical question that should be addressed accordingly.
While the “stochastic parrots” metaphor has initially been helpful in explaining the complex nature of LLMs, it may now be contributing more to misunderstandings and misconceptions.
In conclusion, the “stochastic parrots” metaphor can inadvertently cause confusion and misconceptions about large language models, leading to a misdirected evaluation of their capabilities, risks, and harms. It can also give a false sense of confidence about how LLMs and human cognition work, thereby obscuring several basic facts. While the metaphor can help highlight the differences between human cognition and LLMs, it also poses a risk of oversimplification and misunderstanding.
Therefore, it might be beneficial to move beyond this metaphor and shift our focus to a more empirical and detailed exploration of LLMs, their potential, and their limitations.