AI Agent Hype Spiral:
When Words Take on
New Meaning


Words matter. Etymology isn’t just for linguists; it’s a useful roadmap for understanding how language mutates over time.
Buzzwords, Meet the Hype Cycle
In the field of AI, the pace of that mutation is in overdrive. Semantic change, the way words shift in meaning due to cultural forces or technological progress, has turned ‘AI agent’ from a once-stable term into a marketing free-for-all.
Some terms, like ‘generative AI’, have settled into a clear definition. Others, like ‘agent’, have been stretched, contorted, and overloaded to the point of ambiguity. Once a straightforward label for rule-following software, ‘agent’ is now used interchangeably to describe everything from chatbots to self-directed AI with near-autonomous decision-making.
Why This Matters
Language shapes reality. ‘AI agent’ once meant a deterministic software program and now implies something vastly more complex. Where does that leave the policymakers, businesses, and researchers who rely on precise definitions to shape AI’s future?
This article cuts through the semantic chaos. We’ll trace how ‘agent’ evolved from its rigid early meaning to today’s much more fluid (and contested) usage, examine the forces fuelling this linguistic drift, and explore whether AI’s rapid acceleration is outpacing our ability to define it.
Let’s rewind to where the chaos began. A logical starting point is when consensus on the word ‘agent’ started to diverge. The answer you often hear is that this coincided with the rapid advancement of large language models (LLMs).
While this is part of the story, it’s likely not the whole picture. By tracing the origins and evolution of AI terminology, we can cut away the LLM hype, and we may find a clearer, more consistent understanding of these terms.


The Elephant in History’s Room
As many of us know, AI has evolved through three distinct waves. Within these waves, there are important markers that help us understand why the meaning of the word ‘agent’ has shifted.
Rule-Based Symbolic AI
The first wave of AI, from the 1950s to the 1980s, saw foundational research by scientists like Alan Turing and John McCarthy, which brought the concept of artificial intelligence into the scientific community.
This was the era when terms like ‘artificial intelligence’ and ‘agent’ took hold, as researchers such as Albert Bandura and Rodney Brooks grappled with complex theories of learning and intelligence, including agency and contextual awareness. These areas of research laid the groundwork and terminology for the intelligent agents of the future.
A key observation from the 1960s is that ELIZA, one of the earliest chat-style AI programs, was not initially called a chatbot. The term used to describe ELIZA and similar systems at the time was ‘natural language processing’ (NLP).
Statistical and Machine Learning AI
The second wave of AI, from the 1990s to the 2010s, shifted towards machine learning over deterministic rules, enabling smarter, more adaptive models. AI agents weren’t only becoming more capable, they were learning to collaborate, giving rise to terms like multi-agent systems (MAS).
At the same time, decision-making and ‘autonomous AI’ took a step forward with models like belief-desire-intention (BDI), in which agents began mimicking human-like reasoning. The term ‘chatbot’ also took hold in the 1990s, describing more advanced NLP programs such as ALICE.
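To make the BDI idea concrete, here is a minimal sketch of its deliberation loop in Python. The thermostat scenario and every name in it are invented for illustration; they come from no particular BDI framework, and real implementations (which add plan libraries, commitment strategies, and belief revision) are far richer than this.

```python
def deliberate(beliefs, desires):
    """Filter desires down to those whose preconditions hold in the current beliefs."""
    return [d for d in desires if d["precondition"](beliefs)]

def bdi_cycle(beliefs, desires):
    """One pass of the BDI loop: deliberate over desires, commit to intentions,
    and return the actions those intentions imply."""
    intentions = deliberate(beliefs, desires)
    return [d["action"] for d in intentions]

# Illustrative scenario: a thermostat-like agent.
beliefs = {"temp": 17}  # the agent's current view of the world
desires = [
    {"name": "warm the room",
     "precondition": lambda b: b["temp"] < 20,
     "action": "turn_heater_on"},
    {"name": "save energy",
     "precondition": lambda b: b["temp"] >= 20,
     "action": "turn_heater_off"},
]
print(bdi_cycle(beliefs, desires))  # ['turn_heater_on']
```

Even in this toy form, the structure shows why BDI marked a step towards human-like reasoning: the agent’s behaviour flows from what it believes and what it wants, not from a fixed rule table alone.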
The Era of Agents
We’re living in AI’s third wave. It kicked off in the 2010s and has more recently been referred to as the ‘Era of Agents’. A material leap in AI investment has forced governments, companies, and researchers to take notice, fuelling a global conversation on what comes next.
Unlike the second wave, where agents simply collaborated, today’s AI is pushing beyond traditional multi-agent systems (MAS) into a paradigm known as ‘agentic AI’, which can include multi-agent systems. Within this new paradigm, agents act with increasing contextual awareness and agency.
The meaning of the term ‘autonomous AI’ has become more ambiguous, largely due to growing scrutiny from governments, enterprises, and researchers regarding the risks associated with AI’s increasing power. The more powerful AI grew, the greater the risk of losing control over it.
Meanwhile, modern chatbots have undergone an identity shift. Take ChatGPT: by history’s definition, a chatbot, yet rarely labelled as one. Instead, ChatGPT and other similar models have moved into the broader, more sophisticated category of generative AI.
Are LLMs truly responsible for blurring the meaning of ‘agent’, or is the real culprit the wave of companies branding ‘agent’ and ‘agentic’ with their own unique spins? Perhaps it’s the term’s longevity, spanning all three AI waves over eight decades, that has caused its meaning to shift. Perhaps it’s just culture.
The reality is that it’s likely a combination of all these factors and more. It’s possible that AI’s own history created the elephant, or there is no elephant at all, depending on how you view it. (Schrödinger’s elephant?)



Consensus, Commercialisation and Culture
Exploring the history of AI capabilities and terminology reveals that certain terms evolve in distinct usage patterns over time.
The table below compares terms across the three waves, ranging from those with clearer, more universally accepted meanings, such as ‘chatbot’ and ‘generative AI’, to those that remain more ambiguous, like ‘agent’ and ‘autonomous’.
There are some interesting observations when you compare these terms. A term’s longevity appears to play a role in how well it is understood. Associations between related terms can blur their distinct meanings, and increased cultural and commercial use also appears to shift the consensus on what a term means.

Longevity & Prominence
Looking at the more universally accepted terms, ‘chatbot’ and ‘generative AI’, their prevalence has been either recent or relatively short-lived.
Shorter: The term ‘generative AI’ is recent in its prevalence, while ‘chatbot’ was most prominent during the second wave and has since been overshadowed by more contemporary terms like ‘generative AI’.
Longer: In contrast, ‘agent’ and ‘autonomous’ have been used across all three waves, gaining popularity with each new wave.
Where the table above compares meanings, the diagram below focuses on both the longevity and the prevalence of each term’s usage.


Lexical Association
Few things illustrate the shifting of AI terminology as much as the recent interplay between the terms ‘autonomous AI’ and ‘agency’. As AI agents advanced, the concept of agency took centre stage, and ‘autonomous AI’ was its support.
Autonomous AI wasn’t just flexing independence, it was compounding into something more. AI wasn’t only executing tasks; it was setting goals, making decisions, adapting, and behaving with a degree of intent. By definition, AI had agency. Autonomy had become just one aspect of agency.
Commercialisation & Culture
In wave three, the term ‘AI’ and its relatives have been propelled into the mainstream. AI, once confined to academic and technology communities, is now embraced by a much broader audience.
With the cultural and commercial increase in use of the term ‘agent’ come new challenges, such as confirmation bias: people often describe AI in ways they understand and that benefit them.
The effect of this collective societal bias pulls the definition in different directions, and people simply become confused by its meaning. Scrutiny of AI’s rapidly advancing capabilities has also made us question its autonomy. In the first two waves, AI’s automation capability was seen as positive, but as that capability increased it hit a tipping point where the risk started to outweigh the advancement.
Consensus
As agents grew in sophistication, opinions over what uniquely qualifies as an agent became more rigid. For some, if a system didn’t use emerging patterns like reflection, planning, or multi-agent collaboration, it no longer counted as an agent at all. This shift, at its core, is an example of semantic change in action.
Andrew Ng, who popularised the term ‘agentic AI’ in 2024, emphasized that agents can exist at various levels of sophistication. While new agentic patterns have emerged, he cautioned against rigidly defining the exact degree to which a system must implement them.
Given the longevity and prevalence of agent terminology, articulating the meaning of the word through the waves is perhaps the clearest way to see its evolution.




Defining Meaning without Consensus
As AI continues to evolve, so too does the language we use to define it. The term ‘agent’ has spanned decades and multiple AI waves, accumulating materially different meanings along the way. But does the current lack of a universally agreed-upon definition present a problem, or is it simply a reflection of AI’s dynamic and ever-changing nature?
On one hand, a lack of clear terminology can create confusion – especially in business, government, and research settings, where precise definitions are crucial. Organisations developing AI strategies must navigate an increasingly complex landscape, where terms like ‘autonomous AI’ and ‘agentic AI’ are used interchangeably despite having distinct implications and risks. This lack of a shared understanding can slow regulatory efforts and hinder collaboration.
On the other hand, this fluidity can be beneficial. Agents have moved beyond rule-based automation into systems with greater agency, contextual awareness, and adaptability. The evolving terminology not only reflects this shift but also accelerates progress, as terms take on more advanced and contemporary meanings.
What comes next?
Rather than seeking rigid definitions, the AI community benefits from embracing a more flexible approach: one that acknowledges the historical progression of these terms while staying open to change. Technologists, policymakers, and researchers must work together to refine terminology in a way that supports clarity without stifling progress.
AI’s continued evolution and adoption into mainstream society mean that definitions will inevitably shift over time, and into the next wave. The answer lies not in agreeing on a single definition of the term ‘agent’, but in understanding the implications of AI’s increasing agency, adaptability, and impact, and ensuring that we as individuals, businesses, and governments are best prepared for what comes next.