Silicon Valley is bullish on AI agents. OpenAI CEO Sam Altman said agents will “join the workforce” this year. Microsoft CEO Satya Nadella predicted that agents will replace certain knowledge work. Salesforce CEO Marc Benioff said Salesforce’s goal is to become “the number one provider of digital labor worldwide” through the company’s various agentic services.
However, no one seems to agree on what exactly an AI agent is.
In recent years, the tech industry has boldly declared that AI “agents” — the latest buzzword — will change everything. Just as AI chatbots like OpenAI’s ChatGPT gave us new ways to surface information, agents will fundamentally change how we approach work, claim CEOs like Altman and Nadella.
That may be true. But it also depends on how you define “agents,” which is no easy task. Like other AI-related jargon (“multimodal,” “AGI,” even “AI” itself), the terms “agent” and “agentic” risk being diluted to the point of meaninglessness.
This threatens to leave OpenAI, Microsoft, Salesforce, Amazon, Google, and the countless other companies building entire product lineups around agents in an awkward spot. An agent from Amazon isn’t the same as an agent from Google or any other vendor, and that is leading to confusion and customer frustration.
Ryan Salva, senior director of product at Google and former head of GitHub Copilot, said he “hates” the word “agents.”
“I think our industry is overloading the term ‘agent’ to the point where it’s almost meaningless,” Salva told TechCrunch in an interview. “[It’s] one of my pet peeves.”
The dilemma of defining agents is nothing new. In a piece last year, former TechCrunch reporter Ron Miller asked: what, exactly, is an AI agent? The problem he identified is that nearly every company building agents approaches the technology in a different way.
This is a problem that has recently gotten worse.
This week, OpenAI published a blog post defining agents as “automated systems that can independently accomplish tasks on behalf of users.” Yet in the same week, the company released developer documentation defining agents as “LLMs equipped with instructions and tools.”
OpenAI’s API product marketing lead, Leher Pathak, later said in a post on X that the terms “assistant” and “agent” can be used interchangeably.
Meanwhile, Microsoft’s blog posts try to draw a distinction between agents and AI assistants. The former, which Microsoft calls the “new apps” of an “AI-powered world,” can be tailored with particular expertise, while assistants merely help with general tasks such as drafting emails.
AI lab Anthropic addresses the definitional hodgepodge a bit more directly. In a blog post, Anthropic says that agents can be defined in several ways, including both “fully autonomous systems that operate independently over extended periods” and “prescriptive implementations that follow predefined workflows.”
Salesforce has perhaps the broadest definition of an AI “agent.” According to the software giant, agents are “a type of […] system that can understand and respond to customer inquiries without human intervention.” The company’s website lists six different categories, ranging from “simple reflex agents” to “utility-based agents.”
So why the confusion?
Well, agents, like AI itself, are a nebulous thing, and they are constantly evolving. OpenAI, Google, and Perplexity have just begun shipping what they consider their first agents: OpenAI’s Operator, Google’s Project Mariner, and Perplexity’s shopping agent.
Rich Villars, GVP of worldwide research at IDC, noted that tech companies have a “long history” of not rigidly adhering to technical definitions.
“They care more about what they are trying to accomplish on a technical level,” Villars told TechCrunch, “especially in a rapidly evolving market.”
But marketing is also largely to blame, according to Andrew Ng, founder of the AI learning platform DeepLearning.ai.
“The concepts of AI ‘agents’ and ‘agentic’ workflows used to have a technical meaning,” Ng said in a recent interview.
The lack of a unified definition for agents is both an opportunity and a challenge, says Jim Rowan, head of AI at Deloitte. On the one hand, the ambiguity allows for flexibility, letting businesses customize agents to their needs. On the other, it can lead to “misaligned expectations” and difficulties in measuring the value and ROI of agentic projects.
“Without a standardized definition, at least within an organization, it’s difficult to benchmark performance and ensure consistent outcomes,” Rowan said. “This can result in varied interpretations of what AI agents should deliver, complicating project goals and outcomes. Ultimately, while the flexibility can drive creative solutions, a more standardized understanding would help enterprises better navigate the AI agent landscape and maximize their investments.”
Unfortunately, if the dilution of the term “AI” is any indication, it seems unlikely that the industry will coalesce around a single definition of “agent” anytime soon.