# o3-deep-research-on-agents

# Agents and Agentic Workflows in Generative AI

## Introduction

Artificial intelligence is rapidly moving beyond single-turn interactions into systems that can act autonomously on our behalf. **AI “agents”** – programs that perceive their environment and take actions to achieve goals – and **agentic workflows** – orchestrations of multiple agents or tools working together – have become buzzwords in modern generative AI development ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=When%20we%20talk%20about%20AI,is%20still%20coming%20into%20focus)). While definitions vary, there is broad agreement that *agentic AI* represents a step-change from merely generating content to **autonomously solving problems** for users ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=Caveats%20aside%2C%20agentic%20AI%20refers,problems%20on%20a%20user%E2%80%99s%20behalf)). This report provides a structured overview of these concepts, their evolution, and their implications for the future of education, research, and knowledge work.

## Defining Agents vs. Agentic Workflows

([Build agentic systems with CrewAI and Amazon Bedrock | AWS Machine Learning Blog](https://aws.amazon.com/blogs/machine-learning/build-agentic-systems-with-crewai-and-amazon-bedrock/)) *Figure: Conceptual diagram of an agentic AI system. An AI agent uses memory, tools, and goals to observe and act within an environment, planning actions to achieve objectives.*

In AI, an **agent** traditionally means *“anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators”* ([Norvig's Agent Definition](https://www.linkedin.com/pulse/norvigs-agent-definition-matt-rickard-msmcf#:~:text=In%201995%2C%20Stuart%20J,Artificial%20Intelligence%3A%20A%20Modern%20Approach)). In other words, an agent observes the world (physical or digital) and makes decisions to change its state. Classic AI definitions emphasize properties like **autonomy, reactivity, proactiveness,** and (in multi-agent contexts) **social ability** ([Intelligent Agents: Exploring Definitions and Bridging Classical and Modern Views | by Makbule Gulcin Ozsoy | Medium](https://medium.com/@makbule.ozsoy_73232/intelligent-agents-exploring-definitions-and-bridging-classical-and-modern-views-b1a97a1514e2#:~:text=Agents%2C%20under%20the%20weak%20definition%2C,directed%20behaviour)) – the agent operates without constant human direction, responds to changes, pursues goals, and can interact or cooperate with other agents. Today’s generative AI agents typically use large language models (LLMs) as their “brains,” allowing them to interpret natural language, reason about tasks, and execute plans.
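
To ground the definition, here is a minimal sketch of that perceive–decide–act loop in Python. It is purely illustrative: `llm_complete` is a stand-in for any LLM completion call, and the “environment” is a plain dictionary rather than a real system.

```python
# Illustrative sketch only: the classic sense -> decide -> act agent loop,
# with an LLM standing in as the decision-making "brain".

def llm_complete(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned action for the sketch."""
    return "search_flights"

class SimpleAgent:
    def __init__(self, goal: str):
        self.goal = goal
        self.memory: list[str] = []  # record of past observations and actions

    def perceive(self, environment: dict) -> str:
        # "Sensors": read whatever state the environment exposes.
        return f"state={environment}"

    def decide(self, observation: str) -> str:
        # The LLM maps goal + history + observation to the next action.
        prompt = (
            f"Goal: {self.goal}\n"
            f"History: {self.memory}\n"
            f"Observation: {observation}\n"
            "Next action:"
        )
        return llm_complete(prompt)

    def act(self, environment: dict) -> None:
        # "Actuators": apply the chosen action back onto the environment.
        action = self.decide(self.perceive(environment))
        self.memory.append(action)
        environment["last_action"] = action

agent = SimpleAgent(goal="plan a travel itinerary")
env: dict = {}
agent.act(env)  # env["last_action"] == "search_flights"
```
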
For example, an AI agent might take a high-level goal (“plan my travel itinerary”) and then decide on sub-tasks, like querying flight options or booking hotels, with minimal further input.

By contrast, an **agentic workflow** (or *agentic system*) refers to a more complex orchestration in which **multiple agents and tools collaborate** (often with a human in the loop at key points) to accomplish a broader task. In industry usage, *“agentic AI”* implies an ecosystem of specialized agents coordinating their efforts autonomously ([Agentic AI Vs AI Agents: 5 Differences and Why They Matter | Moveworks](https://www.moveworks.com/us/en/resources/blog/agentic-ai-vs-ai-agents-definitions-and-differences#:~:text=When%20multiple%20AI%20agents%20work,to%20solve%20customer%20issues%20efficiently)) ([Agentic AI Vs AI Agents: 5 Differences and Why They Matter | Moveworks](https://www.moveworks.com/us/en/resources/blog/agentic-ai-vs-ai-agents-definitions-and-differences#:~:text=Agentic%20AI%20refers%20to%20artificial,adapting%20capabilities%2C%20and%20advanced%20reasoning)). One can think of agentic workflows as the *“big picture”* arrangement: instead of a single AI agent handling everything, you have a *team* of agents (and possibly traditional software services) working in concert. A simple analogy is that a single agent is like a skilled worker, whereas an agentic system is like a well-coordinated team or assembly line.

This distinction is highlighted by recent explanations from experts. For instance, Google’s AI division and others note that *“AI agents”* are building blocks – individual services that can collaborate on tasks – while *“agentic AI”* describes the sophisticated **framework uniting multiple agents** to achieve larger goals ([Agentic AI Vs AI Agents: 5 Differences and Why They Matter | Moveworks](https://www.moveworks.com/us/en/resources/blog/agentic-ai-vs-ai-agents-definitions-and-differences#:~:text=From%20AI%20agent%20to%20agentic,AI)) ([Agentic AI Vs AI Agents: 5 Differences and Why They Matter | Moveworks](https://www.moveworks.com/us/en/resources/blog/agentic-ai-vs-ai-agents-definitions-and-differences#:~:text=Unlike%20AI%20agents%2C%20which%20utilize,based%20on%20experience%20and%20feedback)). In a customer service scenario, one agent might interpret a user’s request, a second agent might search a knowledge base, and a third might execute an account update; together, these form an agentic workflow solving the user’s problem end-to-end ([Agentic AI Vs AI Agents: 5 Differences and Why They Matter | Moveworks](https://www.moveworks.com/us/en/resources/blog/agentic-ai-vs-ai-agents-definitions-and-differences#:~:text=When%20multiple%20AI%20agents%20work,to%20solve%20customer%20issues%20efficiently)). IBM similarly defines *AI agent orchestration* as *“coordinating multiple specialized AI agents within a unified system to efficiently achieve shared objectives,”* as opposed to relying on a single monolithic AI ([What is AI Agent Orchestration? | IBM](https://www.ibm.com/think/topics/ai-agent-orchestration#:~:text=Artificial%20intelligence%20,to%20efficiently%20achieve%20shared%20objectives)) ([What is AI Agent Orchestration? | IBM](https://www.ibm.com/think/topics/ai-agent-orchestration#:~:text=AI%20assistants%20exist%20on%20a,This%20is%20agentic%20AI)).

Another way to frame the difference is by *flexibility and autonomy*. A fixed software **workflow** (without agentic AI) follows a predetermined script of steps.
An **agent** can deviate from a script – it can plan, make decisions, and react to unexpected inputs. And an **agentic workflow** uses that flexibility at scale: it involves **dynamic decision-making across multiple steps and components**, not just a linear chain. In tool-building terms, a standard program or even a simple AI chatbot will do exactly and only what it’s explicitly told, whereas an agentic system can *figure out what needs to be done* to reach a goal, even if the exact sequence of actions wasn’t pre-programmed. For example, the LangChain library differentiates a “chain” (a linear sequence of calls) from an “agent” that can **decide which tool or action to take next based on the situation**, allowing non-linear behavior ([AI Agent Workflows: A Complete Guide on Whether to Build With LangGraph or LangChain | by Sandi Besen | TDS Archive | Medium](https://medium.com/data-science/ai-agent-workflows-a-complete-guide-on-whether-to-build-with-langgraph-or-langchain-117025509fa0#:~:text=LangChain)).

It’s important to note that agentic systems often still include humans *somewhere* in the loop – for oversight or final approval – especially in high-stakes domains. “Autonomous” doesn’t necessarily mean the AI is completely unchecked; rather, it refers to the AI’s ability to carry out extended sequences of actions on its own initiative. In practice, many agentic workflows are *hybrid*: AI agents handle the heavy lifting, while humans set goals and review critical decisions.

## Historical Evolution of AI Agency

The quest for *agency* in AI – endowing machines with the capacity to act independently and purposefully – has deep roots. Early AI researchers in the 1960s and 70s built some of the first agents in the form of **robotics and planning systems**. A landmark was **Shakey the Robot**, developed at SRI from 1966–1972, which is often cited as the first general-purpose mobile robot able to perceive and reason about its actions.

*Figure: Shakey (1960s), an early autonomous agent. It could perceive its surroundings, break down commands into sub-tasks, and navigate and manipulate objects – a groundbreaking demonstration of an AI system “seeing,” “thinking,” and “acting” ([Leo Rover Blog - What was the world’s first mobile intelligent robot?](https://www.leorover.tech/post/what-was-the-worlds-first-mobile-intelligent-robot#:~:text=Shakey%20was%20the%20first%20mobile,a%20course%20to%20avoid%20obstacles)).*

Shakey’s software could plan routes, push objects, and even communicate in simple English, all without step-by-step human control ([Leo Rover Blog - What was the world’s first mobile intelligent robot?](https://www.leorover.tech/post/what-was-the-worlds-first-mobile-intelligent-robot#:~:text=Shakey%20was%20the%20first%20mobile,a%20course%20to%20avoid%20obstacles)). This showed that machines could integrate **perception, cognition, and action**, embodying a primitive form of agency.

Throughout the 1980s and 90s, the field of **autonomous agents and multi-agent systems (MAS)** took shape. The philosopher Michael Bratman’s theory of practical reasoning was formalized by computer scientists Anand Rao and Michael Georgeff into the **Belief-Desire-Intention (BDI) model**, providing a theoretical framework for agents that reason about their beliefs, goals, and planned actions. The BDI paradigm and similar cognitive architectures treated agents almost like rational actors, with internal states corresponding to knowledge and objectives.
At the same time, terms like *“intelligent agents”* and *“software agents”* became popular to describe programs that could perform tasks for users (such as personal assistants, recommendation agents, or automated trading systems). Key properties were identified, building on earlier definitions: for example, one influential set of criteria (Wooldridge’s *weak agent* definition) required **autonomy, social ability, reactivity, and proactiveness** ([Intelligent Agents: Exploring Definitions and Bridging Classical and Modern Views | by Makbule Gulcin Ozsoy | Medium](https://medium.com/@makbule.ozsoy_73232/intelligent-agents-exploring-definitions-and-bridging-classical-and-modern-views-b1a97a1514e2#:~:text=Agents%2C%20under%20the%20weak%20definition%2C,directed%20behaviour)).

In practice, many 90s-era “agents” were relatively simple by today’s standards – often rule-based or expert systems operating in constrained environments – but the conceptual groundwork for agent-based computing was laid. The late 1990s also saw popular culture embrace the idea of software agents (think of Microsoft’s *Clippy*, the paperclip “office assistant,” or early web “bots”), though these were mostly scripted and far from truly autonomous.

More serious developments came in **multi-agent systems** research, which explored how teams of agents could coordinate or negotiate. For instance, distributed AI researchers studied **communication protocols** for agents (like the Contract Net Protocol for task allocation) and how agents could collaborate or compete in simulations. The notion of *agentic workflows* has antecedents here: any complex process handled via **multiple interacting agents** (with or without humans) can be seen as an early agentic system. However, due to limited AI capabilities at the time, such systems were usually limited to narrow domains (industrial control systems, logistics optimizers, war-game simulators, etc.). The **autonomy** was often heavily circumscribed by human-designed rules.

A significant shift occurred in the 2000s–2010s as machine learning, especially **reinforcement learning (RL)**, became effective. RL allowed agents to *learn* optimal actions via trial and error, rather than relying solely on predefined rules. This era produced agents that could exceed human performance in certain tasks: for example, **game-playing agents**. In 2016, DeepMind’s **AlphaGo** famously combined neural networks with tree-search planning to beat a world champion at Go – a feat considered a decade ahead of its time. AlphaGo and its successors (AlphaZero, etc.) were agents in the sense that they made decisions (moves in a game) to achieve a goal (winning), with *strategic autonomy* during play. Around the same time, OpenAI’s experiments with multi-agent hide-and-seek in simulated environments showed agents developing unexpected strategies when they co-evolved, demonstrating *emergent* behaviors when multiple agents interact. These milestones reflected a conceptual shift: instead of viewing agents as just explicit procedural programs, they could be **learning systems** that develop their own policies and even cooperate or compete in unscripted ways. The success in games hinted that more general autonomous problem-solving might be within reach.

Yet, until recently, most AI agents were specialized (a robot in a lab, a program mastering a board game, etc.). They lacked the **general communication and reasoning ability** needed for open-ended tasks. This is where modern generative AI changed the landscape.
The advent of powerful **LLMs (like GPT-style models)** gave us machines with a broad *understanding* of language, which surprisingly also provides a form of general world knowledge and reasoning competency. This in turn enabled a new breed of agent: one that can reason abstractly, converse, and plan in natural language. Researchers realized that an LLM could serve as a kind of “general-purpose cognitive engine” for agents – planning steps, writing and reading text, even dynamically generating code or queries to use tools. In effect, language became the universal interface for agentic activity ([AI Agent Workflows: A Complete Guide on Whether to Build With LangGraph or LangChain | by Sandi Besen | TDS Archive | Medium](https://medium.com/data-science/ai-agent-workflows-a-complete-guide-on-whether-to-build-with-langgraph-or-langchain-117025509fa0#:~:text=Language%20models%20have%20unlocked%20possibilities,other%20%E2%80%94%20through%20natural%20language)). An AI agent could now use English (or any human language) to decide what to do next, ask for clarification, or output results, making it far more flexible and generally applicable.

Crucially, a 2022 paper from Google/Princeton introduced the **ReAct framework** (Reasoning and Acting in language models), which demonstrated how an LLM can intermix logical reasoning with actions in an *interleaved* loop ([[2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629#:~:text=,named%20ReAct%2C%20to%20a%20diverse)). In ReAct, the model produces a chain-of-thought explaining its reasoning, decides on an action (like calling a tool or querying a database), observes the result, then continues reasoning – all within one coherent prompt trajectory. This synergy of *“think a bit, then act, then think more, then act…”* was a conceptual breakthrough in building agents with language models ([[2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629#:~:text=topics,Fever%29%2C%20ReAct%20overcomes)). It showed that given the right prompts, an AI could plan and execute complex sequences without a rigid script, reducing errors (like hallucinations) by checking facts via tools ([[2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629#:~:text=set%20of%20language%20and%20decision,ALFWorld%20and)). Around the same time, other work like Meta’s **Toolformer** (which enabled models to call APIs) and Microsoft’s **Jarvis (HuggingGPT)** (which had ChatGPT orchestrate calls to other AI models) reinforced the trend: large models could be the **decision-making core of agentic systems**, controlling tools and even other models via natural language.

These research advances set the stage for the explosion of interest in agentic AI in 2023 and beyond. We began to see not just academic demos but *community-driven projects* that captured the imagination: notably **AutoGPT** and **BabyAGI** in early 2023. These were open-source experiments that looped GPT-4 outputs to create a kind of autonomous task rabbit. For example, **AutoGPT** (by Toran Bruce Richards) was described as *“an experimental open-source attempt to make GPT-4 fully autonomous”* – it can take a high-level goal, break it into subtasks, and recursively prompt itself to complete a multi-step project ([What is AutoGPT? | IBM](https://www.ibm.com/think/topics/autogpt#:~:text=AutoGPT%20is%20an%20open,3.5)) ([What is AutoGPT? | IBM](https://www.ibm.com/think/topics/autogpt#:~:text=What%20are%20AI%20agents%3F)).
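
To make the ReAct idea concrete, here is a compressed sketch of the thought/action/observation loop that these projects elaborate. Nothing here is a real API: `llm` is a stub, and the two tools are placeholders for whatever search or database calls an actual agent would wire in.

```python
# A compressed sketch of the ReAct loop (Yao et al., 2022): the model alternates
# free-text "Thought:" reasoning with "Action:" tool calls until it emits a
# final answer. The llm() stub and tool registry are illustrative only.
import re

def llm(prompt: str) -> str:
    """Stand-in for a chat model call; returns a canned final step here."""
    return "Thought: I have enough information.\nAction: finish[42]"

TOOLS = {
    "search": lambda q: f"(top search results for {q!r})",
    "lookup": lambda k: f"(value stored under {k!r})",
}

def react(question: str, max_steps: int = 8) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                # model emits Thought + Action text
        transcript += step + "\n"
        match = re.search(r"Action: (\w+)\[(.*?)\]", step)
        if match is None:
            continue                          # no action parsed; prompt again
        tool, arg = match.groups()
        if tool == "finish":                  # the model decided it is done
            return arg
        observation = TOOLS[tool](arg)        # run the chosen tool...
        transcript += f"Observation: {observation}\n"  # ...feed the result back
    return "(step limit reached)"

print(react("What is 6 times 7?"))
```
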
Similarly, **BabyAGI** (by Yohei Nakajima) tried to create an AI that would spawn new tasks as it completed others, aiming to imitate an endlessly proactive research assistant. These projects were rudimentary at first, but they demonstrated in tangible form what an agent with memory, goal-setting, and tool-use might look like. They also introduced many developers to the concept of *self-prompting AI*: systems where the AI’s own outputs become future inputs (allowing iteration towards a goal). The excitement around AutoGPT and BabyAGI made “AI agent” a household term in tech circles.

In parallel, major tech companies started explicitly framing their AI offerings in terms of agents and agentic workflows. By 2024, **OpenAI**, **Google**, **Microsoft**, **IBM** and others were all talking about agents:

- **OpenAI** began integrating tools and launching an *“Agents SDK”* (an evolution of a research project called “Swarm”) to help developers “orchestrate agentic workflows” ([OpenAI takes on rivals with new Responses API, Agents SDK | InfoWorld](https://www.infoworld.com/article/3844348/openai-takes-on-rivals-with-new-responses-api-agents-sdk.html#:~:text=The%20open%20source%20Agents%20software,enterprises%20have%20already%20adopted%20it)). This signaled that even OpenAI – known for large generic models – sees value in agent frameworks where multiple specialized components cooperate ([OpenAI takes on rivals with new Responses API, Agents SDK | InfoWorld](https://www.infoworld.com/article/3844348/openai-takes-on-rivals-with-new-responses-api-agents-sdk.html#:~:text=Andersen%20said%20the%20SDK%20is,agent%20collaboration%20support)). OpenAI’s SDK introduced features like agent handoffs (one agent delegating to another), better debugging, and safety guardrails for autonomous agents ([OpenAI takes on rivals with new Responses API, Agents SDK | InfoWorld](https://www.infoworld.com/article/3844348/openai-takes-on-rivals-with-new-responses-api-agents-sdk.html#:~:text=The%20open%20source%20Agents%20software,enterprises%20have%20already%20adopted%20it)). In early 2025, OpenAI also released *“Operator”* (a so-called *Computer-Using Agent*), an experimental agent that can operate a web browser to perform tasks for the user ([Setting a Context for Agentic AI in Higher Ed - UPCEA](https://upcea.edu/setting-a-context-for-agentic-ai-in-higher-ed/#:~:text=On%20January%2023%2C%202025%2C%C2%A0OpenAI%20released,broadly%20the%20collective%20knowledge%20and)). Sam Altman (OpenAI’s CEO) even outlined a **five-level roadmap** for AI, in which Level 3 is “Agents – systems that can take actions,” coming after chatbots and reasoners ([Setting a Context for Agentic AI in Higher Ed - UPCEA](https://upcea.edu/setting-a-context-for-agentic-ai-in-higher-ed/#:~:text=,ASI)). Levels 4 and 5 (beyond agents) would approach Artificial General Intelligence, illustrating how agents are viewed as a critical stepping stone.

- **Google** meanwhile emphasized *agentic systems* in its AI platforms. At Google Cloud Next 2025, it introduced an **Agent Development Kit (ADK)** and the **Agent2Agent (A2A) protocol** for inter-agent communication ([Google Cloud Next 2025: Agentic AI Stack, Multimodality, And Sovereignty](https://www.forrester.com/blogs/google-next-2025-agentic-ai-stack-multimodality-and-sovereignty/#:~:text=,SCC)).
The goal is to let enterprises build **“enterprise agentic AI”** solutions by linking reasoning modules, memory stores, and tools into an architecture, and crucially by enabling agents to talk to each other across different services ([Google Cloud Next 2025: Agentic AI Stack, Multimodality, And Sovereignty](https://www.forrester.com/blogs/google-next-2025-agentic-ai-stack-multimodality-and-sovereignty/#:~:text=,SCC)) ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=and%20leverage%20A2A%20to%20communicate,the%20remote%20agent%2C%20Google%20said)). Google’s A2A is an open standard so that, for instance, a Salesforce CRM agent could coordinate with a ServiceNow IT agent securely ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=%E2%80%9CUsing%20A2A%2C%20agents%20can%20publish,of%20Cloud%20AI%20at%20Google)) ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=model%2C%20the%20A2A%20protocol%C2%A0focuses%20on,interaction%20between%20different%20AI%20agents)). This focus on interoperability shows the push toward *ecosystems* of agents (so you’re not locked into one vendor’s agent). Google DeepMind has also discussed the distinction between “agentic AI” and “AI agents” on platforms like YouTube, reinforcing the definitions: agentic AI as systems of collaborating agents, and AI agents as the individual actors.

- **IBM** has been working on **watsonx Orchestrate**, an enterprise product that deploys AI agents to automate business tasks. IBM explicitly uses the term *“agentic AI”* in its marketing, and defines it similarly as AI that *“autonomously makes decisions and acts to pursue complex goals with minimal supervision”* ([What is AI Agent Orchestration? | IBM](https://www.ibm.com/think/topics/ai-agent-orchestration#:~:text=To%20fully%20understand%20AI%20agent,complex%20goals%20with%20minimal%20supervision)). IBM’s approach often involves a central *Orchestrator* agent that routes tasks to domain-specific agents – essentially implementing agentic workflows for things like HR processes or IT support ([What is AI Agent Orchestration? | IBM](https://www.ibm.com/think/topics/ai-agent-orchestration#:~:text=In%20practice%2C%20AI%20agent%20orchestration,are%20run%20seamlessly%20and%20efficiently)) ([What is AI Agent Orchestration? | IBM](https://www.ibm.com/think/topics/ai-agent-orchestration#:~:text=For%20example%2C%20as%20part%20of%C2%A0customer,%E2%80%9CTypes%20of%20AI%20orchestration%E2%80%9D%20below)). This reflects a convergence of AI and business process automation: rather than writing static scripts for each workflow, you employ flexible agents that can figure out how to execute a process, perhaps even learning and improving over time.

Philosophically, the evolution toward agentic AI has reawakened discussions about **agency** in machines – an area where AI intersects with ethics and safety. Early AI pioneers like Norbert Wiener and Alan Turing speculated on machines acting with purpose, and modern theorists debate how to ensure *aligned* agency (so that an autonomous AI’s goals remain consistent with human intentions).
As AI agents become more capable, questions about control and trust loom larger. Indeed, one promise of agentic AI is that these systems might be *more trustworthy* in certain ways: because an agent can explain its reasoning or check its work using tools, it could reduce problems like hallucination ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=AI%20agents%20are%20generally%20better,%E2%80%9D)). However, giving AI the freedom to act also introduces risks of error or misuse if not properly constrained. This is why much current research into agentic workflows includes an emphasis on **guardrails, monitoring, and human oversight** – effectively, combining autonomy with accountability.

In summary, the concept of AI agents has progressed from simple sensing-acting loops in the 1960s, through knowledge-based and multi-agent systems in the late 20th century, to learning-based and now **LLM-powered agents** in the 2020s. Each era added layers of sophistication: memory, learning, collaboration, and now the generality of language understanding. The current discourse inherits all these ideas, framing them in the context of generative AI’s vast capabilities.

## The Current Landscape: Platforms, Toolkits, and Practices

As of 2025, the AI community is rich with frameworks and platforms for building agents and agentic workflows. This ecosystem ranges from open-source libraries to enterprise cloud services, all aiming to make it easier to create AI systems that **plan, collaborate, and act**. Below we survey some of the significant players and how they reflect broader trends:

- **LangChain and LangGraph:** *LangChain* became popular in 2023 as a Python toolkit for chaining together LLM calls and tools (hence “Chain”). It introduced abstractions for **agents** (LLM-backed decision-makers that can pick tools) versus **chains** (fixed sequences) ([AI Agent Workflows: A Complete Guide on Whether to Build With LangGraph or LangChain | by Sandi Besen | TDS Archive | Medium](https://medium.com/data-science/ai-agent-workflows-a-complete-guide-on-whether-to-build-with-langgraph-or-langchain-117025509fa0#:~:text=LangChain)). As the need grew for more complex, robust agent workflows, the creators developed *LangGraph*, a framework to design and manage multi-step AI workflows at scale. LangGraph provides infrastructure for long-term memory, state management, and parallel tool usage – essentially making it easier to build *agentic applications* beyond simple chatbots. As the LangChain team puts it, *“the next chapter in building complex production-ready features with LLMs is agentic”*, and LangGraph is built to support that ([LangGraph](https://www.langchain.com/langgraph#:~:text=%E2%80%9CLangChain%20is%20streets%20ahead%20with,%E2%80%9D)). With LangGraph, developers can orchestrate background research jobs, handle branching tool calls, and maintain conversational state over long sessions ([LangGraph](https://www.langchain.com/langgraph#:~:text=Dynamic%20APIs%20for%20designing%20agent,experience)). This reflects a trend of moving from prototyping to production: tools like LangGraph emphasize **fault tolerance, monitoring, and scalability** for agent-based apps ([LangGraph](https://www.langchain.com/langgraph#:~:text=Fault)) ([LangGraph](https://www.langchain.com/langgraph#:~:text=Deploy%20agents%20at%20scale%2C%20monitor,carefully%2C%20iterate%20boldly)).
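
  As a flavor of what this looks like in code, here is a minimal two-node graph using LangGraph’s documented `StateGraph` interface (module paths and details can shift between versions, and the node functions are stubs rather than real LLM calls):

  ```python
  # A minimal LangGraph-style workflow: shared typed state, two nodes, one edge.
  from typing import TypedDict
  from langgraph.graph import StateGraph, END

  class State(TypedDict):
      question: str
      draft: str

  def research(state: State) -> dict:
      # Stub node; a real graph would call an LLM or a retriever here.
      return {"draft": f"notes on {state['question']}"}

  def write(state: State) -> dict:
      return {"draft": state["draft"] + " -> polished answer"}

  graph = StateGraph(State)
  graph.add_node("research", research)
  graph.add_node("write", write)
  graph.set_entry_point("research")
  graph.add_edge("research", "write")
  graph.add_edge("write", END)

  app = graph.compile()
  print(app.invoke({"question": "What is agentic AI?", "draft": ""}))
  ```
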
  In practice, LangChain/LangGraph have been used to create everything from AI assistants that can use calculators or search engines on the fly, to complex chatbots that consult databases and update records during a conversation.

- **Open-Source Autonomous Agents (AutoGPT, BabyAGI, etc.):** The open-source community has birthed numerous agent implementations riding on GPT-4/3.5 APIs. **AutoGPT** is among the most famous – as noted, it strings together the “thoughts” of an LLM to attempt autonomous task completion ([ChatGPT, Next Level: Meet 10 Autonomous AI Agents: Auto-GPT ...](https://medium.com/the-generator/chatgpts-next-level-is-agent-ai-auto-gpt-babyagi-agentgpt-microsoft-jarvis-friends-d354aa18f21#:~:text=ChatGPT%2C%20Next%20Level%3A%20Meet%2010,to%20autonomously%20achieve%20whatever)) ([What is AutoGPT? | IBM](https://www.ibm.com/think/topics/autogpt#:~:text=natural%20language%20processing%20,3.5)). AutoGPT introduced many to the concept of *self-chaining*: it generates a plan, executes a step, evaluates progress, and iterates. IBM’s review describes AutoGPT as a *“platform that allows users to automate multistep projects and complex workflows with AI agents”*, using GPT-4 to break down a goal into sub-tasks and solve them sequentially ([What is AutoGPT? | IBM](https://www.ibm.com/think/topics/autogpt#:~:text=AutoGPT%20is%20an%20open,3.5)). Interestingly, although named “AutoGPT,” it often spins up **multiple sub-agents** internally (for example, one might act as a brainstormer, another as an executor), effectively becoming a *multi-agent* system ([What is AutoGPT? | IBM](https://www.ibm.com/think/topics/autogpt#:~:text=AutoGPT%20is%20an%20example%20of,include%20crewAI%2C%20LangGraph%20and%20AutoGen)). **BabyAGI**, on the other hand, is a simpler task-management loop – it keeps a list of objectives, generates new tasks as others are completed, and reprioritizes. These projects showed what’s possible but also revealed challenges: they were prone to getting stuck in loops or taking frivolous actions without robust guardrails. Nonetheless, they spurred a wave of innovation. Dozens of variations (AgentGPT, SuperAGI, etc.) appeared, each tweaking the recipe – adding vector databases for memory, integrating feedback loops to self-correct, or connecting to APIs like web browsers, file systems, or even operating system commands. The open-source agent movement demonstrated **community enthusiasm for agentic AI**, and many ideas from these projects have flowed into more formal libraries.

- **Enterprise Multi-Agent Platforms (CrewAI, AutoGen):** Beyond individual hobby projects, startups and large companies alike have built platforms dedicated to multi-agent orchestration. **CrewAI** is one example of an open-source framework (created by João Moura and team) that specifically focuses on *“crews” of AI agents working together*. CrewAI uses the metaphor of a *team*: you define agents with distinct **roles, skills, and goals**, then define how they should collaborate ([A Complete Guide to CREW AI and Agentic Frameworks: Unleashing the Power of Autonomous AI Crews | by Harsha Vanukuri | Medium](https://medium.com/@harshav.vanukuri/a-complete-guide-to-crew-ai-and-agentic-frameworks-unleashing-the-power-of-autonomous-ai-crews-9911f39110f5#:~:text=CrewAI%20elevates%20the%20concept%20of,solving%20much%20more%20complex%20challenges)).
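
  As a sketch under those conventions, a two-role crew might look like the following; it uses the `Agent`/`Task`/`Crew` primitives from CrewAI’s documentation and assumes a configured LLM backend (API key, model choice) that is omitted here:

  ```python
  # A minimal CrewAI-style crew: two role-specialized agents, two ordered tasks.
  from crewai import Agent, Task, Crew

  researcher = Agent(
      role="Researcher",
      goal="Gather accurate background material on the assigned topic",
      backstory="A meticulous analyst who always cites sources.",
  )
  writer = Agent(
      role="Writer",
      goal="Turn research notes into a clear, well-structured report",
      backstory="A technical writer who favors plain language.",
  )

  research_task = Task(
      description="Collect key facts about agentic AI frameworks.",
      expected_output="A bullet list of findings with sources.",
      agent=researcher,
  )
  writing_task = Task(
      description="Draft a one-page report from the research notes.",
      expected_output="A structured report in markdown.",
      agent=writer,
  )

  # Tasks run sequentially by default; the writer sees the researcher's output.
  crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
  result = crew.kickoff()
  ```
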
  For instance, CrewAI might coordinate a *Researcher Agent*, a *Writer Agent*, and a *Critic Agent* to produce a report: the researcher gathers data, the writer drafts content, and the critic reviews and refines it ([A Complete Guide to CREW AI and Agentic Frameworks: Unleashing the Power of Autonomous AI Crews | by Harsha Vanukuri | Medium](https://medium.com/@harshav.vanukuri/a-complete-guide-to-crew-ai-and-agentic-frameworks-unleashing-the-power-of-autonomous-ai-crews-9911f39110f5#:~:text=For%20example%2C%20imagine%20a%20research,sequence%20or%20even%20in%20parallel)). This is an explicit implementation of an agentic workflow, with the framework handling communication between the agents and the sharing of intermediate results. Such a setup mimics a human team’s division of labor, illustrating how breaking a complex task into specialized sub-tasks can increase efficiency and quality. The broader notion here is **collaborative intelligence** – multiple narrow AIs can collectively tackle what a single general AI might struggle with. Another toolkit, **AutoGen** (from Microsoft Research), follows a similar philosophy: it enables conversations and coordination among multiple LLM agents and tools in a programmable way. In fact, a Medium article comparing frameworks noted that LangChain, LangGraph, and AutoGen each take slightly different approaches but aim to solve the same core problem: *how to orchestrate dynamic, non-linear workflows driven by AI* ([AI Agent Workflows: A Complete Guide on Whether to Build With LangGraph or LangChain | by Sandi Besen | TDS Archive | Medium](https://medium.com/data-science/ai-agent-workflows-a-complete-guide-on-whether-to-build-with-langgraph-or-langchain-117025509fa0#:~:text=Since%20the%20practice%20of%20widely,might%20not%20be%20true%20tomorrow)). What these platforms share is an emphasis on modularity (different agents for different functions) and on **higher-level control** (providing a “manager” or orchestrator that can spawn or switch between agents as needed).

- **Major Cloud Providers and API Integrations:** The big cloud AI providers are incorporating agentic concepts into their offerings. We’ve mentioned Google’s Agent SDK and A2A protocol. Similarly, **Amazon Web Services (AWS)** has introduced *Bedrock Agents*. AWS published guidance on using LangChain or CrewAI with its Bedrock service (which hosts various foundation models) to build multi-agent systems ([Unlock the Future of Multi-Agent AI Workflows with CrewAI ...](https://sambanova.ai/blog/multi-agent-ai-workflows-with-crewai-and-sambanova#:~:text=Unlock%20the%20Future%20of%20Multi,that%20deliver%20many%20advanced%20capabilities)) ([Introduction - CrewAI](https://docs.crewai.com/introduction#:~:text=CrewAI%20Crews%3A%20Optimize%20for%20autonomy,specific%20roles%2C%20tools%2C%20and%20goals)). Essentially, AWS is encouraging customers to combine large models with custom agents to automate business workflows.
One AWS blog forecasts that 25% of enterprises using generative AI will deploy AI agents in 2025 (rising to 50% by 2027) ([Build agentic systems with CrewAI and Amazon Bedrock | AWS Machine Learning Blog](https://aws.amazon.com/blogs/machine-learning/build-agentic-systems-with-crewai-and-amazon-bedrock/#:~:text=The%20enterprise%20AI%20landscape%20is,transformative%20potential%20of%20these%20technologies)), and it showcases how agentic systems can handle tasks like code review or supply chain optimization in a more adaptive way than traditional software ([Build agentic systems with CrewAI and Amazon Bedrock | AWS Machine Learning Blog](https://aws.amazon.com/blogs/machine-learning/build-agentic-systems-with-crewai-and-amazon-bedrock/#:~:text=systems%20with%20domain%20knowledge,an%20agentic%20system%20can%20use)) ([Build agentic systems with CrewAI and Amazon Bedrock | AWS Machine Learning Blog](https://aws.amazon.com/blogs/machine-learning/build-agentic-systems-with-crewai-and-amazon-bedrock/#:~:text=management%2C%20where%20traditional%20inventory%20systems,of%20an%20agentic%20AI%20system)). This shows a recognition that agentic AI is *commercially important*. Microsoft’s Azure OpenAI Service has also been evolving to support functions and tool use, implicitly enabling agent behavior (e.g., an Azure OpenAI “function call” can let the model decide to invoke a calculation or database query mid-conversation). There is also the **Model Context Protocol (MCP)**, introduced by Anthropic and since embraced by Microsoft and others, which, while not exactly multi-agent, provides a standardized way for an AI to call external tools and data sources – a mechanism that can complement agent-to-agent protocols like A2A ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=match%20at%20L284%20The%20A2A,complement%20each%20other%2C%20analysts%20said)). The takeaway is that cloud platforms are adding the **plumbing for agentic workflows**, so developers can combine multiple AI and non-AI components more seamlessly.

- **OpenAI’s Function Calling and Plugins:** Although OpenAI’s ChatGPT started as a pure conversational tool, it evolved to include **plugin APIs** (e.g. for web browsing, code execution, etc.) and a function-calling interface. These essentially turn ChatGPT into an agent that can act beyond just chatting – it can invoke a web search when needed, or execute Python code to do math or data analysis. When you use ChatGPT’s browsing plugin, for example, the system decides autonomously when to search the web to gather information, then resumes answering – a simple agentic workflow. Similarly, in OpenAI’s developer platform, you can define functions (like `get_weather(location)`) and the model will learn to call them if a query requires it. This mechanism was a major step, as it **enabled dynamic tool use** driven by the model’s own reasoning. Many current agent frameworks actually piggyback on this: they define a suite of “tools” (APIs for Google Search, a calculator, file system, etc.) and let the LLM choose among them to solve user requests. This concept was inspired by academic work (like ReAct), but OpenAI’s implementation made it mainstream.
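
  A sketch of what that looks like with OpenAI’s Python SDK follows; the `tools` schema shape is from OpenAI’s documented chat-completions interface, while the model name and the `get_weather` tool itself are placeholders:

  ```python
  # A sketch of OpenAI-style function calling: declare a tool schema and let
  # the model decide whether to call it instead of answering in plain text.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  tools = [{
      "type": "function",
      "function": {
          "name": "get_weather",
          "description": "Look up current weather for a city.",
          "parameters": {
              "type": "object",
              "properties": {"location": {"type": "string"}},
              "required": ["location"],
          },
      },
  }]

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[{"role": "user", "content": "Do I need an umbrella in Oslo?"}],
      tools=tools,
  )

  # If the model chose to act, it returns a structured tool call, not text.
  calls = response.choices[0].message.tool_calls
  if calls:
      print(calls[0].function.name, calls[0].function.arguments)
  ```
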
  It also blurred the line between a “chatbot” and an “agent”: with tools, even a single-turn answer might hide an agentic process (for instance, GPT-4 might silently do a web search and read results before replying, which is an autonomous action sequence). OpenAI has signaled that more advanced agent capabilities (like writing and executing code to solve problems, a form of self-augmentation) are areas of active development, albeit with caution given the possible risks.

- **Specialized Use-Case Agents:** We also see various domain-specific or task-specific agents emerging. For example, **code-writing and code-review agents** (such as those by Diffblue or at AWS, mentioned earlier) focus on software development tasks ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=On%20the%20Stack%20Overflow%20podcast%2C,system%20for%20his%20small%20org)). **Scientific research agents** aim to help in literature review or even hypothesis generation. **Personal AI assistants** like Inflection’s Pi or Replika are starting to incorporate more goal-driven behavior (like proactively suggesting actions or managing schedules, not just chatting). And companies like Adept are working on agents that can use existing software (e.g., reading a UI and clicking buttons on your behalf – essentially a supercharged RPA bot with AI brains). All these are manifestations of the core idea: an AI that doesn’t just output information, but can *take action in some environment*. The environments can be varied – a web browser, an IDE, a business application, a robot in the physical world – but the conceptual convergence is striking.

Importantly, there’s a continuum of how *autonomous* these systems are. Some agentic workflows are designed to always keep a human in the loop (for example, an AI drafts an email and waits for a human to approve before sending). Others can run to completion on their own unless a human intervenes. There’s also a continuum in complexity: from a single agent that occasionally uses one tool, up to large-scale multi-agent ecosystems. The current landscape covers this entire spectrum. What unites it is the pursuit of making AI more **useful** by having it carry out extended tasks, not just single responses.

To summarize the trends reflected by these platforms and tools:

- **Generality vs Specialization:** Many frameworks encourage specialized agents (each good at one thing) working together, rather than one giant agent that does it all. This mirrors how microservices work in software architecture and how humans specialize in organizations. It can also be a hedge against the limitations of any single model – one agent might use GPT-4 for reasoning, another might use a smaller model fine-tuned for a particular skill (say, parsing code), achieving efficiency and accuracy by division of labor ([OpenAI takes on rivals with new Responses API, Agents SDK | InfoWorld](https://www.infoworld.com/article/3844348/openai-takes-on-rivals-with-new-responses-api-agents-sdk.html#:~:text=agent%20collaboration%20support)).

- **Memory and Persistence:** Agentic systems often need to retain context over time (long-term memory of past interactions, results, or knowledge).
We see integrations with vector databases or state management APIs to give agents a memory beyond the current session ([LangGraph](https://www.langchain.com/langgraph#:~:text=Dynamic%20APIs%20for%20designing%20agent,UXs)) ([LangGraph](https://www.langchain.com/langgraph#:~:text=Dynamic%20APIs%20for%20designing%20agent,experience)). This addresses a key shortcoming of plain LLMs, which have limited context windows.

- **Tool Use as First-Class:** It’s now an expectation that an AI agent will use external tools or data sources to overcome knowledge cutoffs and computation limits. Every major toolkit provides some mechanism for defining tools the agent can call. This has become a fundamental part of what it means to be an agent in the LLM era: *knowing when and how to fetch information or take an action via an API*. Agents that can’t use tools are generally less trusted for real tasks, because they are prone to hallucinate or be limited by their training data. The ReAct and Toolformer research established that coupling language reasoning with API/tool use is key to reliability ([[2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629#:~:text=synergy%20between%20the%20two%3A%20reasoning,like)).

- **Human Interaction and UX:** Platforms like LangGraph and IBM Orchestrate emphasize designing good user experiences around agents – e.g., how a user can steer an agent mid-task, or how to display an agent’s chain-of-thought in a UI for transparency ([LangGraph](https://www.langchain.com/langgraph#:~:text=Craft%20personalized%20experiences%20with%20the,step%20work)) ([LangGraph](https://www.langchain.com/langgraph#:~:text=Craft%20personalized%20user%20experiences%20with,step%20work)). There’s recognition that users need to *trust* agentic systems, so exposing some of their rationale or giving users control (pause/adjust) is important. We also see efforts to standardize how agents communicate outcomes or ask for help from users.

- **Governance and Safety:** Because agentic workflows carry higher risk (an autonomous agent might do something undesired or reveal confidential info), current systems build in guardrails. OpenAI’s functions are constrained (the AI can’t call arbitrary OS commands unless allowed). Cloud providers integrate **monitoring** – e.g., Azure’s tools to trace each step an agent takes, or LangSmith (by LangChain) for logging and debugging agent behaviors. Google’s A2A is explicitly not a blanket free-for-all; it will likely integrate with identity and access controls so that agents only do what they’re permitted to. Expect increasing development of **“policy engines”** for AI agents, akin to how enterprises manage human or software process permissions.

Overall, the landscape is rapidly evolving. New libraries and extensions pop up almost weekly. But the trajectory is clear: agentic capabilities are moving from the fringe (cool demos on Twitter) to the mainstream of AI development. Even for those who primarily care about *generative* AI outputs (text, images, etc.), understanding agent frameworks is becoming important, because *content creation itself can be orchestrated by agentic processes*. For instance, generating a long report might involve an agent that decides to first gather data from various sources (perhaps querying an LLM for each sub-topic, assembling facts), then another agent that outlines the report structure, then one that writes each section – all autonomously.
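
That report pipeline can be sketched as a simple orchestrator over three specialized “agents”. Here each agent is just a Python function around a stubbed `llm` call; real frameworks would add memory, error handling, and tool use, so treat this purely as an illustration of the control flow:

```python
# Illustrative sketch of the report pipeline just described: gather -> outline
# -> write, chained by a plain-Python orchestrator. llm() is a stub.
def llm(prompt: str) -> str:
    return f"<LLM output for: {prompt[:40]}...>"  # stand-in for a model call

def gather_facts(topic: str, subtopics: list[str]) -> dict[str, str]:
    # Research agent: one focused query per sub-topic.
    return {s: llm(f"List key facts about {s} (topic: {topic})") for s in subtopics}

def outline(topic: str, facts: dict[str, str]) -> list[str]:
    # Planning agent: turn gathered facts into a section plan.
    plan = llm(f"Outline a report on {topic} given facts: {list(facts)}")
    return [plan]  # a real agent would parse this into section titles

def write_report(topic: str, sections: list[str], facts: dict[str, str]) -> str:
    # Writing agent: draft each planned section, then join the drafts.
    drafts = [llm(f"Write section '{s}' of a report on {topic} using {facts}") for s in sections]
    return "\n\n".join(drafts)

facts = gather_facts("agentic AI", ["definitions", "frameworks", "risks"])
report = write_report("agentic AI", outline("agentic AI", facts), facts)
```
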
This means the future of generative AI is tightly interwoven with the concept of agency.

## Implications for Education and Curriculum

The rise of agentic AI has significant implications for how we educate future professionals and how learning itself is delivered. In terms of **curriculum**, universities and training programs are beginning to incorporate these concepts so that students understand not just how to use AI tools, but how to **design and manage autonomous AI systems**. For example, courses on AI engineering now cover frameworks like LangChain or prompt engineering patterns for agents, whereas a few years ago they might have focused only on model building. We even see new specializations emerging: a Coursera specialization on *“Agentic AI and AI Agents for Leaders”* promises to equip business leaders to leverage these technologies ([Agentic AI and AI Agents for Leaders Specialization - Coursera](https://www.coursera.org/specializations/ai-agents-for-leaders#:~:text=Agentic%20AI%20and%20AI%20Agents,)).

From a computer science perspective, agentic systems combine elements of AI, software engineering, and human-computer interaction. So curricula are adapting to cover multidisciplinary skills: students need to learn about LLM prompting, yes, but also about classical AI planning algorithms, reinforcement learning basics, multi-agent coordination strategies, and ethical governance. Essentially, *AI education is shifting from just training models to orchestrating whole solutions*. Understanding how an AI agent decides on actions (and how that can go wrong) is becoming as important as understanding how a neural network learns from data.

For the learners themselves (in primary, secondary, or higher ed), agentic AI might transform the learning process. We already have generative AI tutors like Khan Academy’s **Khanmigo** that can adapt to a student’s queries. Agentic AI could power more **interactive and personalized tutoring systems** that do more than just answer questions. An agentic tutor might autonomously create a lesson plan tailored to a student, quiz them, adapt the difficulty based on real-time performance, and even liaise with a teacher’s system (reporting where the student struggles). Early experiments are pointing this way: *“agentic AI models can autonomously design lesson plans, adjust learning paths dynamically, and predict student performance”*, as one EdTech commentary notes ([AI & Agentic AI in Education: Shaping the Future of Learning - Medium](https://medium.com/accredian/ai-agentic-ai-in-education-shaping-the-future-of-learning-1e46ce9be0c1#:~:text=Medium%20medium,time)). This could alleviate some burdens on teachers by automating routine tutoring and grading tasks, allowing teachers to focus on higher-level mentoring.

Educators are also exploring using agents to assist with **administrative tasks**. For example, the Salesforce *“Agentforce”* platform (2025) is helping educational institutions deploy agents to streamline operations ([Salesforce Advances Innovation in Education with Agentic AI - Salesforce](https://www.salesforce.com/news/stories/agents-for-impact-education-cohort-2025/#:~:text=Image%3A%20Illustration%20of%20mountain)).
One use case from their cohort: a large public school district is developing an AI agent to *“streamline curriculum development and accelerate distribution of teacher-created materials”* ([Salesforce Advances Innovation in Education with Agentic AI - Salesforce](https://www.salesforce.com/news/stories/agents-for-impact-education-cohort-2025/#:~:text=%29%20childrenfirstfund,high%20school%20students%20for%20post)). Another nonprofit is using an agent to automate generating grant reports ([Salesforce Advances Innovation in Education with Agentic AI - Salesforce](https://www.salesforce.com/news/stories/agents-for-impact-education-cohort-2025/#:~:text=,powered%20agent%20that%20streamlines%20curriculum)). These examples show that agentic systems can handle the often overwhelming paperwork and coordination tasks in education, which means administrators and educators could redirect their time to direct student interaction or strategic planning.

For higher education institutions, incorporating agentic AI also means updating policies and guidance on AI use. If students have access to personal agents that can do research or even complete assignments, educators must rethink assessments to ensure learning outcomes are met honestly. On the positive side, students could use research agents as *learning companions* – imagine a history student tasking an AI agent to gather diverse sources on a topic, then debating the findings with the agent. This kind of active learning, facilitated by an AI, could deepen engagement.

Finally, the presence of agentic AI in education raises questions of digital literacy and ethics: we will need to teach students *how to work alongside AI agents*. This includes knowing how to give high-level goals to an AI, how to verify and critique the agent’s outputs, and how to incorporate AI-driven insights responsibly. Just as “internet literacy” became crucial in the 2000s, *“AI agent literacy”* might become a key competency. Some have advocated for introducing basic AI and agent concepts even at the K-12 level, so that by the time students reach college or the workforce, they’re comfortable treating AI as collaborators rather than just tools. Education might shift to emphasize what humans do best – creativity, critical thinking, interpersonal skills – while leveraging agents for rote learning and information retrieval.

In summary, curricula are evolving to teach *building* with agentic AI, and educational practice is evolving to *use* agentic AI for personalized learning and administrative efficiency. The net effect could be a more tailored, efficient, and interactive educational experience, but it will require careful integration and training so that human educators and AI agents complement each other’s strengths.

## Implications for Research Methodologies

Scientific and scholarly research stands to be significantly influenced by agentic AI systems. Research often involves repetitive but complex workflows: literature reviews, data collection and analysis, running experiments, writing and reviewing papers, etc. AI agents can serve as tireless research assistants, executing many of these steps at speeds and scales difficult for humans alone.

One immediate impact is on **literature review and knowledge synthesis**. An AI agent can be tasked with scouring academic databases for relevant papers, reading them (thanks to language understanding), and extracting key points or even compiling summaries.
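
The loop such an agent runs can be sketched as follows. Every helper here is a hypothetical stub (no real paper database or LLM API is assumed); the point is the shape of the workflow: search, filter, summarize, then let the agent refine its own next query.

```python
# Sketch of an iterative literature-review agent. All helpers are stubs.
def search_papers(query: str) -> list[str]:
    return [f"paper about {query} #{i}" for i in range(3)]      # stub search

def relevant(paper: str, topic: str) -> bool:
    return topic.split()[0] in paper                            # stub filter

def summarize(paper: str) -> str:
    return f"summary of {paper}"                                # stub LLM call

def find_gap(summaries: list[str]) -> str | None:
    # Stub judgment: a real agent would ask an LLM what is missing.
    return None if len(summaries) > 5 else "replication studies"

def literature_review(topic: str, max_rounds: int = 3) -> list[str]:
    query, summaries = topic, []
    for _ in range(max_rounds):
        papers = [p for p in search_papers(query) if relevant(p, topic)]
        summaries += [summarize(p) for p in papers]
        gap = find_gap(summaries)          # agent refines its own next query
        if gap is None:
            break
        query = f"{topic} {gap}"
    return summaries

print(literature_review("agentic AI"))
```
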
Unlike a simple search engine, an agent can iteratively refine its search – for instance, identify a gap or a debate in the literature and then specifically look for data that addresses it. We already see early tools where a researcher can say, “Agent, find all papers since 2020 that cite Theory X and summarize their findings,” and the agent will attempt to do just that. While these agents are not perfect and might miss nuances, they can vastly accelerate the preliminary research phase. They embody a *research workflow* (search -> evaluate relevance -> summarize) that would typically take a human days or weeks, and do it in hours.

In more empirical sciences, agentic workflows might help design and even execute experiments. For example, in software engineering research, an agent could generate code variants to test a hypothesis, run them, analyze the results, and decide the next experiments to run. This is analogous to automated lab robotics in chemistry or biology, but now extended to any process that can be automated via software. A notable instance is in **code testing**: the Stack Overflow blog mentioned an interview where Diffblue engineers use agentic AI to *“test complex code at scale”*, essentially letting AI agents generate and run tests on huge codebases ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=On%20the%20Stack%20Overflow%20podcast%2C,system%20for%20his%20small%20org)). Similarly, AWS used agentic methods to automate refactoring of a massive codebase (saving thousands of developer hours) ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=On%20the%20Stack%20Overflow%20podcast%2C,system%20for%20his%20small%20org)). In scientific data analysis, an AI agent might autonomously try different statistical analyses or machine learning models on a dataset, find the best fit, and even write up the findings.

This points to a future where research methodologies incorporate **AI-driven automation** at many stages. The role of the human researcher may shift more toward defining problems and interpreting results, while the grunt work of searching, calculating, optimizing, and even writing first drafts can be offloaded to agents. We already entrust calculations to computers; entrusting higher-level tasks to agents is the next logical step. For instance, some scientists are experimenting with having an agent continuously monitor new publications in their niche and alert them when something important comes up (beyond simple keyword alerts – the agent can judge *importance* or *novelty* in context). Others have used agents to assist in writing grant proposals by gathering necessary background data and ensuring all required sections are populated with relevant info.

However, integrating agentic AI in research also demands **new methodologies for validation**. If an AI agent suggests an experiment or produces a draft result, researchers need protocols to verify the correctness and credibility. This is analogous to how we validate the output of any computational tool (like double-checking a simulation’s results), but now the AI’s scope is broader. Keeping a *human in the loop* is often wise: an agent might generate a plausible-sounding but flawed interpretation of data, so human oversight is crucial.
One can imagine a future “AI Research Assistant” that always works alongside a human principal investigator, with a clear understanding that the AI handles volume and speed, while the human ensures quality and insight.

Agentic AI might also change **collaboration in research**. Multi-agent systems could simulate brainstorming sessions or multi-perspective analysis. You could deploy a *crew of agents* on a research problem – for example, one agent takes a devil’s advocate stance, another is tasked with finding methodological flaws, another generates creative solutions – effectively emulating a collaborative team of researchers. This could be particularly useful in interdisciplinary research, where one agent might “specialize” in, say, the biology aspect of a problem, another in the chemistry aspect, and they communicate to find integrated answers.

From the perspective of research training, upcoming scientists will need to learn how to leverage these AI collaborators. It’s foreseeable that PhD programs will include modules on using AI tools to conduct literature reviews or experiments. “Knowing your tools” in research will extend to knowing what AI agents can and cannot do for you. On the flip side, the availability of agentic AI could democratize research to an extent. Individuals or small teams with limited resources might compete with larger labs by using AI agents to amplify their productivity. A single person with a powerful suite of agents could potentially survey a field and prototype solutions as effectively as a traditional team. This is speculative, but it echoes the impact of personal computing: tasks that once required a support staff can now be done by one person with a laptop; soon, one person with an AI partner might do what used to require a research group.

## Implications for Workplace and Knowledge Work

Agentic AI systems are poised to transform knowledge work and professional practices across industries. In essence, any job that involves *“knowledge + action”* – gathering information, making decisions, and executing tasks – can be supported or partially automated by AI agents. This doesn’t necessarily mean replacing humans, but rather **augmenting** them or handling ancillary tasks so that humans can focus on higher-level work.

One clear impact is in the realm of **business workflows and operations**. Consider a typical business process, like processing an insurance claim or onboarding a new employee. Traditionally, these involve many steps across forms, databases, communications, and approvals. An agentic system can take on much of this coordination. For example, an AI agent in an HR department might automatically prepare all onboarding documents for a new hire, schedule their training sessions, set up accounts, and so on, by interacting with the relevant IT systems and calendars. Many companies are already building such digital workers. Doozer AI (mentioned in the context of A2A) describes itself as an *“agentic digital worker platform,”* suggesting businesses will deploy fleets of AI employees to handle routine operations ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=match%20at%20L220%20The%20interoperability,an%20agentic%20digital%20worker%20platform)). In customer service, instead of a single chatbot answering FAQs, we might have a set of agents handling the entire customer journey.
In such a workflow, one agent engages with the customer in natural language, another pulls up customer data and order history from internal databases, another perhaps interfaces with logistics to check on an order status, and together they resolve the inquiry seamlessly. Because these agents can work 24/7 and scale as needed, customer support could become faster and more personalized. Salesforce’s research found that **77% of college students would use AI agents for tasks like time management or course registration** ([Salesforce Advances Innovation in Education with Agentic AI - Salesforce](https://www.salesforce.com/news/stories/agents-for-impact-education-cohort-2025/#:~:text=digital%20channels.%20,academic%20advising%2C%20and%20course%20registration)); translated to the enterprise, many employees and customers are likely to be similarly open to AI assistance for mundane tasks if it proves effective.

Knowledge workers – analysts, consultants, lawyers, marketers – will likely incorporate AI agents into their daily routines. Take a financial analyst: they could have an agent that continuously monitors market news and data feeds, filters out significant changes, and even executes simple trades or alerts the analyst when conditions meet certain criteria. In consulting, an agent could automate gathering industry benchmarks and assembling slides from templates, giving human consultants more time to devise strategies. In software development, as noted earlier, agents can handle code reviews, generate boilerplate code, or manage DevOps pipelines autonomously ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=Another%20set%20of%20use%20cases,specified%20by%20the%20human%20user)) ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=On%20the%20Stack%20Overflow%20podcast%2C,system%20for%20his%20small%20org)). Moveworks, an enterprise AI company, notes that agentic AI can *“turn software development into a collaborative process in which the AI agent executes against the goal and constraints specified by the human user”*, effectively acting as an intelligent junior developer ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=Another%20set%20of%20use%20cases,specified%20by%20the%20human%20user)). The human developer moves into a supervisory and creative role, overseeing multiple AI agents that write and test code.

One interesting emerging practice is the idea of a **“chief agent” or AI project manager**. If you have multiple AI agents working on different sub-tasks in a project (say, building a website: one writes content, one designs the layout, one codes), you might have another agent overseeing the project – ensuring everything is consistent and on schedule, much like a human project manager would. Some experimental setups have even tried this, assigning GPT-4 agents roles like “CEO”, “CTO”, and “Engineer” and tasking them to collaborate on a fictitious company project. Results are mixed at this stage, but it is a compelling vision of workplace automation: a hierarchy of AI agents mirroring a human organizational structure. OpenAI’s “GPTs” (announced in late 2023 as customized ChatGPT-based agents for specific tasks) hint at this idea of many bespoke agents handling parts of a workflow.
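As a toy illustration of the chief-agent pattern – a sketch of the general idea, not any published system – the manager can be one model call that plans, worker calls that execute, and a final call that reviews. The `llm` function below is a hypothetical stand-in for any chat-completion client:

```python
# Toy "chief agent" pattern: a manager decomposes a goal, delegates to
# role agents, and reviews the merged result for consistency.
def llm(system: str, prompt: str) -> str:
    raise NotImplementedError  # hypothetical call to your model provider

ROLES = ["content writer", "layout designer", "frontend coder"]

def run_project(goal: str) -> str:
    # The manager plans one subtask per role.
    plan = llm("You are a project manager.",
               f"Split this goal into one subtask per role {ROLES}: {goal}")
    subtasks = plan.splitlines()[: len(ROLES)]

    # Each worker agent executes its subtask independently.
    outputs = [
        llm(f"You are a {role}.", f"Complete this subtask: {task}")
        for role, task in zip(ROLES, subtasks)
    ]

    # The manager reviews the combined work before sign-off.
    return llm("You are a project manager.",
               "Check these deliverables for consistency and merge them:\n"
               + "\n---\n".join(outputs))
```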
For employees, working alongside AI agents will become commonplace. Just as today many people interact with AI (like autocomplete or recommendation systems) without marveling at it, tomorrow a worker might routinely delegate subtasks to an AI agent: “Agent, draft me a report on last quarter’s sales trends” or “Agent, coordinate finding a meeting time for 10 people next week.” In fact, scheduling meetings is a good example – we already had AI schedulers like x.ai’s Amy a few years back, a narrow agent that handled emails to set up meetings. With more advanced agentic AI, such tasks can be done more robustly and with contextual understanding (e.g., knowing which meetings are high-priority versus deferrable).

The **future of knowledge work** will likely emphasize human judgment, empathy, and creative problem-solving, complemented by AI agents handling the grind of data processing and routine decision-making. Many roles may be redefined as *“AI-enabled roles.”* For instance, a financial auditor might primarily supervise AI agents that comb through transactions for anomalies, digging in personally only when an agent flags something unusual that requires nuanced assessment. A marketing specialist might spend more time crafting high-level campaign ideas and let an AI crew test hundreds of ad variants across platforms and report back on what works.

We are also seeing companies measure productivity in terms of how well humans leverage AI. It is plausible that in performance reviews, a worker’s effectiveness in utilizing AI tools could be a factor. Those who figure out optimal workflows with their AI co-workers will likely outperform those who don’t – analogous to how workers who embraced computers outperformed those who stuck with typewriters.

From an organizational perspective, agentic systems pose some challenges: oversight (you need logs of what agents did, for compliance), security (agents might have access to sensitive systems to do their jobs, so authenticating and limiting that access is critical), and training (workers need to be trained to work with agents, and agents may need “onboarding” to a company’s processes as well). We might even see new roles like an “AI workflow trainer” or “agent supervisor” whose job is to set up these agentic processes correctly and monitor them, somewhat like an RPA (Robotic Process Automation) engineer today.
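On the oversight and security points, even a thin enforcement layer between agents and internal systems goes a long way. Below is a minimal sketch – the tool names and log format are illustrative – in which every tool call an agent attempts is checked against an allow-list and recorded to an append-only audit log:

```python
# Minimal oversight layer: allow-list plus append-only audit log for
# agent tool calls. Tool names here are illustrative.
import functools
import json
import time

ALLOWED_TOOLS = {"read_customer_record", "check_order_status"}  # no write access
AUDIT_LOG = "agent_audit.jsonl"

def audited_tool(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Block anything outside the allow-list before it touches a system.
        if func.__name__ not in ALLOWED_TOOLS:
            raise PermissionError(f"Agent may not call {func.__name__}")
        entry = {"tool": func.__name__, "args": repr(args),
                 "kwargs": repr(kwargs), "ts": time.time()}
        # Record the attempt before it runs, so failures are logged too.
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return func(*args, **kwargs)
    return wrapper

@audited_tool
def read_customer_record(customer_id: str) -> dict:
    # Stub: in a real system this would query the CRM.
    return {"customer_id": customer_id}
```

The same wrapper can be attached to every tool a framework exposes to its agents, giving compliance teams a single decision trail to review.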
Finally, at a macroeconomic level, if agentic AI significantly boosts productivity in knowledge sectors, it could drive major shifts. Some routine jobs may be diminished while new jobs emerge. Companies may need fewer people for certain tasks but might create new roles focused on higher-level value creation (or on developing and maintaining the AI workforce). Ensuring that this transition is smooth will require thoughtful change management and possibly upskilling programs – essentially training the existing workforce to harness agentic AI rather than be displaced by it. Historically, tool advances (from spreadsheets to the internet) have generally elevated the nature of work rather than eliminated it, but the pace and extent of AI’s impact could be unprecedented.

To wrap up, agentic AI holds the promise of a workplace where humans and AI systems collaborate closely: AI agents handle the busywork and provide decision support, while humans guide the overall objectives and handle the ambiguous, interpersonal, or creative aspects that AI still struggles with. Work could become more about supervising flows of intelligent automation and injecting human insight where needed. In this scenario, productivity could soar, and job satisfaction might even increase if workers are freed from drudgery – but realizing this outcome will require rethinking job design and ensuring humans remain **in control and in the loop** where it truly matters.

## Conclusion

The emergence of agents and agentic workflows marks a pivotal chapter in the evolution of AI. We have moved beyond seeing AI as simply a tool that generates outputs to envisioning AI as a **partner that can plan, decide, and act** within complex processes. This shift is underpinned by advances in generative models, which grant machines a broad and flexible understanding of tasks, and by engineering innovations that let us chain and coordinate AI capabilities in sophisticated ways. The historical journey from Shakey the robot to today’s AutoGPTs and Agent2Agent protocols is a story of steadily increasing AI autonomy and coordination – a trajectory that shows no sign of slowing.

In practical terms, agentic AI is already changing how we build software, how we conduct research, how we run businesses, and how we learn. Organizations that harness these technologies effectively may gain significant advantages in efficiency and innovation. At the same time, the widespread deployment of autonomous AI agents raises important questions. How do we ensure these agents remain aligned with human values and objectives? What new oversight and governance mechanisms do we need? How do we prepare the workforce to collaborate with AI, and students to enter a world where such collaboration is the norm? Addressing these questions will require input from technologists, educators, policymakers, and ethicists alike.

There will likely be trial and error – early agentic systems will occasionally fail or behave unpredictably, prompting refinements in design and policy. But the momentum suggests that agentic workflows will become as ubiquitous in the digital landscape as web apps or cloud services are today. Major tech companies are actively open-sourcing tools, establishing standards, and sharing research to accelerate progress while aiming for safety ([OpenAI takes on rivals with new Responses API, Agents SDK | InfoWorld](https://www.infoworld.com/article/3844348/openai-takes-on-rivals-with-new-responses-api-agents-sdk.html#:~:text=Andersen%20said%20the%20SDK%20is,agent%20collaboration%20support)) ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=match%20at%20L284%20The%20A2A,complement%20each%20other%2C%20analysts%20said)). The community is coalescing around best practices, like keeping humans “in the loop” for critical decisions, logging agent decision trails for auditability, and sandboxing high-risk actions.

In the coming years, we can expect **more seamless integration of agents into daily life**. Your personal devices might host a constellation of agents – one managing your health appointments, another optimizing your home energy use, another tutoring your child – all possibly coordinating with each other when needed. On a societal scale, agentic AI could help tackle complex challenges by rapidly analyzing and acting on information (imagine disaster response aided by swarms of information-gathering and logistics-coordinating agents). The possibilities are vast.
In conclusion, understanding agents and agentic workflows is increasingly crucial for anyone involved in AI development or utilization. These concepts encapsulate the drive towards AI systems that are not only intelligent, but *action-oriented and collaborative*. By learning from the past (the foundational theories of agency), observing the present (the platforms and projects leading the way), and carefully shaping the future (through thoughtful integration into human endeavors), we can leverage agentic AI to amplify human potential. The goal is not AI for its own sake, but AI that works *with us* and *for us* – as autonomous as it needs to be, but always in service of human-defined goals. Achieving that balance will define the success of this new era of agentic AI.

**Sources:**

- Russell & Norvig’s definition of an AI agent ([Norvig's Agent Definition](https://www.linkedin.com/pulse/norvigs-agent-definition-matt-rickard-msmcf#:~:text=In%201995%2C%20Stuart%20J,Artificial%20Intelligence%3A%20A%20Modern%20Approach))
- Wooldridge’s criteria for agency ([Intelligent Agents: Exploring Definitions and Bridging Classical and Modern Views | by Makbule Gulcin Ozsoy | Medium](https://medium.com/@makbule.ozsoy_73232/intelligent-agents-exploring-definitions-and-bridging-classical-and-modern-views-b1a97a1514e2#:~:text=Agents%2C%20under%20the%20weak%20definition%2C,directed%20behaviour))
- Stack Overflow Blog – *What is agentic AI?* ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=Caveats%20aside%2C%20agentic%20AI%20refers,problems%20on%20a%20user%E2%80%99s%20behalf)) ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=1,in%20furtherance%20of%20their%20goals))
- Medium (TDS) – *LangChain vs LangGraph* (agent vs chain) ([AI Agent Workflows: A Complete Guide on Whether to Build With LangGraph or LangChain | by Sandi Besen | TDS Archive | Medium](https://medium.com/data-science/ai-agent-workflows-a-complete-guide-on-whether-to-build-with-langgraph-or-langchain-117025509fa0#:~:text=LangChain))
- Leo Rover – Shakey the Robot’s capabilities ([Leo Rover Blog - What was the world’s first mobile intelligent robot?](https://www.leorover.tech/post/what-was-the-worlds-first-mobile-intelligent-robot#:~:text=Shakey%20was%20the%20first%20mobile,a%20course%20to%20avoid%20obstacles))
- ArXiv – *ReAct: Synergizing Reasoning and Acting in LLMs* ([[2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629#:~:text=,named%20ReAct%2C%20to%20a%20diverse)) ([[2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629#:~:text=set%20of%20language%20and%20decision,ALFWorld%20and))
- IBM – *What is AutoGPT?* (AI agents definition) ([What is AutoGPT? | IBM](https://www.ibm.com/think/topics/autogpt#:~:text=What%20are%20AI%20agents%3F)) ([What is AutoGPT? | IBM](https://www.ibm.com/think/topics/autogpt#:~:text=AutoGPT%20is%20an%20example%20of,include%20crewAI%2C%20LangGraph%20and%20AutoGen))
- InfoWorld – OpenAI’s Agents SDK (Swarm, workflow orchestration) ([OpenAI takes on rivals with new Responses API, Agents SDK | InfoWorld](https://www.infoworld.com/article/3844348/openai-takes-on-rivals-with-new-responses-api-agents-sdk.html#:~:text=The%20open%20source%20Agents%20software,enterprises%20have%20already%20adopted%20it)) ([OpenAI takes on rivals with new Responses API, Agents SDK | InfoWorld](https://www.infoworld.com/article/3844348/openai-takes-on-rivals-with-new-responses-api-agents-sdk.html#:~:text=Andersen%20said%20the%20SDK%20is,agent%20collaboration%20support))
- InfoWorld – Google’s Agent2Agent protocol (agent interoperability) ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=The%20interoperability%20being%20offered%20by,an%20agentic%20digital%20worker%20platform)) ([Google’s Agent2Agent open protocol aims to connect disparate agents | InfoWorld](https://www.infoworld.com/article/3958032/googles-agent2agent-open-protocol-aims-to-connect-disparate-agents.html#:~:text=model%2C%20the%20A2A%20protocol%C2%A0focuses%20on,interaction%20between%20different%20AI%20agents))
- IBM – *AI agent orchestration* (continuum from chatbots to agentic AI) ([What is AI Agent Orchestration? | IBM](https://www.ibm.com/think/topics/ai-agent-orchestration#:~:text=AI%20assistants%20exist%20on%20a,This%20is%20agentic%20AI)) ([What is AI Agent Orchestration? | IBM](https://www.ibm.com/think/topics/ai-agent-orchestration#:~:text=In%20practice%2C%20AI%20agent%20orchestration,are%20run%20seamlessly%20and%20efficiently))
- Medium – CrewAI (collaborative multi-agent “crews”) ([A Complete Guide to CREW AI and Agentic Frameworks: Unleashing the Power of Autonomous AI Crews | by Harsha Vanukuri | Medium](https://medium.com/@harshav.vanukuri/a-complete-guide-to-crew-ai-and-agentic-frameworks-unleashing-the-power-of-autonomous-ai-crews-9911f39110f5#:~:text=CrewAI%20elevates%20the%20concept%20of,solving%20much%20more%20complex%20challenges)) ([A Complete Guide to CREW AI and Agentic Frameworks: Unleashing the Power of Autonomous AI Crews | by Harsha Vanukuri | Medium](https://medium.com/@harshav.vanukuri/a-complete-guide-to-crew-ai-and-agentic-frameworks-unleashing-the-power-of-autonomous-ai-crews-9911f39110f5#:~:text=For%20example%2C%20imagine%20a%20research,sequence%20or%20even%20in%20parallel))
- Moveworks Blog – AI agents vs agentic AI ([Agentic AI Vs AI Agents: 5 Differences and Why They Matter | Moveworks](https://www.moveworks.com/us/en/resources/blog/agentic-ai-vs-ai-agents-definitions-and-differences#:~:text=When%20multiple%20AI%20agents%20work,to%20solve%20customer%20issues%20efficiently)) ([Agentic AI Vs AI Agents: 5 Differences and Why They Matter | Moveworks](https://www.moveworks.com/us/en/resources/blog/agentic-ai-vs-ai-agents-definitions-and-differences#:~:text=Agentic%20AI%20refers%20to%20artificial,adapting%20capabilities%2C%20and%20advanced%20reasoning))
- AWS Blog – *Build agentic systems with CrewAI* (enterprise adoption stats, agent definition) ([Build agentic systems with CrewAI and Amazon Bedrock | AWS Machine Learning Blog](https://aws.amazon.com/blogs/machine-learning/build-agentic-systems-with-crewai-and-amazon-bedrock/#:~:text=The%20enterprise%20AI%20landscape%20is,transformative%20potential%20of%20these%20technologies)) ([Build agentic systems with CrewAI and Amazon Bedrock | AWS Machine Learning Blog](https://aws.amazon.com/blogs/machine-learning/build-agentic-systems-with-crewai-and-amazon-bedrock/#:~:text=An%20AI%20agent%20is%20an,repetitive%20tasks%20across%20diverse%20sectors))
- Stack Overflow Blog – agentic AI in developer workflows (reducing hallucinations, transforming processes) ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=AI%20agents%20are%20generally%20better,%E2%80%9D)) ([Wait, what is agentic AI? - Stack Overflow](https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/#:~:text=Another%20set%20of%20use%20cases,specified%20by%20the%20human%20user))
- Salesforce News – Agentic AI in Education (Agentforce examples) ([Salesforce Advances Innovation in Education with Agentic AI - Salesforce](https://www.salesforce.com/news/stories/agents-for-impact-education-cohort-2025/#:~:text=By%20equipping%20education%20organizations%20with,and%20their%20Agentforce%20solutions%20include)) ([Salesforce Advances Innovation in Education with Agentic AI - Salesforce](https://www.salesforce.com/news/stories/agents-for-impact-education-cohort-2025/#:~:text=%29%20childrenfirstfund,high%20school%20students%20for%20post))
- UPCEA – *Setting context for Agentic AI in Higher Ed* (Sam Altman’s AI levels) ([Setting a Context for Agentic AI in Higher Ed - UPCEA](https://upcea.edu/setting-a-context-for-agentic-ai-in-higher-ed/#:~:text=,ASI)) ([Setting a Context for Agentic AI in Higher Ed - UPCEA](https://upcea.edu/setting-a-context-for-agentic-ai-in-higher-ed/#:~:text=On%20January%2023%2C%202025%2C%C2%A0OpenAI%20released,broadly%20the%20collective%20knowledge%20and))