🔬 Research Summary by Dr. Qinghua Lu, leader of the Responsible AI science team at CSIRO’s Data61 and winner of the 2023 APAC Women in AI Trailblazer Award.
[Original paper by Qinghua Lu, Liming Zhu, Xiwei Xu, Zhenchang Xing, Stefan Harrer, and Jon Whittle]
Overview: Foundation models, such as large language models (LLMs), have been widely recognized as transformative AI technologies due to their capabilities to understand and generate content, including plans produced through reasoning. Foundation model-based agents derive their autonomy from the capabilities of foundation models, which enable them to autonomously break down a given goal into manageable tasks and orchestrate task execution to meet the goal. This paper presents a pattern-oriented reference architecture that serves as guidance when designing foundation model-based agents. We evaluate the completeness and utility of the proposed reference architecture by mapping it to the architecture of two real-world agents.
Introduction
Foundation models (FMs), such as large language models (LLMs), have been widely recognized as transformative generative artificial intelligence (AI) technologies due to their remarkable capabilities to understand and generate content. Recently, there has been rapidly growing interest in the development of FM-based autonomous agents, such as Auto-GPT and BabyAGI. With autonomous agents, users only need to provide a high-level goal rather than explicit step-by-step instructions. These agents derive their autonomy from the capabilities of FMs, enabling them to autonomously break down the given goal into manageable tasks and orchestrate task execution to fulfill the goal. Nevertheless, the architectural design of such agents has not yet been systematically explored. Many reusable solutions have been proposed to address the diverse challenges of designing FM-based agents, which motivates the design of a reference architecture for them. We have therefore performed a systematic literature review on FM-based agents and identified a collection of architectural components and patterns that address different challenges of agent design. This paper presents a pattern-oriented reference architecture that provides design guidance for FM-based agents. We evaluate the completeness and utility of the proposed reference architecture by mapping it to the architecture of two real-world agents.
Key Insights
We provide an architectural overview of an agent-based ecosystem. Users define high-level goals for the agents to achieve. The agents can be categorized into two types: agent-as-a-coordinator and agent-as-a-worker. Agents in the coordinator role primarily formulate high-level strategies and orchestrate the execution of tasks by delegating task execution responsibilities to other agents, external tools, or non-agent systems. On the other hand, agents in the worker role need to generate strategies and execute specific tasks in line with those strategies. To complete these tasks, agents in the worker role may need to cooperate or compete with other agents or call external tools or non-agent AI/non-AI systems.
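The coordinator/worker distinction can be made concrete with a minimal sketch. This is an illustrative Python example, not code from the paper; all class and method names are assumptions. A coordinator only orchestrates and delegates, while a worker both plans and executes:

```python
# Illustrative sketch of the two agent roles (names are assumptions).
class WorkerAgent:
    """Generates a strategy for a task and executes it."""

    def execute(self, task: str) -> str:
        # In practice this would call an FM and possibly external tools
        # or non-agent systems; stubbed here for illustration.
        return f"done: {task}"


class CoordinatorAgent:
    """Formulates high-level strategy and delegates execution to workers."""

    def __init__(self, workers: list[WorkerAgent]):
        self.workers = workers

    def achieve(self, goal: str, tasks: list[str]) -> list[str]:
        # The coordinator never executes tasks itself; it only assigns
        # them (round-robin here) to worker agents.
        return [self.workers[i % len(self.workers)].execute(t)
                for i, t in enumerate(tasks)]
```

In a real ecosystem, the delegation targets could equally be external tools or non-agent AI/non-AI systems rather than worker agents.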
Interaction engineering comprises two components: context engineering and prompt/response engineering. Context engineering is designed to collect and structure the agent’s context to understand the user’s goals. In contrast, prompt/response engineering generates prompts/responses, enabling the FM-based agents to successfully achieve the user’s goals.
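The division of labor between the two components can be sketched as follows. This is an assumed illustration: the function names and the prompt template are not from the paper, and a real agent would assemble far richer context.

```python
# Illustrative sketch: context engineering vs. prompt engineering.
def build_context(goal: str, memory: list[str], tools: list[str]) -> dict:
    """Context engineering: collect and structure everything the FM
    needs to understand the user's goal."""
    return {"goal": goal, "memory": memory, "tools": tools}


def build_prompt(context: dict) -> str:
    """Prompt engineering: render the structured context into the
    textual prompt actually sent to the FM."""
    return (
        f"Goal: {context['goal']}\n"
        f"Relevant memory: {'; '.join(context['memory'])}\n"
        f"Available tools: {', '.join(context['tools'])}\n"
        "Plan the next step."
    )
```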
Two patterns can be applied for comprehending and shaping the goals: passive goal creator and proactive goal creator. Passive goal creator analyses the user’s articulated goals, as described through text prompts submitted by the user via the dialogue interface. Conversely, a proactive goal creator goes beyond the explicit user text prompt and anticipates the user’s goals by understanding the user interface (UI) of relevant tools and human interaction.
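The two goal-creator patterns can be contrasted in a short sketch. This is a hypothetical Python rendering, not an interface from the paper; the `Goal` dataclass and all signatures are assumptions.

```python
# Illustrative sketch of the passive vs. proactive goal creator patterns.
from dataclasses import dataclass, field


@dataclass
class Goal:
    description: str
    context: dict = field(default_factory=dict)


class PassiveGoalCreator:
    """Derives the goal solely from the user's explicit text prompt."""

    def create_goal(self, prompt: str) -> Goal:
        return Goal(description=prompt.strip())


class ProactiveGoalCreator:
    """Anticipates the goal by also capturing context beyond the prompt,
    e.g. the state of relevant tool UIs and the interaction history."""

    def create_goal(self, prompt: str, ui_state: dict,
                    interaction_history: list[str]) -> Goal:
        context = {"ui_state": ui_state, "history": interaction_history}
        return Goal(description=prompt.strip(), context=context)
```

The proactive variant trades extra sensing and privacy considerations for a richer understanding of what the user actually wants.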
The agent’s memory stores current context information, historical data, and knowledge accumulated over time to inform planning and actions. The memory is structured using short-term memory and long-term memory. Short-term memory refers to the information within the context window of the FM, which is in context and can be accessed by the FM during inference. To enhance the storage capacity of memory, long-term memory refers to the information maintained outside the context window of the FM.
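A minimal sketch of the two memory tiers, under stated assumptions: the context window is modeled as a bounded buffer of recent items, and long-term retrieval is a naive keyword match standing in for the semantic (e.g. vector-store) retrieval a real agent would use.

```python
# Illustrative sketch of short-term vs. long-term agent memory.
from collections import deque


class ShortTermMemory:
    """Holds only what fits in the FM's context window
    (approximated here as the last N items)."""

    def __init__(self, window_size: int = 8):
        self.window = deque(maxlen=window_size)

    def add(self, item: str) -> None:
        self.window.append(item)  # oldest items fall out automatically

    def context(self) -> list[str]:
        return list(self.window)


class LongTermMemory:
    """Persists information outside the context window, retrieved on
    demand to extend the agent's effective storage capacity."""

    def __init__(self):
        self.store: list[str] = []

    def add(self, item: str) -> None:
        self.store.append(item)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive substring match as a stand-in for semantic retrieval.
        hits = [s for s in self.store if query.lower() in s.lower()]
        return hits[:k]
```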
To achieve the user’s goal, the agent needs to work out strategies and make a plan accordingly. There are two design patterns for plan generation: single-path plan generator and multi-path plan generator. The single-path plan generator orchestrates the generation of intermediate steps to achieve the user’s goal. Each step is designed to have only one subsequent step. On the other hand, a multi-path plan generator allows multiple choices at each step. Each intermediate step may lead to multiple subsequent steps.
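The structural difference between the two patterns can be shown with plain data structures. This is an illustrative sketch with made-up task names; in practice the plan would be generated by FM calls, which are elided here.

```python
# Illustrative sketch of the two plan-generation patterns.

# Single-path plan: a linear sequence; each step has exactly one successor.
single_path_plan = ["gather requirements", "draft outline", "write report"]

# Multi-path plan: each step maps to its possible next steps, forming a
# tree/graph of alternatives the agent can explore.
multi_path_plan = {
    "gather requirements": ["interview users", "review documents"],
    "interview users": ["draft outline"],
    "review documents": ["draft outline"],
    "draft outline": ["write report"],
}


def enumerate_paths(plan: dict, start: str, goal: str) -> list[list[str]]:
    """Depth-first enumeration of all routes from start to goal in a
    multi-path plan."""
    if start == goal:
        return [[goal]]
    paths = []
    for nxt in plan.get(start, []):
        for tail in enumerate_paths(plan, nxt, goal):
            paths.append([start] + tail)
    return paths
```

A single-path plan is simply the degenerate case with exactly one route through the graph.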
Once the plan is determined, the role of the execution engine is to put the plan into action. The task executor is responsible for performing the tasks outlined in the plan. Within the task executor, a task monitor is necessary to monitor the task’s execution status and manage the tasks queued for execution. The tool/agent selector can search in the tool/agent registry/marketplace or on the web to find the relevant tools and agents to complete the tasks. An FM-based ranker can be applied to analyze the performance of the tools/agents and identify the best ones. The tool/agent generator can automatically create tools and agents based on natural language requirements.
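The interplay of task executor, task monitor, and tool/agent selector can be sketched as below. All names are assumptions; the FM-based ranker is mocked as pre-computed scores in a registry, and tool invocation itself is stubbed out.

```python
# Illustrative sketch of the execution engine components (names assumed).
from queue import Queue


class TaskMonitor:
    """Tracks the execution status of each task."""

    def __init__(self):
        self.status: dict = {}

    def update(self, task: str, state: str) -> None:
        self.status[task] = state


class ToolSelector:
    """Picks the best tool from a registry; scores would come from an
    FM-based ranker in practice."""

    def __init__(self, registry: dict):
        self.registry = registry  # tool name -> ranker score

    def best_tool(self) -> str:
        return max(self.registry, key=self.registry.get)


class TaskExecutor:
    """Works through the queued tasks of a plan."""

    def __init__(self, monitor: TaskMonitor, selector: ToolSelector):
        self.monitor = monitor
        self.selector = selector
        self.queue = Queue()

    def submit(self, task: str) -> None:
        self.queue.put(task)
        self.monitor.update(task, "queued")

    def run_all(self) -> list:
        results = []
        while not self.queue.empty():
            task = self.queue.get()
            self.monitor.update(task, "running")
            tool = self.selector.best_tool()
            results.append((task, tool))  # a real agent would invoke the tool
            self.monitor.update(task, "done")
        return results
```

The tool/agent generator would sit alongside the selector, synthesizing a new tool when the registry search comes up empty.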
A set of patterns can be adopted as plugins to ensure responsible AI (RAI). A continuous risk assessor continuously monitors and assesses AI risk metrics to prevent the agent’s misuse and to ensure the agent’s trustworthiness. A black box recorder records the runtime data, which can then be shared with relevant stakeholders to enable transparency and accountability. All these data must be kept as evidence with timestamp and location data, e.g., using a blockchain-based immutable log. A human or AI verifier is responsible for checking whether the final or intermediate outputs meet the specified requirements, such as topic or trustworthiness requirements. A specific type of monitoring is called guardrails: a layer between FMs or fine-tuned FMs and other components or systems. Guardrails can be built on an RAI knowledge base, narrow models, or FMs. The RAI FMs can be fine-tuned or call upon a knowledge base to support RAI controls. The explainer’s role is to articulate the agent’s roles, capabilities, limitations, the rationale behind its intermediate or final outputs, and ethical or legal implications. The external systems, including tools, agents, and FMs, can be associated with an AI Bill Of Materials (AIBOM) that records their supply chain details, including AI risk metrics or verifiable responsible AI credentials. As fine-tuned FMs are released ever more rapidly, multiple FM variants will increasingly co-exist and be served at the same time. The co-versioning registry can be applied to co-version the AI components, such as FMs and fine-tuned FMs.
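Two of these plugins, the guardrail and the black box recorder, can be sketched together. This is a toy illustration under clear assumptions: the guardrail check is a keyword rule, whereas production guardrails would draw on an RAI knowledge base, narrow models, or FMs, and the log would be immutable (e.g. blockchain-based) rather than an in-memory list.

```python
# Illustrative sketch: a guardrail layered in front of the FM, logging
# every decision to a black box recorder (toy implementations).
import time


class BlackBoxRecorder:
    """Records timestamped runtime events for transparency and audit."""

    def __init__(self):
        self.log: list = []

    def record(self, event: str, detail: str) -> None:
        # A real deployment would also capture location data and write
        # to an immutable store.
        self.log.append({"ts": time.time(), "event": event, "detail": detail})


class Guardrail:
    """A layer between the FM and other components that screens content."""

    BLOCKED_TOPICS = {"violence", "self-harm"}  # toy rule set

    def __init__(self, recorder: BlackBoxRecorder):
        self.recorder = recorder

    def check(self, text: str) -> bool:
        ok = not any(t in text.lower() for t in self.BLOCKED_TOPICS)
        self.recorder.record("guardrail_check", "pass" if ok else "block")
        return ok
```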
Between the lines
The contributions of this research project include a reference architecture for FM-based agents that can be used as a template to guide architecture design, and a collection of architectural patterns that can be applied in the design of FM-based agents to ensure trustworthiness and address responsible-AI-related software qualities.
This reference architecture functions as an architecture design template and enables responsible-AI-by-design. We evaluate the correctness and utility of our proposed reference architecture by mapping it to the architecture of two existing real-world agents. In future work, we plan to develop decision models for selecting patterns to further assist the design of FM-based agents.