🔬 Research Summary by Dr Qinghua Lu, the team leader of the responsible AI science team at CSIRO’s Data61.
[Original paper by Qinghua Lu, Liming Zhu, Xiwei Xu, Zhenchang Xing, Jon Whittle]
Overview: Incorporating foundation models into software products, as in ChatGPT, raises significant concerns regarding responsible AI, primarily due to their opaque nature and rapid advances. Moreover, as foundation models' capabilities grow rapidly, they may eventually absorb other components of AI systems, which introduces architecture design challenges such as shifting boundaries and evolving interfaces. This paper proposes a responsible-AI-by-design reference architecture to tackle these challenges when designing foundation model-based AI systems.
How can we effectively leverage the potential of foundation models, such as LLMs, while managing risks and mitigating potential harms in their development?
The opaque nature of foundation models such as GPT-4, together with their rapid advancement, has raised concerns about responsible AI. Moreover, as foundation models become more capable, they may absorb other components of AI systems, presenting challenges in architecture design related to boundary shifts and interface evolution. To address these issues, it is crucial to establish concrete system-level guidance for designing AI systems based on foundation models. This paper explores the architectural evolution of AI systems in the foundation model era and emphasizes the essential quality attributes required for the responsible design of foundation model-based AI systems. It identifies key decision points during the design process and proposes a pattern-oriented reference architecture that offers a template for designing responsible foundation model-based AI systems.
The architecture evolution of AI systems can be categorized into three stages:
· Current architecture: The current architecture involves the coexistence of AI models and non-AI components within the architecture. The AI models process data and make inferences, while the non-AI components handle tasks such as user interfaces, data storage, and system interactions.
· Near-future architecture: In the near-future architecture, the foundation model acts as a connector for external components, including small AI models and non-AI components, providing communication, coordination, conversion, and facilitation services. Prompt engineering plays a crucial role in early versions, guiding the foundation model to generate high-quality responses; over time, however, it may be absorbed into the foundation model and eventually disappear.
· Future architecture: There are two potential alternatives for the future architecture. The first is a modularized architecture, such as Socratic Models, which relies on a chain of foundation models together with a limited number of AI and non-AI components. These foundation models, including large language models (LLMs), visual language models, and audio language models, exchange information through multimodal interactions to produce task-specific outputs. The second is a monolithic architecture built around a single, ultra-large foundation model capable of performing various tasks by incorporating different types of sensor data for cross-training. An example of this type of architecture is PaLM-E, which handles language, visual language, and reasoning tasks. External components, including prompt components, are not required in this architecture.
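The modularized alternative can be sketched as a simple pipeline in which models exchange information through language. The three "models" below are hypothetical stubs standing in for a visual language model, an audio language model, and an LLM; a real Socratic-Models-style system would call actual model APIs at each step.

```python
# Minimal sketch of a chain of foundation models (modularized architecture).
# Each stub converts its modality into language, the common interchange format.

def visual_lm(image_desc: str) -> str:
    # Stub: a visual language model describing an image in text.
    return f"a photo of {image_desc}"

def audio_lm(audio_desc: str) -> str:
    # Stub: an audio language model describing a sound in text.
    return f"the sound of {audio_desc}"

def llm(prompt: str) -> str:
    # Stub: an LLM reasoning over the combined language descriptions.
    return f"Answer based on: {prompt}"

def socratic_chain(image_desc: str, audio_desc: str, question: str) -> str:
    """Compose the models through language to produce a task-specific output."""
    context = f"{visual_lm(image_desc)}; {audio_lm(audio_desc)}"
    return llm(f"{context}. Question: {question}")
```

The key architectural property is that each model is replaceable behind its language interface, which is what distinguishes this alternative from the monolithic one.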
Software quality attributes such as adaptability and modifiability should be considered in the evolving architecture, as they impact the long-term maintainability of the system. Conventional software design patterns, such as the microkernel and adapter patterns, can be applied to address the challenges of shifting boundaries and interface evolution in foundation model-based AI systems.
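To illustrate how the adapter pattern helps with a shifting boundary, the sketch below shows a foundation model being adapted to an interface previously served by a small task-specific model, so callers are isolated from the change. All class and method names here are illustrative, not from the paper.

```python
# Minimal sketch: adapter pattern for a boundary shift, under the assumption
# that a foundation model absorbs a summarization component.
from typing import Protocol

class Summarizer(Protocol):
    # The stable interface the rest of the system depends on.
    def summarize(self, text: str) -> str: ...

class LegacySummarizerModel:
    # Small task-specific model with its own connector style.
    def run_inference(self, payload: dict) -> dict:
        return {"summary": payload["text"][:20]}

class FoundationModelClient:
    # Stand-in for a foundation model API call.
    def complete(self, prompt: str) -> str:
        return "FM summary of: " + prompt

class FoundationModelSummarizerAdapter:
    """Converts the foundation model's connector into the Summarizer interface."""
    def __init__(self, fm: FoundationModelClient):
        self.fm = fm

    def summarize(self, text: str) -> str:
        return self.fm.complete(f"Summarize: {text}")
```

Because callers program against `Summarizer`, swapping `LegacySummarizerModel` for the adapter requires no change outside the adapted component, which is exactly the modifiability the paper argues for.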
A pattern-oriented reference architecture is proposed for designing responsible and adaptable foundation model-based AI systems. The architecture comprises three layers:
· Supply Chain Layer: The supply chain layer encompasses all the components involved in developing and acquiring the system. When third-party components are procured, their supply chain details, including responsible AI metrics or verifiable RAI credentials, can be associated with a bill of materials (BOM), and this procurement information can be stored in an RAI BOM registry. To protect the privacy of sensitive data, privacy-preserving techniques, such as differential privacy, can be employed. To ensure traceability, the co-versioning registry pattern can be utilized to track the versions of AI artifacts, such as foundation models, fine-tuned models, and distilled small models.
· System Layer: The system layer comprises both AI and non-AI components, with the foundation model serving as a critical component of the AI system. While using a foundation model from a large technology company can be cost-effective, it may raise concerns regarding reliability, ethics, and data privacy. To address these issues, a fine-tuned foundation model allows local retraining with domain-specific knowledge, though responsible AI challenges may persist, while a sovereign foundation model ensures full ownership and responsible AI but requires significant investment. As foundation models gradually absorb other components, adaptability and modifiability can be ensured by applying the microkernel pattern to isolate changes to specific components and the adapter pattern to convert component connectors into specific interfaces when a component is absorbed by the foundation model. To prevent harmful dual use and to manage shared responsibility in the AI supply chain, developers should provide foundation models via APIs, imposing usage restrictions and preventing unauthorized bypassing of those restrictions through reverse engineering or system modification. The domain-specific knowledge base can be used to fine-tune the foundation model, provide additional knowledge data for inference, and verify, validate, or explain responses; it can consist of internal business data or external domain data. Prompt patterns and vector databases are commonly used to enhance input quality for the foundation model and guide its responses.
· Operation Infrastructure Layer: Verifier-in-the-loop is particularly valuable when accuracy and trustworthiness are paramount. A verifier is responsible for verifying or modifying the responses generated by the foundation model or providing feedback to agree or disagree with those responses. The verifier can be a human, such as a domain expert or user, a verification mechanism based on knowledge data, or even another AI system. An RAI black box can be used to ensure accountability and auditability of AI systems based on foundation models. The RAI black box enables retrospective analysis of accountability following near misses and incidents by recording critical data in an immutable data ledger, such as a blockchain. This includes information such as the input and output of foundation and small AI models, the versions of foundation models and small (distilled) AI models, and more.
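The co-versioning registry pattern from the supply chain layer can be sketched as a small registry that records the versions of related AI artifacts together, so any deployed combination can be traced. The schema and artifact names below are illustrative assumptions, not a specification from the paper.

```python
# Minimal sketch of a co-versioning registry: each release records the
# co-deployed versions of its AI artifacts (foundation model, fine-tuned
# model, distilled small model, ...) as one traceable unit.
from dataclasses import dataclass

@dataclass
class CoVersionRecord:
    release: str
    artifacts: dict  # artifact name -> version string

class CoVersioningRegistry:
    def __init__(self):
        self._records: list[CoVersionRecord] = []

    def register(self, release: str, **artifact_versions: str) -> None:
        # Record all artifact versions for this system release together.
        self._records.append(CoVersionRecord(release, dict(artifact_versions)))

    def lookup(self, release: str) -> dict:
        # Return the artifact versions that shipped with a given release.
        for rec in reversed(self._records):
            if rec.release == release:
                return rec.artifacts
        raise KeyError(release)
```

A production registry would persist these records and link them to the RAI BOM entries, but the traceability idea, versions of co-dependent artifacts tracked as one unit, is the same.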
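The RAI black box can likewise be sketched as an append-only, hash-chained log, a lightweight stand-in for the immutable ledger (e.g., a blockchain) the paper suggests: each entry records the model version plus input and output, and tampering with any earlier entry invalidates every later hash. Field names are illustrative.

```python
# Minimal sketch of an RAI black box as a hash-chained append-only log.
import hashlib
import json

class RAIBlackBox:
    def __init__(self):
        self._entries = []

    def record(self, model_version: str, prompt: str, response: str) -> None:
        # Chain each entry to the previous one via its hash.
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        body = {
            "model_version": model_version,
            "prompt": prompt,
            "response": response,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(body)

    def verify_chain(self) -> bool:
        # Recompute every hash; any tampering breaks the chain.
        prev = "0" * 64
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

An actual deployment would write to an external immutable ledger rather than in-process memory, but the auditability property, retrospective analysis over tamper-evident records, is what the pattern provides.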
Between the lines
This paper presents a responsible-AI-by-design reference architecture to address the challenges of responsible AI and architecture evolution in foundation model-based AI systems. To provide concrete guidance on architecture design, we are working on a taxonomy of foundation model-based systems to capture the key characteristics of foundation models and the major design decisions for the architecture of foundation model-based systems.