
✍️ By Sun Gyoo Kang1
Disclaimer: The views expressed in this article are solely my own and do not reflect my employer’s opinions, beliefs, or positions. Any opinions or information in this article are based on my experiences and perspectives. Readers are encouraged to form their own opinions and seek additional information as needed.
Editor’s Note: ISED’s new Implementation guide for managers of Artificial intelligence systems offers practical governance strategies despite Canada’s stalled AI legislation. The Guide, complementing ISED’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, provides actionable frameworks across five key principles: Safety (comprehensive risk assessment), Accountability (robust policies and procedures), Human Oversight & Monitoring (preventing autonomous operation), Transparency (clear AI identification), and Validity & Robustness (ensuring reliable performance across conditions). While the absence of binding regulations like Bill C-27 leaves significant gaps, the Guide serves as a valuable educational resource with international alignment, detailed best practices, and a repository of standards that may function as a de facto benchmark for responsible AI management in Canada’s evolving regulatory landscape.
Introduction
Think about your workday. Chances are, generative Artificial Intelligence (“Generative AI” or “Gen AI”) is already playing a role, even if you don’t realize it. From smarter hiring processes2 to chatbots that handle customer questions,3 Artificial Intelligence (“AI”) is becoming deeply ingrained in modern organizations. But with this power comes responsibility. How do we ensure these complex AI systems are safe, trustworthy, and, ultimately, good for everyone?
Canada was among the first to grapple with these questions, initially pushing for AI regulation with Bill C-27, the Artificial Intelligence and Data Act (“AIDA”).4 That particular bill might be dead or on hold, but the conversation about responsible AI hasn’t stopped. In fact, remember that Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems5 (“Code”) by Innovation, Science and Economic Development Canada (“ISED”) released for Gen AI developers and managers? It’s still very much relevant, and ISED just published the Implementation guide for managers of Artificial intelligence systems6 (“Guide”) which offers valuable guidance for managers even in the absence of AIDA.
The Guide complements the ISED Code and is specifically tailored for AI system managers. It intends to help them better understand what is expected of them. This column is a practical summary of the Guide for managers overseeing AI systems. It will explore key principles and actionable steps outlined in resources like implementation guides for AI to help organizations build a robust framework for AI governance and navigate the intricacies of the AI system lifecycle.
Scope, Principles and Procurement Management
According to ISED’s Code, a person managing AI system operations, including putting a system into operation, controlling its parameters, controlling access, and monitoring its operation, is a manager. A person developing, including methodology selection, collection and processing of datasets, model building, and testing, is a developer.
Although the Code focuses on generative AI, the Guide clarifies that it applies to AI system managers broadly, not just generative AI. The AI system lifecycle part of the guide explains that AI systems are complex technological systems made up of components like models and user interfaces. Their creation and operation involve various actors, such as those involved in data collection, model development, system validation, and post-deployment management. The deployment method, for example, through an API, can also impact the AI value chain and the roles within it.
The Code outlines six principles: Safety, Accountability, Transparency, Fairness & Equity, Human Oversight & Monitoring, and Validity & Robustness. While the Guide provides best practices for AI managers on most principles, it excludes Fairness & Equity, as the Code applies that obligation only to developers—not managers. This significantly reduces the burden on managers, especially given the lack of AI legislation in Canada and the fact that major AI model developers (e.g., OpenAI, Anthropic, Meta, Google) have not signed the Code (as of March 9, 2025). In practice, the only way for managers to ensure developers comply is through contractual agreements—an approach that is often challenging to enforce.
Nevertheless, the Guide reaffirms that a good procurement program based on a good risk management practice is key to success for AI system managers. Managers should establish these processes as a foundational step for responsible AI management.
Key considerations for good AI procurement management include:
- Developing standardized evaluation criteria that assess technical capabilities, ethical considerations, and alignment with organizational values on vendors.
- Requiring comprehensive vendor documentation on development, testing, and limitations.
- Creating cross-functional procurement committees with diverse stakeholders.
- Ensuring transparency from vendors regarding model architecture, training data, and performance.
- Evaluating vendors’ track record on responsible AI practices through formal due diligence.
- Addressing fairness and equity concerns by including relevant metrics and bias testing in vendor evaluation.
- Requiring vendors to demonstrate compliance with relevant regulations and standards.
By implementing these steps, managers can build a strong procurement process that supports responsible AI governance.
Safety
The Guide emphasizes that understanding the risks that can arise in an organization’s operational context is critical to the successful management of an AI system. The safety risks associated with AI systems can vary depending on their context of use and the actors managing those risks.
To mitigate risks and promote the safe use of AI systems, the Code recommends that managers of AI systems should perform a comprehensive assessment of reasonably foreseeable potential adverse impacts, including risks associated with inappropriate or malicious use of the system.
The Guide provides the following steps that managers may consider to implement this measure:
- Identify and assess risks that may arise from operating the system, including risks from intended uses, unintended but reasonably foreseeable uses, misuses, malicious uses, and other operational risks. These risks should be categorized according to their likelihood, who or what may be affected, and the severity of impacts, including their magnitude and reach. This assessment should be regularly reviewed and updated.
- Consider a range of potential risks, including bias, data protection and privacy risks, risks arising from using the system for misinformation or other malicious purposes, cyber security, compliance risks, and reputational risks.
- Diverse internal stakeholders (including human resources, information technology, legal, compliance, product, customer service, and business units) should be involved in risk assessment processes to ensure multiple organizational perspectives are considered.
- Identify and assess how fundamental rights may be negatively impacted by the operation of the AI system.
- Identify and assess how vulnerable groups (such as children, the elderly, or historically marginalized groups) may be negatively impacted by the operation of the AI system.
- Develop detailed impact scenarios across different user groups and use cases.
- Conduct structured workshops with diverse stakeholders to identify potential impacts, including potential second and third-order effects of system deployment.
- Implement regular horizon scanning for emerging risks and threat vectors, such as malicious attacks.
- Conduct testing to identify vulnerabilities in the AI system and its deployment environment, including adversarial testing and regular testing for malfunctions.
Implementing these best practices can help managers ensure the safe operation of AI systems within their organizations.
Accountability
The Guide states that after identifying potential risks, it is equally important for organizations to set in place policies and procedures to address those risks. This includes ensuring that employees tasked with maintaining the system, responding to incidents, engaging with end users, and monitoring operations are aware of this information. Establishing practices, policies, and procedures helps ensure that organizations and employees understand their responsibilities and can respond appropriately to incidents. Strong AI literacy across all levels of the organization is foundational to responsible AI governance and enables better risk management.
To establish these norms, it is important to establish and maintain a risk management framework (e.g., NIST’s AI Risk Management Framework). The Code recommends that managers of AI systems implement a comprehensive framework proportionate to the nature and risk profile of activities. This includes establishing policies, procedures, and training to ensure staff are familiar with their duties and the organization’s risk management practices.
The Guide provides the following steps that managers may consider to implement this measure:
- Develop and maintain a risk management framework that explains how identified risks are being mitigated (by the manager or by others in the value chain), with whom decision-making authorities lie, and expected response timelines to address risks.
- Set in place a policy identifying when to deactivate or cease the operations of systems, as well as a procedure for decommissioning systems in a manner that mitigates risk.
- Set policies for staff, including training, to socialize organizational expectations, procedures, and authorities if an incident occurs. This training should be regularly updated to reflect the evolving nature of AI risks and best practices.
- Provide role-specific training and upskilling opportunities. This could include general training on responsible generative AI use for all employees and specialized instruction for technical teams on AI development, deployment, and maintenance.
- Implement version control for the AI system and its components, and establish a formal change management process to track and assess the impact of updates and modifications.
- Maintain a centralized repository of all AI system documentation, including risk assessments, incident reports, system modifications, user feedback, and performance metrics, with an appropriate retention period.
- Provide clear user guidance, including acceptable use policies that outline appropriate system usage, prohibited activities, user responsibilities, and potential consequences of misuse. These guidelines should be easily accessible, written in plain language, and updated regularly.
The Guide also emphasizes that the organization’s risk assessment and management frameworks will require regular review and updates to integrate new information and ensure they continue to address organizational needs. AI evolves so rapidly that the framework should also be as flexible as possible to capture current and future risks related to AI.
Furthermore, to promote a culture of accountability, the Code recommends that AI system managers share information and best practices on risk management with firms playing complementary roles in the ecosystem.
The Guide suggests the following steps for implementing this:
- Publish de-identified risk assessment findings and mitigation strategies.
- Collaborate with other organizations to develop standardized risk assessment tools.
- Contribute to industry forums and working groups on AI risk management.
Implementing these best practices can help managers foster accountability within their organizations and contribute to a more responsible AI ecosystem.
Human Oversight & Monitoring
The Guide highlights that due to their position in the AI value chain, managers are best positioned to ensure that AI systems are not operating fully autonomously and that there is a human in the loop to monitor, update, and maintain the system’s operations. This oversight also enables the quick identification and resolution of incidents, preventing minor issues from escalating.
In this context, the Code recommends that managers of AI systems monitor their operations for harmful uses or impacts after they are made available, including through third-party feedback channels. They should also inform the developers and implement usage controls as needed to mitigate harm.
The Guide provides the following steps that managers may consider to implement this measure:
- Establish ongoing monitoring and evaluation procedures for deployed AI systems. This ensures continuous assessment of the system’s performance and impact.
- Develop automated detection systems for potential harmful uses. Automation can aid in the efficient identification of problematic activities.
- Monitor the AI system’s performance across different demographic groups or relevant categories. This helps in identifying and addressing potential biases or disparate impacts.
- Monitor user behaviour regarding the system and provide a place for user feedback on their experience of it. Understanding how users interact with the system and gathering feedback is crucial for identifying issues and areas for improvement.
- Collect and analyze user feedback, incident reports, and other relevant data. This data provides valuable insights into the system’s operation and potential problems.
- Conduct regular evaluations of model performance to detect and address model drift. Over time, model drift can decrease accuracy and reliability, so regular evaluation is essential.
- Create multiple feedback channels for users and affected parties. Providing various avenues for feedback ensures that concerns can be easily raised.
- Establish regular review procedures for reported incidents. Timely review is necessary for effective resolution and prevention of future occurrences.
- Implement mechanisms to address and mitigate harmful uses or impacts. It is vital to have processes in place to respond to and reduce negative consequences.
- Maintain incident response teams with clear escalation procedures. Well-defined teams and procedures ensure efficient handling of incidents.
- Establish protocols and communication channels for informing developers about identified issues or performance concerns, including sharing relevant monitoring data. Effective communication with developers is crucial for addressing the AI system’s underlying problems.
Implementing these best practices can help managers ensure that AI systems are subject to appropriate human oversight and monitoring, contributing to their safe and responsible operation.
Transparency
The Guide emphasizes that managers are well-positioned to provide transparency regarding the system to users due to their position in the AI value chain. Robust transparency practices are important for promoting trust, enhancing user satisfaction, mitigating risks of misuse and malfunction, and ensuring the system continues to perform as intended. To enhance transparency, the Code recommends that managers of AI systems ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.
The Guide provides the following steps that managers may consider to implement this measure:
- Develop and implement standardized AI identification protocols for all interaction types (e.g., chatbots, email, and phone), including consistent disclosure notices. This will ensure that users are always aware when interacting with an AI.
- Provide free and accessible information to users about the nature and capabilities of AI systems, including information on how they are developed, operated, and maintained. This helps users understand what the AI can and cannot do.
- Consider whether user interface choices, for example, the use of personal pronouns, self-attributions of mental states, or emotions by user-facing chatbots, are required and appropriate for the use case. This encourages careful consideration of how AI is presented to users.
- Establish processes to document when content was generated by an AI system. For example, add standardized tags to AI outputs when they are stored or distributed. This will make it clear when content is AI-generated.
Furthermore, the Guide highlights that while it is crucial to maintain transparency regarding when a product or service using AI could be mistaken for a human, it is equally important to ensure that end users know when AI is being used to shape their experience and how the AI system is contributing to their experience. This promotes user choice and understanding of their interactions with AI. It is also recommended to provide transparency regarding the capabilities, risks, and limitations of the system, and the manager’s expectations for how users can use the system, as well as what is considered by the company to be misused.
Implementing these best practices can help managers foster transparency in their AI systems, which can lead to greater trust and understanding among users.
Validity & Robustness
The Guide explains that an AI system’s performance is valid when it performs as intended for its intended uses. A system is robust when it performs as intended across many different scenarios, including diverse or unusual scenarios.
Therefore, validity and robustness refer to the optimal and reliable performance of the system under various conditions. To ensure AI systems perform optimally and reliably, managers should consider testing their system’s performance against diverse real-world inputs and under adverse or challenging conditions, retesting after significant updates, identifying and documenting the system’s limitations, and verifying critical outputs—especially in high-stakes applications where errors could have significant consequences.
While the measures discussed previously are recommended for managers of both public-facing and non-public-facing AI systems, the Code additionally recommends that managers of public-facing AI systems take further steps to protect the validity and robustness of the system’s operations by performing an assessment of cyber-security risk and implementing proportionate measures to mitigate risks, including data poisoning.
The Guide provides the following steps that managers of public-facing AI systems may consider to implement this measure:
- Implement comprehensive security testing protocols.
- Create automated security scanning tools and procedures.
- Establish regular security audit procedures.
- Develop incident response plans for security breaches.
- Maintain security monitoring systems for early threat detection.
- Adopt general cybersecurity best practices.
By implementing these best practices, particularly the focus on testing under diverse conditions and robust cybersecurity measures for public-facing systems, managers can work to ensure the validity and robustness of their AI systems.
Resources
The Guide includes a Repository of relevant resources for AI system managers. This section is a starting point for organizations seeking information on responsible AI governance. The Guide lists a subset of resources that may be of particular interest to AI system managers who want to implement the measures found in the Code.
- International Standards Organization (ISO): ISO/IEC 42001:2023 – AI management system
- Digital Governance Standards Institute CAN/DGSI 101 – Ethical Design and Use of Artificial Intelligence by Small and Medium Organizations (2025)7
- National Institute for Standards and Technology (NIST): NIST AI Risk Management Framework (2023)8
- National Institute for Standards and Technology (NIST): NIST AI 600-1 AI RMF Generative AI Profile (2024)9
- EU AI Office: Living Repository of AI Literacy Practices10 (Living database).
- EU AI Office: General-Purpose AI Code of Practice11 (2025 Draft)
- Organisation for Economic Co-operation and Development (OECD): OECD AI Incidents Monitor (AIM)12 (Living database).
- Organisation for Economic Co-operation and Development (OECD): Catalogue of Tools & Metrics for Trustworthy AI13 (Living database).
- Organisation for Economic Co-operation and Development (OECD): Framework for the Classification of AI systems14 (2022).
- Massachusetts Institute of Technology (MIT): MIT AI Risk Repository15 (Living database)
- AI Standards Hub: Standards Database16 (Living Database)
- UK Department for Science, Innovation and Technology’s (DSIT): AI Management Essentials (AIME) tool17 (2024 Draft)
Conclusion
In conclusion, while the Guide offers a valuable collection of best practices for navigating the complex landscape of AI governance, its voluntary nature cannot be ignored, especially in light of the demise of Bill C-27 AIDA. The Guide rightly points out that it is “not a checklist or a rigid set of steps,” encouraging organizations to tailor their approach based on their specific context. This flexibility is a strength, allowing for practical implementation across diverse company profiles and use cases. Furthermore, the Guide’s alignment with international initiatives like the G7 Hiroshima Process highlights a global push for responsible AI.
However, the absence of legally binding obligations, especially with the dissolution of Canada’s proposed AI legislation, leaves a significant gap. While the Guide urges managers to familiarize themselves with existing laws related to privacy, competition, and consumer protection, it cannot replace the specific legal framework that AIDA might have provided. The Guide serves as a crucial educational tool, offering “more granular advice and suggestions” aligned with the Code and emphasizing proactive risk management, accountability, transparency, human oversight, and the validity and robustness of AI systems. The inclusion of a repository of relevant resources further empowers managers to seek out best practices and standards.
Despite the lack of legal teeth, this Guide offers a beacon of hope. It provides actionable steps for managers genuinely committed to responsible AI development and deployment. It underscores that responsible AI governance begins with foundational steps like establishing a clear vision and conducting due diligence. The detailed best practices for safety, accountability, human oversight, transparency, and validity & robustness offer concrete pathways for organizations to mitigate risks and build trustworthy AI systems.
This brings us to a critical juncture. In the absence of dedicated AI legislation in Canada, can the Code, supported by this Guide, effectively serve as a temporary, de facto standard for responsible AI management? Will organizations driven by ethical considerations, reputational risks, or potential future regulations adopt and adhere to these guidelines?
Footnotes
- Law and Ethics in Tech Law and Ethics in Tech | Medium ↩︎
- Charlotte Lytton, AI hiring tools may be filtering out the best job applicants, BBC Worklife (Feb. 14, 2024), https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination (Last visited Mar. 8, 2025) ↩︎
- Hannah Wren, AI in Customer Service: All you need to Know, Zendesk (Oct. 10, 2024), https://www.zendesk.com/blog/ai-customer-service/ (Last visited Mar. 9, 2025) ↩︎
- Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess., 44th Parl., 2022 (Can.) ↩︎
- Innovation, Sci. & Econ. Dev. Can., Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (2023), https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems (Last visited Mar. 9, 2025) ↩︎
- Innovation, Sci. & Econ. Dev. Can., Implementation Guide for Managers of Artificial Intelligence Systems (2025), https://ised-isde.canada.ca/site/ised/en/implementation-guide-managers-artificial-intelligence-systems.(Last visited Mar. 7, 2025) ↩︎
- Digital Governance Council, Ethical Design and Use of Artificial Intelligence by Small and Medium Organizations – CAN/DGSI 101, (2025), https://dgc-cgn.org/standards/find-a-standard/standards-in-automated-decision-systems-ai/cisoc101/ (Last visited Mar. 9, 2025) ↩︎
- National Institute of Standards and Technology, AI Risk Management Framework (Jan. 2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (Last visited Mar. 9, 2025) ↩︎
- National Institute of Standards and Technology, Artificial Intelligence Risk Management
Framework: Generative Artificial Intelligence Profile (Jul. 2024), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf (Last visited Mar. 9, 2025) ↩︎ - European Commission, Living Repository to foster learning and exchange on AI literacy (Feb. 4, 2025) https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy (Last visited Mar. 9, 2025) ↩︎
- European Commission, General-Purpose AI Code of Practice, https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice (Last visited Mar. 9, 2025). ↩︎
- Organisation for Economic Co-operation and Development, OECD AI Incidents Monitor (last visited Mar. 9, 2025). ↩︎
- Organisation for Economic Co-operation and Development, Catalogue of Tools & Metrics for Trustworthy AI, https://oecd.ai/en/catalogue/overview (last visited Mar. 9, 2025). ↩︎
- Organisation for Economic Co-operation and Development, OECD Framework for the Classification of AI systems (Feb. 22, 2022), https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html. ↩︎
- MIT AI Risk Initiative, AI Risk Repository, https://airisk.mit.edu/ (last visited Mar. 9, 2025). ↩︎
- AI Standards Hub, Standards Database, https://aistandardshub.org/ai-standards-search/ (last visited Mar. 9, 2025). ↩︎
- HM Government, Guidance for using the AI management essentials tool, https://www.gov.uk/government/consultations/ai-management-essentials-tool/guidance-for-using-the-ai-management-essentials-tool (last visited Mar. 9, 2025). ↩︎