
Ontario Human Rights Commission publishes Human Rights AI Impact Assessment Tool
At a glance
- Prorogation ended the latest session of the Canadian Parliament and is expected to last until 24 March 2025; all bills in progress, including Bill C-27, died as a result.
- Canada's federal privacy regime remains unchanged for the foreseeable future, without the anticipated modernisations from Bill C-27.
- The only artificial intelligence (AI)-related guidance at the federal level remains the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, with no formal enforcement body.
- Provinces and territories have been slow to enact AI regulations, with Ontario being the only province with specific AI legislation.
- The Ontario Human Rights Commission (OHRC) introduced the Human Rights AI Impact Assessment (HRIA) to help organisations evaluate and mitigate human rights impacts of AI systems, promoting responsible AI development and transparency.
At the time of writing, the latest session of the Canadian Parliament has been terminated by prorogation, which is expected to last until 24 March 2025. This formal suspension of Parliament's activities terminated all bills in progress, including Bill C-27, introduced in 2022. Bill C-27 was set to replace and amend various federal statutes, as well as to establish Canada's first legislative framework on AI, the Artificial Intelligence and Data Act.
This means that Canada's federal privacy regime will remain as-is for the foreseeable future, without the modernisations and improvements that many were anticipating. Even if Bill C-27 were re-introduced and picked up where it left off, it is far from certain that any new laws would be passed before the next federal election, which must take place by late October 2025.
As a result, the only AI-related guidance at the federal level remains the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. Currently, there is no tribunal, commission, court or other adjudicative body with the formal role of enforcing or administering the Voluntary Code.
At the provincial and territorial level, legislatures have been slow to enact provisions regulating the development and use of AI-based applications. Ontario is the only province that has enacted legislation aimed at regulating AI use within its boundaries.
The OHRC's Human Rights AI Impact Assessment tool
The OHRC has introduced the HRIA to help organisations evaluate the potential human rights impacts of their AI systems. The tool is designed to ensure that AI technologies are developed and used in ways that respect human rights and prevent discrimination. By providing a structured approach, it helps organisations identify and mitigate risks, promoting transparency and accountability in AI deployment. The initiative underscores the importance of responsible AI development and aims to foster trust and confidence in AI technologies.
The HRIA is described as a 'guide' and states that it 'does not constitute legal advice and does not provide a definitive legal answer regarding any adverse human rights impacts, including violations of federal or provincial human rights law or other relevant legislation'. It also states that 'an organisation or individual will not be protected from liability for adverse human rights impacts, including unlawful discrimination, if they claim they complied with or relied on the HRIA.' In other words, organisations should be aware that compliance with the HRIA will not act as a complete shield from liability.
Structure of the HRIA
The structured approach provided by the tool includes several key elements:
- A comprehensive framework for organisations to systematically assess the human rights impacts of their AI systems, identifying potential risks and implementing measures to mitigate them.
- Integration of human rights considerations throughout the entire lifecycle of an AI system, from design and development to deployment and operation, so that human rights are upheld at every stage.
- Involvement of human rights experts and engagement with diverse communities, ensuring that the assessment is thorough and considers a range of perspectives and potential impacts.
- Support for compliance with human rights laws, both in Ontario and Canada, aligned with internationally recognised AI principles.
- Complementarity with other AI assessment tools that address issues such as privacy, data accuracy and procedural fairness, providing a holistic approach to a wide range of human rights and legal concerns.
The HRIA is structured into two main parts: Impact and Discrimination (Part A) and Mitigation (Part B).
Part A: Impact and Discrimination includes self-assessment questions divided into five sections:
- Purpose of the AI system: Evaluates the AI system's function, objectives, beneficiaries, and potential harm.
- High risk for human rights violation: Assesses if the AI system influences decisions, uses biometric tools, tracks behaviour, or impacts historically disadvantaged groups.
- Differential treatment: Identifies if the AI system differentiates based on protected grounds and whether such treatment is permissible.
- Accommodation: Ensures the AI system is accessible and relevant to all parties, including diverse populations and children.
- Results: Ranks the AI system into one of six categories based on the assessment.
Part B: Mitigation outlines strategies to address potential human rights issues:
- Internal procedures: Encourages regular human rights reviews throughout the AI lifecycle.
- Explainability, disclosure, and data quality: Focuses on transparency, data accuracy, and making AI decisions comprehensible.
- Consultations: Involves engaging impacted people in the AI system's design and use.
- Testing and review: Emphasises frequent testing and review for high-risk AI systems.
Organisations are advised to seek legal counsel before using the HRIA to ensure that confidentiality, privacy, security, and legal privilege are maintained. The HRIA is a valuable tool for assessing the human rights impacts of AI systems and supporting compliance with human rights law.