AI in employment practice: Navigating risks and ensuring compliance

16 March 2026

By Olufunmilola Oyinkansola Binuyo and Caleb Nmeribe

At a glance

  • AI is being increasingly used in Nigerian workplaces for recruitment, performance management, and employee monitoring.
  • Key risks include bias, opaque decision-making, privacy breaches, and potential discrimination claims.
  • Employers are responsible for AI-driven decisions and must ensure fairness and transparency.
  • Safeguards such as risk assessments, human oversight, and governance frameworks help mitigate legal risks.

Background

AI is rapidly reshaping Nigerian workplaces. From recruitment and performance management to employee monitoring, AI-powered tools are being integrated into HR practices to speed up processes and improve decision-making. However, the use of AI brings legal and ethical challenges that current Nigerian employment laws do not yet fully address. This article explores the legal framework governing AI use, contemporary trends, and key risks, including algorithmic bias and privacy breaches. It also offers practical recommendations for mitigating legal risks and safeguarding employee rights while adopting AI.

Legal framework

There is currently no primary legislation governing AI use in Nigeria. This regulatory gap creates uncertainty for employers and employees alike. However, within the employment context, two sets of laws largely apply:

  • The Nigerian Constitution, which guarantees a range of fundamental rights (for example, the right to freedom from discrimination).
  • The Nigeria Data Protection Act and its General Application and Implementation Directive 2025, which introduce comprehensive rules for processing personal data, particularly in the context of AI use.

Current trends and the role of AI in HR

The use of AI is no longer theoretical; it is gaining momentum in Nigerian workplaces. In view of the efficiency of AI systems, HR departments have begun adopting AI for:

  • Resume screening: AI is used to filter prospective employees based on keywords and qualifications.
  • Psychometric scoring: AI is used to evaluate personality traits and cognitive skills.
  • Video interview analysis: AI is utilised to assess facial expressions and body language of prospective employees during virtual interview sessions.
  • Performance analytics: AI tracks work patterns over time and predicts employee success or shortcomings in the workplace.

While these uses improve operational efficiency, they also introduce significant risks to workplace fairness and legal compliance. These risks are discussed below.

Key legal risks in AI-driven employment processes

Discriminatory outcomes

AI tools may discriminate against individuals based on protected characteristics, including ethnic group, place of origin, sex (gender), religion, circumstance of birth or political opinion. The Nigerian Constitution prohibits restrictions or deprivations based on these characteristics, particularly where other Nigerian citizens are not subjected to the same restrictions or deprivations.

Discriminatory outcomes flowing from AI use are well illustrated by Amazon’s AI hiring tool. In 2014, Amazon developed an AI tool intended to automate and streamline resume screening. The system, however, exhibited gender bias because it was trained on historical, male-dominated hiring data. This highlights how AI can reinforce existing social and institutional inequalities through biases inherent in its training data.

Opaque decision-making

Many AI algorithms operate as 'black boxes', providing little transparency on how their decisions are reached. Within the context of employment, this limits accountability and makes it difficult for employers to demonstrate fairness in AI use or for employees to challenge decisions that impact them negatively.

Privacy breaches

AI tools often require extensive personal data processing, including behavioural and biometric information. Processing without a lawful basis or consent risks violating the Nigeria Data Protection Act and the constitutional right to privacy.

Unfair dismissal or discrimination claims

Given these and other risks, AI-driven decisions can lead to discrimination in recruitment, wrongful termination, or denial of promotion and other benefits. Employees and applicants alike can bring claims in court to challenge such decisions.

Safeguards and compliance measures employers should implement

The absence of specific AI legislation does not absolve employers of liability where AI is used without due care. Nigerian employers using AI for employment purposes should consider the following practical measures:

  • Conduct algorithmic risk assessments and Data Protection Impact Assessments (DPIAs): Algorithmic risk assessment involves evaluating the potential impact and risks of the algorithms within an AI system before it is deployed, and identifying best practices to mitigate those risks. This helps employers understand how these systems reach decisions that affect employees. DPIAs, on the other hand, assess personal data processing activities (in this case, those carried out using AI).
  • Document the lawful basis for data processing and obtain consent where needed: Under the Nigeria Data Protection Act, personal data may only be processed under certain lawful bases, such as consent, legitimate interest or contractual necessity. Where the personal data of employees and candidates is processed using AI, the lawful basis relied on must be identified and documented.
  • Vendor and product vetting: Before an AI vendor is engaged or its product deployed, both should be adequately vetted to confirm that the product does not expose the organisation to undue risk. Objective data and client testimonials should be considered alongside representations made by the vendor.
  • Introduce human oversight: Employers should avoid fully automated decisions. Humans should review all AI recommendations, especially for hiring and termination decisions.
  • AI governance: Establishing AI governance is important for effective management of AI-associated risks. Governance in this respect involves structured processes, policies and oversight mechanisms that ensure AI use aligns with the law, organisational values and ethical standards.

Conclusion

AI integration in the workplace brings efficiency. However, without safeguards it can quickly damage an organisation’s bottom line and reputation. In a court of law or the court of public opinion, employers bear responsibility for the impact of AI systems on employees. The benefits of these systems must therefore be balanced against legal obligations to employees. Robust compliance with the law and best-practice standards is not a nice-to-have; it is a must-have.

Should you require advice with respect to employment-related issues, please do not hesitate to contact us at employmentoollp@olajideoyewole.com.