Regulation governing the use of artificial intelligence in Peru

At a glance

  • Supreme Decree N.º 115-2025-PCM regulates the use of artificial intelligence (AI) for Peru’s economic and social development.
  • The regulation applies to public administration, state-owned companies, and the private sector, excluding personal use and national defence and security.
  • AI systems are classified by risk: prohibited (serious and irreversible impacts on fundamental rights), high risk (affecting life, dignity, freedom, security, or basic rights), and acceptable (all other compliant uses).
  • Developers of high-risk AI must maintain records, ensure human oversight, and implement security, privacy, and transparency policies.
  • The use of AI in recruitment, evaluation, hiring, or dismissal is classified as high risk and requires human oversight and algorithmic transparency.

Supreme Decree N.º 115-2025-PCM regulates the use of AI for Peru’s economic and social development. The regulation applies to public administration, state-owned companies, and the private sector, excluding personal use and national defence and security. AI systems are classified by risk:

  • Prohibited: Uses that generate serious and irreversible impacts on fundamental rights.
  • High risk: Uses that affect life, dignity, freedom, security, or basic rights; these are allowed only under conditions of supervision and control.
  • Acceptable: All other uses that comply with the applicable regulations.
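The three-tier classification above can be sketched as a simple triage that checks the most restrictive tier first. The `RiskTier` enum and `classify` helper below are hypothetical illustrations of the decree's categories, not terminology or logic taken from the regulation itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The decree's three risk tiers (hypothetical naming)."""
    PROHIBITED = "prohibited"   # serious, irreversible impacts on fundamental rights
    HIGH_RISK = "high_risk"     # affects life, dignity, freedom, security, or basic rights
    ACCEPTABLE = "acceptable"   # all other uses that comply with applicable regulations


def classify(irreversible_rights_impact: bool, affects_basic_rights: bool) -> RiskTier:
    """Triage a system into a tier, checking the most restrictive condition first."""
    if irreversible_rights_impact:
        return RiskTier.PROHIBITED
    if affects_basic_rights:
        return RiskTier.HIGH_RISK
    return RiskTier.ACCEPTABLE
```

Under this sketch, a hiring screener that affects access to work would map to `HIGH_RISK`, triggering the supervision and control conditions described below.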

Developers of high-risk AI must:

  • Maintain an updated record of system operation, data used, and social impacts.
  • Incorporate mechanisms of human oversight in sectors such as health, education, justice, finance, and access to basic services.
  • Establish policies on security, privacy, and transparency, in addition to ensuring accountability.
  • Provide staff training on risks associated with the use of AI.

In high-risk AI systems, there must be algorithmic transparency, which means that:

  • They must provide clear and straightforward information about the system’s purpose, use, functions, and possible decisions, as long as trade secrets are not revealed.
  • A visible labelling system may be added to inform users when a system is using AI, except for internal processes that do not impact rights or services.
  • If the results impact human rights, a detailed explanation of the factors that led to the decision must be provided.
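One way to keep these disclosure obligations auditable is to capture them in a single record per system. The `TransparencyNotice` structure and `explain` renderer below are a hypothetical sketch of the disclosures listed above; the field names are assumptions, not terms defined by the decree.

```python
from dataclasses import dataclass, field


@dataclass
class TransparencyNotice:
    """Hypothetical record of the disclosures required for a high-risk system."""
    purpose: str                  # what the system is for
    use: str                      # how it is used in practice
    functions: list               # what the system can do
    possible_decisions: list      # decisions it may produce
    uses_ai_label: bool = True    # visible label telling users AI is in use
    decision_factors: list = field(default_factory=list)  # required when results impact human rights


def explain(notice: TransparencyNotice) -> str:
    """Render a plain-language explanation, omitting anything that would reveal trade secrets."""
    lines = [f"Purpose: {notice.purpose}", f"Use: {notice.use}"]
    if notice.decision_factors:
        lines.append("Decision factors: " + ", ".join(notice.decision_factors))
    return "\n".join(lines)
```

For a rights-impacting result, populating `decision_factors` makes the detailed explanation part of the same record that feeds the user-facing notice.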

The use of AI in recruitment is considered high risk. Prior to implementation, a voluntary pre-impact analysis can be performed; if it identifies risks or the possibility of erroneous automated decisions, proactive steps must be taken to mitigate the negative impacts.

Companies developing or using AI must:

  • Keep records of high-risk systems.
  • Implement ethics and transparency protocols.
  • Train their employees in the safe and ethical use of AI.
  • Ensure that labor impact decisions have human oversight with the ability to stop or override results.
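The last requirement above, that a human can stop or override AI results on labor-impact decisions, can be sketched as a human-in-the-loop gate. The function below is a minimal illustration under assumed names (`apply_labor_decision`, `human_review`); it is not an implementation prescribed by the decree.

```python
from typing import Callable, Optional


def apply_labor_decision(
    ai_recommendation: str,
    human_review: Callable[[str], Optional[str]],
) -> str:
    """Hypothetical oversight gate for labor-impact decisions.

    The reviewer sees the AI recommendation and may return a replacement
    decision (override) or None to accept it; raising an exception inside
    the reviewer stops the process entirely.
    """
    overridden = human_review(ai_recommendation)
    return overridden if overridden is not None else ai_recommendation
```

The key design point is that the AI output is never applied directly: every result passes through the reviewer callable, which holds both the override and the stop paths.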

Implementation will take place at different times depending on the sector / type of company:

  • Health, education, justice, economy, and finance: September 2026.
  • Transport, commerce, and labor: September 2027.
  • Production, agriculture, energy, and mining: September 2028.
  • Other sectors: September 2029.
  • For SMEs and innovative startups: Two years for small enterprises and three years for microenterprises.