AI in action: WRC guidelines

18 December 2025

By Roisin O'Brien, Naomi Pollock and Shannon Madden

At a glance

  • Artificial Intelligence (AI) usage in workplace disputes is increasing, a trend also noted by the Workplace Relations Commission (WRC).
  • The recent case of Oliveira v. Ryanair DAC is understood to be the first WRC decision in which AI use was expressly identified by the Adjudication Officer.
  • Shortly after, the WRC issued guidance on responsible AI use for preparing written submissions in WRC proceedings.
  • The guidance highlights AI’s benefits, especially for lay litigants, but warns against risks from inaccurate or unverified outputs.
  • It recommends that parties indicate in their submissions whether AI was used in their preparation, and sets out practical steps to mitigate risks.

Background

Oliveira v. Ryanair DAC 

The Claimant, a former Ryanair DAC cabin crew member, brought a discrimination claim for EUR170,000 against his employer, alleging discrimination on the grounds of family status and race in relation to promotion, training, and employment conditions. He further claimed victimisation, harassment, and sexual harassment, citing false allegations made against him by his supervisor, including racist remarks, misconduct accusations, and threats. The Claimant contended that, although these allegations were disproven, they blocked his career progression and led to coordinated retaliation against him.

The Respondent raised concerns that the Claimant’s submissions may have been generated with the assistance of artificial intelligence. The Claimant initially dismissed this suggestion as 'baseless', but on the second day of the hearing he admitted that AI may have been used in preparing his submissions and became defensive about this use.

The Claimant’s submissions relied on citations that were irrelevant, inaccurately quoted, and in some instances entirely fictitious. The Adjudication Officer was not concerned by the use of AI itself. However, she emphasised that both the WRC and the employer had spent significant time searching for 'phantom citations' with no connection to Irish employment law, and that the facts of several real cases had been misrepresented. She highlighted the parties’ responsibility to bring relevant and accurate submissions before the WRC and described the attempt to rely on phantom citations as 'egregious and an abuse of process'. The complaint was ultimately dismissed on separate grounds: the Claimant failed to evidence that the alleged discriminatory behaviour had occurred.

The case illustrates the practical risks posed by unverified AI-generated material. Fictitious or inaccurate citations can mislead the opposing party, delay proceedings and complicate the Adjudication Officer's role.

Best practices for parties using AI tools

The guidance outlines several practical steps for litigants when using AI in their WRC submissions:

Ensure full understanding of submissions

Parties to WRC proceedings are responsible for the accuracy of their own submissions.

Litigants must understand the contents of the final document they submit to the WRC and be able to explain those submissions if questioned or cross-examined. They should also take care to ensure that the use of AI does not obscure the factual basis of the claim or introduce unnecessary complexity.

Data protection and confidentiality

The guidance emphasises the data protection concerns associated with uploading sensitive information to publicly accessible AI tools, which may retain or reuse personal and commercially sensitive data. It advises litigants to exercise caution and avoid including details such as names, PPS numbers and health information when using these platforms.

Limit AI use for strategy

Litigants are advised not to rely on AI for legal strategy or predicting the outcome of their case, as AI cannot assess the specific strengths and weaknesses of a claim or apply the nuanced judgment required in legal decision-making. Strategic choices should always be grounded in informed legal advice and professional expertise, supported by a full understanding of the facts and applicable law.

Optional disclosure of AI use

Additionally, the guidance encourages a degree of transparency when using AI tools. Parties who have used AI tools in the preparation of their submissions may indicate this by including a statement within the document. Although optional, such disclosure allows adjudicators to assess the content with an appreciation of its origin and potential limitations. This aligns with international trends, as several jurisdictions have adopted similar expectations in judicial and quasi-judicial contexts. The WRC’s approach highlights that artificial intelligence may be used as a supportive tool, provided its limitations are recognised and its role is openly acknowledged.

The broader context: Judicial caution in the Irish legal system

The guidance should also be understood within the wider context of the Irish legal system’s engagement with artificial intelligence. Recent commentary notes that judges may use artificial intelligence for certain administrative or organisational tasks, but not for legal reasoning or any function capable of influencing substantive decision making. This mirrors the WRC’s approach and reflects a growing understanding that AI may increase efficiency in limited ways, but that the core functions of legal interpretation, analysis, and judgment must remain with human decision makers.

Across the Irish legal landscape, institutions appear to be embracing the usefulness of AI while recognising the importance of human oversight. For example, the Law Society of Ireland has issued a range of resources on the ethical and responsible use of AI and on managing its risks, while the Bar of Ireland has published an ethical toolkit to guide the use of artificial intelligence in legal practice.

Key takeaways

AI is a useful tool, but it does not bear responsibility for inaccurate legal submissions; that responsibility rests with the parties.

The Oliveira case demonstrates the real risks associated with unverified AI-generated content, including fictitious citations and misleading arguments that can delay proceedings and undermine credibility.

All AI-generated material should be thoroughly checked for accuracy and relevance before submission. Finally, AI tools should be used responsibly and transparently, with their role clearly acknowledged to ensure they support, rather than undermine, the integrity of the adjudication process.

DLA Piper resources

DLA Piper has developed a comprehensive suite of AI resources to support our clients, including the following:

  • DLA Piper AI and Employment Podcast: A series exploring key employment law issues arising from AI, featuring insights and recommendations from DLA Piper’s global team.
  • AI Laws of the World: An overview of AI laws and proposed regulations in over 40 countries, highlighting significant legislative developments, regulatory proposals, and official guidelines.
  • DLA Piper's AI Academy: A global programme designed to help organisations navigate the risks and opportunities of AI through practical workshops, expert insights, and real-world case studies, supporting businesses at every stage of their AI journey.

For further details on these resources, please get in touch with our team.