Ofcom Fines Itai Tech £55,000 for Age Verification Violation

Deepfake 'nudify' site fined £55,000 over lack of age checks

London, November 20, 2025
Ofcom has fined UK-based Itai Tech Ltd £55,000 for failing to enforce legally required age verification on its AI-driven "nudify" deepfake website, which generates fake nude images by digitally removing clothing from photos. The regulatory action exposes serious gaps in safeguarding children online from harmful AI-generated explicit content.

Details of the Fine and Company Response
The £55,000 penalty comprises £50,000 for inadequate age assurance measures and £5,000 for the company's failure to cooperate fully with Ofcom's investigation, conducted under the UK Online Safety Act. Shortly after inquiries began, Itai Tech Ltd blocked access to its platform for UK users and applied to have the company struck off the UK company register, suggesting an attempt to avoid regulatory enforcement.

Regulatory Context of the Online Safety Act
In effect since July 2025, the Online Safety Act requires firms hosting pornographic or other adult-oriented online content to implement robust age verification to prevent access by minors. This enforcement is Ofcom's second fine under the legislation, following its penalty against 4chan for failures to restrict illegal content. As the UK's internet regulator, Ofcom monitors compliance and takes enforcement action against violations to protect vulnerable users.

Wider Industry Impact and Ongoing Investigations
The investigation of Itai Tech Ltd is part of a broader crackdown: Ofcom is probing 76 additional websites and apps suspected of similar breaches related to AI-generated adult content. This signals heightened regulatory scrutiny of the adult content industry's use of AI deepfake technologies and underscores the importance of effective child protection mechanisms online.

Global Implications and Comparisons
Similar regulatory pressures are emerging worldwide. Australia’s eSafety Commissioner has threatened fines approaching A$50 million against comparable deepfake services facilitating the creation of explicit AI-generated images involving minors. These international efforts highlight the growing recognition of deepfake technology as a risk factor for child exploitation and the need for robust global safeguards.

Emerging Legal Developments and Future Challenges
Lawmakers in multiple jurisdictions are moving to criminalize the non-consensual creation and dissemination of AI-generated intimate images, with some regions proposing custodial sentences to deter offenders. The Itai Tech case underscores the intensifying focus on regulating AI-generated content to uphold safety, privacy, and legal standards in the digital era.

The enforcement sends a clear message: as the legal landscape tightens around online safety and AI ethics globally, companies operating adult and AI-based content platforms must rigorously comply with age verification laws and cooperate fully with regulatory bodies.