
BY THE OPTIMIST DAILY EDITORIAL TEAM

The UK is moving to outlaw AI-powered “nudification” apps in a sweeping effort to tackle a new wave of online abuse targeted particularly at women and girls. Announced just before the end of 2025, the upcoming legislation will make it a criminal offense to create or distribute apps that use artificial intelligence to digitally remove clothing from images without consent.

These tools, often promoted under the guise of entertainment or personalization, have rapidly carved out a dark corner of the internet, enabling users to generate disturbingly realistic, explicit deepfakes of unsuspecting individuals, including minors.

From harmful gimmick to criminal offense

The new ban builds on the Online Safety Act, which already criminalizes the creation of non-consensual sexually explicit deepfakes. Until now, however, the developers and distributors of so-called nudifying or de-clothing apps have operated in a legal gray area.

“Women and girls deserve to be safe online as well as offline,” said Technology Secretary Liz Kendall, who introduced the measure as part of a broader strategy to halve violence against women and girls.

“We will not stand by while technology is weaponised to abuse, humiliate, and exploit them,” she added. The government’s position on these apps is now clear: “Those who profit from them or enable their use will feel the full force of the law.”

What are nudification apps?

These apps use generative AI to strip clothes from photos of real people, often without their knowledge, and recreate the images as photorealistic nudes. The harm is both psychological and reputational, and the impact can be especially devastating when the images are circulated without context or control.

While some may dismiss the technology as a fringe novelty, experts and advocacy groups warn that it has already been weaponized, including in cases where manipulated images of minors were distributed as child sexual abuse material (CSAM).

Back in April 2025, Children’s Commissioner for England Dame Rachel de Souza called for an outright ban on these tools, arguing, “The act of making such an image is rightly illegal; the technology enabling it should also be.”

The growing threat of AI image abuse

According to the Internet Watch Foundation (IWF), almost one in five young people who used its Report Remove service in 2025 said their explicit images had been digitally manipulated.

Kerry Smith, the IWF’s Chief Executive, applauded the upcoming ban, saying: “We are also glad to see concrete steps to ban these so-called nudification apps which have no reason to exist as a product.”

She emphasized the broader risk these tools pose: “Apps like this put real children at even greater risk of harm, and we see the imagery produced being harvested in some of the darkest corners of the internet.”

Tech partnerships and digital safeguards

The government also announced plans to partner with technology firms to develop stronger safeguards against intimate image abuse. One of the key collaborators is SafeToNet, a UK-based company developing AI-powered tools to detect and block the creation of explicit content in real time.

The company’s software can even disable a device’s camera if it detects that sexual content is about to be captured. This builds on existing protections used by platforms like Meta to prevent minors from taking or sharing intimate images of themselves. However, some organizations believe the legislation doesn’t go far enough.

Advocates push for stronger, default protections

While welcoming the announcement, Dr. Maria Neophytou, Director of Strategy at children’s charity NSPCC, expressed disappointment that the government didn’t go further.

“We’re disappointed not to see the same ambition when it comes to introducing mandatory device-level protections,” she said, referring to tools that could automatically block CSAM creation or distribution at the hardware level.

The NSPCC, along with other child protection organizations, is urging the government to require tech companies to implement stronger detection and prevention systems, particularly in private messaging environments where content moderation is limited.

Looking ahead: banning AI tools that generate CSAM

The government says this is just the beginning. Alongside the nudification app ban, officials are pursuing legislation to outlaw AI tools designed to create or distribute child sexual abuse material, and to make it “impossible” for children to take, view, or share nude images on their phones.

These proposed changes reflect an evolving understanding of how generative AI is accelerating the spread of digital exploitation, and how legislation must adapt quickly to keep up.

As AI continues to evolve, so too will the laws that govern its use, especially when it comes to protecting vulnerable people from high-tech forms of harassment and abuse.

 
