If you assume Apple and Google are merely slow to detect malicious apps on their platforms, new research suggests the problem runs deeper. The Tech Transparency Project (TTP) has found that the App Store and Google Play don’t just host Nudify apps; their search and advertising systems actively steer users toward them.
Nudify apps are AI tools that digitally strip clothing from photos of real people. Some can also generate pornographic videos or sexually explicit chatbots based on a person’s likeness. The scary part? 31 of the apps TTP found were rated as suitable for minors.
How Apple and Google redirect users directly to these Nudify apps
TTP searched both app stores using terms such as “nudify,” “undress,” “deepfake,” and “AI NSFW.” About 40% of the top 10 results for each term were apps that depict women nude or scantily clad. And it doesn’t stop at search results: paid ads for Nudify apps appeared alongside them on both platforms. In Google’s case, this included a carousel of sponsored apps, some of them overtly pornographic.
Autocomplete made things worse. When TTP typed “AI NS” into the App Store search box, it suggested “image to video ai nsfw,” which surfaced more such apps in the top results. Apple controls all advertising in its App Store and has a stated policy against ads promoting adult content. Despite this, three of TTP’s App Store searches still returned a Nudify ad as the first result.
Why this is a bigger problem than you think
The apps identified in both stores have been downloaded 483 million times and generated over $122 million in total revenue. Apple and Google take a cut of paid subscriptions and in-app purchases, which TTP says may explain why enforcement has been lax.
After TTP and Bloomberg reported on these apps, Apple removed 15 of them and Google blocked several others. However, neither company would explain how the apps passed app review or why their age ratings allowed minors to download them.
How long before Apple and Google are forced to act?
The UK government has begun proposing and enacting laws against explicit deepfakes, and the US recently recorded its first criminal conviction under such a law. The pressure on Apple and Google to act more decisively will likely only increase.
Apple’s own enforcement record is already under scrutiny. A letter obtained by NBC News revealed that in January, Apple privately threatened to remove Grok from the App Store over sexualized deepfakes, dismissing xAI’s initial fix as inadequate. Apple ultimately let Grok stay, but with reports like this piling up, neither company can afford to look the other way.
