We tend to think of the Apple App Store and Google Play Store as digital “walled gardens” – safe, curated spaces where dangerous or objectionable content is filtered out long before it lands on our screens. But a grim new analysis from the Tech Transparency Project (TTP) suggests there are serious cracks in the walls. The report reveals a disturbing reality: both app stores are currently riddled with dozens of AI-powered “nudify” apps. These are not obscure tools hidden on the dark web; they sit in plain sight and allow anyone to take an innocent photo of a person and digitally remove their clothing without their consent.
Earlier this year, the conversation around this technology intensified when Elon Musk’s Grok AI was caught posting similar sexualized images on X. A simple search for terms like “undress” or “nudify” in the app stores returns a long list of software designed specifically for creating non-consensual deepfake pornography.
The scale of this industry is frankly breathtaking
We’re not talking about a few rogue developers slipping through the cracks. According to the data, these apps have been downloaded over 700 million times in total and have generated an estimated $117 million in revenue. And here’s the uncomfortable truth: since Apple and Google typically charge a commission on in-app purchases and subscriptions, they effectively profit from the creation of non-consensual sexual images. Every time someone pays to “undress” a photo of a classmate, a colleague, or a stranger, the tech giants get their cut.
The human cost of this technology cannot be overstated. These tools turn ordinary photos into weapons. A selfie from Instagram or a picture from a yearbook can be turned into explicit material designed to harass, humiliate, or blackmail victims. Advocacy groups have been sounding the alarm for years, warning that “AI nudification” is a form of sexual violence that disproportionately victimizes women and, shockingly, minors.
So why are they still there?
Both Apple and Google have strict policies on paper that prohibit pornographic and exploitative content. The problem is enforcement. It has become a digital game of Whac-A-Mole: when a high-profile report is published, the companies may ban a handful of specific apps, but the developers often just tweak the logo, change the name slightly, and re-upload the exact same code a week later. The automated review systems seem completely unable to keep up with the rapid development of generative AI.
For parents and everyday users, this is a wake-up call. We can no longer assume that an app is safe or ethical just because it is available in an “official” store. As AI tools become more powerful and more accessible, the protections we have relied on in the past are failing. Until regulators step in—or until Apple and Google decide to prioritize security over commission fees—our digital images will remain uncomfortably vulnerable.




