Tech Giants Take Action: 55 AI Nudify Apps Pulled Amid Rising User Safety Risks

Apple and Google removed dozens of AI "nudify" apps from their stores after a Tech Transparency Project (TTP) investigation found they generated unauthorized explicit images, raising serious privacy concerns.

NeboAI produces automated editions of journalistic texts in the form of AI-generated summaries and analyses. As AI-generated editions, these texts may occasionally contain errors, omissions, incorrect data relationships, and other inaccuracies; we recommend verifying the content.

In a significant move to protect user privacy, Apple and Google have removed a total of 75 apps from their platforms that exploited artificial intelligence to create unauthorized nude images. The Tech Transparency Project (TTP) identified 55 of these apps on Google Play and 47 in the App Store, with some listed in both stores, revealing a concerning pattern of AI misuse.

Following TTP's notification, Apple removed 28 of the offending apps and warned developers that similar violations could lead to further removals. A Google spokesperson confirmed that several apps had been suspended and that the company is still reviewing the remaining findings. Notably, some apps were reinstated after being brought into compliance with store guidelines.

TTP researchers criticized both tech giants for allowing such harmful applications to remain available, noting that they can convert innocent images into explicit content without consent. Of the problematic apps, 14 reportedly originate from China, raising additional data-privacy alarms: Chinese law grants the government access to company data, potentially putting users' sensitive images at risk.

As the debate over the ethical use of AI continues, past incidents, including backlash against the chatbot Grok for offering similar functionality, point to increasing scrutiny of and legal action against deepfake pornography. In one related effort, San Francisco City Attorney David Chiu filed a lawsuit in August 2024 against the owners of 16 such websites.
