Social media users have embraced ChatGPT's Studio Ghibli image generation feature, launched on March 26, with millions transforming their photos into Ghibli-style images. The surge in popularity was such that on March 30, at around 4 PM, ChatGPT's servers crashed under the load. Cybersecurity experts, however, are raising alarms about the risks of the Ghibli trend, warning that OpenAI's AI art generator could put users' personal photos in jeopardy.
Many experts have taken to social media to voice security concerns, suggesting that the trend gives OpenAI access to a vast number of people's personal images, which could be used to further train its AI models.
As a result, numerous users are inadvertently handing their personal photos and unique facial data to OpenAI, which poses significant risks in the long run. Critics argue that because the photographs are submitted voluntarily, OpenAI's data collection can sidestep AI copyright disputes, leaving the company able to use the images with few legal limitations.
Critics also contend that the trend lets OpenAI work around GDPR protections, giving the company leeway to use people's facial data without their full consent. Many cybersecurity advocates are therefore urging users to steer clear of the trend and refrain from uploading personal images.
What are the potential risks?
- Privacy Violations: Users' photos may be stored and reused for purposes they never agreed to.
- Identity Theft: Personal information could be misappropriated by malicious actors.
- Data Security: Uploaded images and personal information could end up in the hands of hackers in the event of a breach.
- Creation of Fake Profiles: Photos could be misused to create deceptive online identities.
- Legal Issues: Users may face legal complications if their photos are used inappropriately.