Right after the Ghibli-style AI image trend began to wear off, ChatGPT and similar tools found a new way to encourage people to upload their selfies to their systems—this time, to create an action figure version of themselves.
The drill is always the same: a photo and a few prompts are enough for the AI image generator to turn you into a packaged Barbie-style doll, complete with accessories linked to your job or interests. The last step? Sharing the results on social media, of course.
I must admit that the more I scroll through feeds filled with AI doll pictures, the more concerned I become. It's not just that this is yet another trend misusing the power of AI: millions of people have willingly shared their faces and sensitive information simply to jump on the umpteenth social bandwagon, most likely without thinking about the privacy and security risks that come with it.
A privacy deception
Let’s start with the obvious – privacy.
Both the AI doll and Studio Ghibli AI trends have pushed more people to feed the databases of OpenAI, Grok, and similar tools with their pictures. Many of them had perhaps never used LLM software before. I certainly saw too many families uploading their kids' faces to get the latest viral image over the past couple of weeks.
It's true that AI models are known to scrape the web for information and images. So, many have probably thought: how different is this from sharing a selfie on my Instagram page?
There's a catch, though. By voluntarily uploading your photos to AI generator software, you give the provider more ways to legally use that information—or rather, your face.
"🚨 Most people haven't realized that the Ghibli Effect is not only an AI copyright controversy but also OpenAI's PR trick to get access to thousands of new personal images; here's how: To get their own Ghibli (or Sesame Street) version, thousands of people are now voluntarily…" pic.twitter.com/zBktscNOSh (March 29, 2025)
As Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, explained when the Ghibli trend exploded: by voluntarily sharing your information, you give OpenAI consent to process it, de facto bypassing the GDPR's "legitimate interest" safeguards.
Put simply, in what Jarovsky described as a "clever privacy trick," LLM providers got a surge of fresh new images into their systems to use.
We could argue that it worked so well that they decided to do it again – and raise the bar.
Losing control – and not just of your face
To create your personal action doll, your face isn't enough; you also need to share some information about yourself to generate the full package. The more details you provide, the closer the resemblance to the real you.
So, just like that, people aren't only giving AI companies consent to use their faces, but also handing over a wealth of personal information the software wouldn't be able to collect otherwise.
As Eamonn Maguire, Head of Account Security at Proton (the provider behind one of the best VPN and secure email services on the market), points out, sharing personal information "opens a Pandora's box of issues."
That’s because you lose control over your data and, most importantly, how it will be used. This might be to train LLMs, generate content, personalize ads, or more – it won’t be up to you to decide.
"Check out my new #Barbie AI doll 🤩 🙌🏾 Box includes: ✔️ First ever elected African-Caribbean woman to serve as a UK Government Minister ✔️ First Black woman to sit on the Panel of Chairs ✔️ First MP to use BSL to ask a question ✔️ Chair of London PLP. More upgrades to come 😁" pic.twitter.com/bvnynWwMlB (April 11, 2025)
“The detailed personal and behavioral profiles that tools like ChatGPT can create using this information could influence critical aspects of your life – including insurance coverage, lending terms, surveillance, profiling, intelligence gathering or targeted attacks,” Maguire told me.
The privacy risk linked to how OpenAI, Google, and X will use, or misuse, this data is only one side of the problem. These AI tools could also become a honeypot for hackers.
As a rule of thumb, the greater the amount of data, the higher the possibility of big data breaches – and AI companies aren’t always careful when securing their users’ data.
Commenting on this, Maguire said: “DeepSeek experienced a significant security lapse when their database of user prompts became publicly accessible on the internet. OpenAI similarly had a security challenge when a vulnerability in a third-party library they were using led to the exposure of sensitive user data, including names, email addresses, and credit card information.”
This means that criminals could exploit the faces and personal information people shared to create their action figures for malicious purposes, including political propaganda, identity theft, fraud, and online scams.
Worth the fun?
While it's increasingly difficult to avoid sharing personal information online and stay anonymous, these viral AI trends suggest that most people don't properly consider the privacy and security implications.
Even as the use of encrypted messaging apps like Signal and WhatsApp keeps rising alongside virtual private network (VPN) software, jumping on the latest viral social bandwagon evidently feels more urgent than protecting personal data.
AI companies know this dynamic well and have learned how to use it to their advantage: to attract more users, to collect more images and data, or, better still, all of the above.
It's fair to say that the Ghibli-style and action figure booms are only the start of a fresh new frontier for generative AI and its threat to privacy. I'm sure more of these trends will explode among social media users in the coming months.
As Maguire from Proton points out, the amount of power and data accumulating in the hands of a few AI companies is particularly concerning. “There needs to be a change – before it’s too late,” he said.