Life was much simpler when our social feeds just contained cat photos and memes. Yet over the past year, your feed might have looked very different. You might have come across photos that seem too far out to be true, and you might even have believed they were real if no one told you otherwise.
With Stable Diffusion, DALL-E, and Midjourney getting better at generating images, it's becoming harder to tell whether the images you see online are real or fake.
In this article, we'll share eight AI-generated images that went viral and explore the implications.
The Pope in Balenciaga Drip
We doubt many had "image of the Pope wearing a puffer jacket" on their bingo cards this year. This AI photo of the Pope in a Balenciaga puffer jacket is perhaps the most viral AI image of the year so far.
It’s been viewed millions of times and liked hundreds of thousands of times.
The creator shared that he was just experimenting with Midjourney when he decided to make surreal art. “I just thought it was funny to see the Pope in a funny jacket.”
What makes it so impressive is how convincing the image is. It lacks the obvious markers of an AI photo generator, such as weird-looking hands or faces.
The image's hyperreal aesthetic fooled many. Despite subtler telltale signs of AI creation, such as smeared details, the image of the Pope in swaggy style spread like wildfire across social media. It even had Chrissy Teigen fooled, who tweeted, "I thought the pope's puffer jacket was real and didnt give it a second thought. no way am I surviving the future of technology".
It was one of the earliest cases of an AI image spreading misinformation at scale, highlighting both the power and the potential dangers of this technology.
Trump Being Arrested
Earlier this year, former President Donald Trump was awaiting a decision on whether he would be indicted. As the world waited on the outcome, someone created images imagining what his arrest might look like.
Eliot Higgins, founder of the Bellingcat journalist platform, made huge waves when he tweeted on March 21, “Making pictures of Trump getting arrested while waiting for Trump’s arrest.”
He shared a series of images of Trump being dragged away by several police officers. The AI-generated content was made with Midjourney and has been viewed 6.7 million times.
Higgins' intention was simply to show how powerful AI image creation tools have become. We think he proved his point.
Paris Covered in Trash
At first glance, a series of striking images seemed to portray Paris, the renowned City of Light, as a metropolis overwhelmed by mountains of trash. The context was believable: garbage disposal workers had indeed gone on strike earlier in the year, leading to dramatic scenes of refuse on the streets and even in front of politicians’ homes. However, a deeper look revealed a twist in the narrative.
Released in August, these images, which showed the Eiffel Tower and the Louvre Museum encircled by waste, weren’t snapshots of a strike’s aftermath—they were generated by AI. The images originated from a TikTok video, which has been viewed over 450,000 times.
Fact-checking site AFP traced the images’ origins to the Facebook page of an artist known as Nik Art, who confirmed the images were made with Midjourney.
AI Wins Prestigious Sony Award
Judges were taken aback by Boris Eldagsen’s submission, titled “Pseudomnesia: The Electrician”. The image was so good that it won the Sony World Photography Award this year.
Except it didn't keep the title: Eldagsen turned down the award after revealing the winning piece was generated by AI. In subsequent interviews, he shared that he wanted to initiate a dialogue about the role of AI in the future of visual arts.
Eldagsen’s image, a haunting black-and-white portrayal, achieved viral status, demonstrating AI’s ability to mimic reality convincingly. This incident has amplified discussions about AI’s place in art, with many calling for clear differentiation between AI-generated and human-crafted works in competitions.
Pentagon in (Fake) Flames
Another shocking entry to this list perhaps has had the greatest impact in terms of bringing AI image creation to the public eye. In May this year, an image of an explosion near the Pentagon began making its rounds on Twitter.
While the creator never revealed themselves (for obvious reasons), the image was quickly shown to be fake. Even so, it stirred panic and confusion, even causing ripples in the stock market.
It’s a stark reminder of the potential misuse of AI technology for creating convincing fakes, which, combined with the viral nature of platforms like Twitter, can spread faster than they can be fact-checked.
Entire Instagram Account of Fake People
Source: Ars Technica
Imagine the guilt of deceiving all your followers weighing so heavily on you that you felt compelled to come clean. In January 2023, Jos Avery told Ars Technica that the images he posted on Instagram were "AI-generated, human-finished".
Now, this wouldn't normally be an issue, but Avery's growing base of followers was amazed by his apparent technical skill and mastery of his craft. The problem was that he couldn't bring himself to confess that AI did much of the heavy lifting in his artistic process.
If he hadn't revealed his behind-the-scenes process, would we ever have known the truth?
Musk and Barra Hand in Hand
Seeing Elon Musk holding hands with his General Motors competitor, Mary Barra, certainly raised eyebrows when the images were first released.
This viral image, crafted by Brian Lovett with Midjourney, showed the two rival CEOs walking hand in hand as if they were dating. The sheer absurdity of the situation made it obvious these were AI images.
Musk took it in good humor, responding, “I would never wear that outfit”.
AI Model With More Followers Than You and Me
Influencer accounts are now as common as selfies, so the intro to Milla Sofia's Instagram bio, "24-year-old virtual girl living in Helsinki," is nothing out of the ordinary.
"I'm an AI creation" is where it gets interesting.
Sofia is an AI-generated model. All of her pictures across her social media platforms are also made with AI, in just the formats you'd expect of an influencer.
She first went viral when she shared a TikTok clip dressed in summer clothes against a backdrop of a field of flowers. Other TikTok users heaped on the compliments, despite her public disclosure that she's an AI model.
Sofia is an example of how AI-generated images, and people, are becoming more widely accepted in today's society. In fact, some AI models have even gone on to secure sponsorships from brands.
How to Distinguish Between AI-Generated Images and Real Images
Distinguishing AI-generated images from real ones used to be fairly simple: you'd just look for telltale signs like blurry details or unusual renderings of hands and feet.
But AI is getting smarter, and these clues are becoming less obvious. To spot AI-generated images, you'll need to dig a bit deeper. You can do so by applying the SIFT method:
- Stop: Take a moment to pause and critically evaluate the image before accepting it as real.
- Investigate: Dig deeper into the image by examining its details, searching for any inconsistencies or unusual elements.
- Find better coverage: Look for additional sources or related information that can provide more context about the image’s origin and authenticity.
- Trace the original context: Trace the image back to its original source to verify its legitimacy and prevent the spread of misinformation.
This helps sort the real from the fake and keeps you clear of fake news.
How Major Tech Firms are Stopping Misinformation
While you can stay vigilant, it's becoming harder to tell whether an image was made by generative AI tools. Anyone can create images in easily accessible apps like Midjourney or DALL-E, the latter via a ChatGPT Plus subscription.
The fake Pentagon image caused a brief panic while the internet wondered whether it was real. Thankfully, it was quickly debunked. But it raises the question of whether the large tech players can do anything to make real and fake images easier to tell apart.
One suggestion has been to embed invisible footprints or watermarks. Google, Adobe, and Microsoft have all endorsed AI content labels: Google plans to add disclosures to AI-generated images, while Adobe's Content Credentials track AI edits, acting like a digital nutrition label.
These tools aren't foolproof, and manipulation is still possible. But despite the challenges, such labels could be key to discerning truth in our increasingly complex digital world.
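To make the idea of an invisible watermark concrete, here's a deliberately simplified Python sketch. This is a toy illustration, not how Google's, Adobe's, or any vendor's actual system works: it hides a short label in the least significant bits of pixel values, where it's imperceptible to a viewer (each pixel changes by at most 1) but recoverable by software.

```python
# Toy invisible watermark: hide an ASCII label in the low bit of each
# pixel value. Purely illustrative -- real schemes are far more robust.

def embed_label(pixels, label):
    """Return a copy of `pixels` with `label` hidden in the low bits."""
    bits = [(byte >> i) & 1 for byte in label.encode("ascii") for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_label(pixels, length):
    """Recover a hidden label of `length` ASCII characters."""
    chars = []
    for c in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[c * 8 + i] & 1) << i
        chars.append(chr(byte))
    return "".join(chars)

pixels = [128] * 256                 # stand-in for grayscale pixel data
marked = embed_label(pixels, "AI-GEN")
print(extract_label(marked, 6))      # → AI-GEN
```

The catch, and the reason such tools aren't foolproof, is visible even in this toy: re-saving the image with lossy compression, resizing it, or screenshotting it destroys the low-bit pattern, which is why production watermarks spread their signal across the whole image instead.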
As we’ve seen, AI’s ability to craft images has soared to new heights, cleverly weaving pixels into pictures that can fool even the sharpest of eyes.
These eight viral sensations are just a glimpse into a future where the line between an artist’s stroke and an algorithm’s code becomes indistinguishable. It’s a reminder that, while we can appreciate the wonders of such technology, we must also tread carefully, ensuring we keep one foot grounded in reality as we step into the increasingly complex world AI is painting for us.