An AI’s visions of the apocalypse: DALL-E is asked to ‘think’ what the last SELFIE ever taken on Earth would look like and proceeds to produce nightmarish images
- DALL-E AI, developed by OpenAI, is a new system that can produce full images when fed natural language descriptions
- A TikTok user asked the AI to show what it thinks the last selfies ever taken will look like
- It produced chilling scenes of bombs dropping and catastrophic weather, along with cities burning and even zombies
- Each image shows a person holding a phone in front of their face, with the world coming to an end behind them
Humans snapping photos of themselves with melting skin, blood-smeared faces and mutated bodies while standing in front of a burning world: that is what the DALL-E AI believes the last selfies taken at the end of times will look like.
DALL-E AI, developed by OpenAI, is a new system that can produce full images when fed natural language descriptions and TikToker Robot Overlords simply asked it to ‘show the last selfie ever taken.’
The nightmarish results each show a human holding a phone and behind them are scenes of bombs dropping, colossal tornados and cities on fire, along with zombies standing in the middle of the destruction.
One of the selfies is an animated image of a man wearing what looks like riot gear. He slowly moves his head around with a look that suggests his life is flashing before his eyes while bombs fall from the sky around him.
Each of the videos has been viewed hundreds of thousands of times, with users commenting on how horrifying each selfie is – one user said the images are so chilling they are going to keep them up at night.
A TikTok user asked DALL-E to generate images of what it thinks the last selfies ever taken will be. One of the images shows a man in riot gear watching in horror as bombs drop behind him
Other users joked about taking a selfie at the end of times, with one commenting: ‘But first, lemme take a selfie (if no one gets this reference I’m gonna cry).’
TikTok user Nessa shared: ‘and my boss would still ask if I’m coming into work.’
However, not everyone felt light-hearted about what the end of time would look like.
A user named Victeur shared: ‘Imagine hiding in the dark for the war, not having seen your face in years and seeing this when you take a last picture of yourself.’
DALL-E is a system that can create pictures when a person types in specific descriptions. This result shows a horrified man who appears to be running away from the devastation behind him
The last selfies also show people with blood smeared on their faces and cities on fire
Most of the commenters see the fun side of the images, but a darker side of DALL-E has also been uncovered: its racial and gender bias.
The system is public and when OpenAI launched the second version of the AI it encouraged people to enter descriptions so the AI can improve on generating images over time, NBC News reports.
However, people started to notice that the images were biased. For example, if a user typed in ‘CEO’, DALL-E would only produce images of white males, and for ‘flight attendant,’ only images of women.
OpenAI announced last week that it was launching new mitigation techniques to help DALL-E create more diverse images, claiming the update makes users 12 times more likely to see images of more diverse people.
The nightmarish images, which show zombies standing in front of burning cities, were created by the DALL-E AI
Some of the disturbing selfies also show what looks like a zombie with missing eyes and skin
The images are so chilling, some TikTok users said they will now have nightmares after seeing them
The original version of DALL-E – named after the Spanish surrealist artist Salvador Dali and the Pixar robot WALL-E – was released in January 2021 as a limited test of ways AI could be used to represent concepts, from boring descriptions to flights of fancy.
Some of the early artwork created by the AI included a mannequin in a flannel shirt, an illustration of a radish walking a dog, and a baby penguin emoji.
Examples of phrases used in the second release – to produce realistic images – include ‘an astronaut riding a horse in a photorealistic style’.
On the DALL-E 2 website, this can be customized to produce images ‘on the fly’, including replacing ‘astronaut’ with ‘teddy bear’, ‘riding a horse’ with ‘playing basketball’, or rendering the scene as a pencil drawing or an Andy Warhol-style ‘pop-art’ painting.
‘DALL·E 2 has learned the relationship between images and the text used to describe them,’ OpenAI explained.
‘It uses a process called ‘diffusion,’ which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.’
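As a rough illustration of that idea – and not of how DALL-E itself is implemented – the toy loop below starts from a pattern of random numbers and gradually alters it toward a fixed target ‘image’. In a real diffusion model the nudge at each step comes from a trained neural network; here the target is known in advance, so this only mimics the shape of the process.

```python
import numpy as np

# Toy sketch of the 'diffusion' idea: begin with random noise and
# gradually alter that pattern toward an image. In a real diffusion
# model the update comes from a learned denoising network; here the
# target is known in advance, purely for illustration.
rng = np.random.default_rng(0)
target = np.array([0.0, 1.0, 1.0, 0.0])  # stand-in for a tiny 'image'
x = rng.normal(size=4)                   # pattern of random dots

for step in range(50):
    x = x + 0.2 * (target - x)           # small step toward the image

# After enough steps, the random pattern has converged on the target.
```

Each pass shrinks the remaining noise by a constant factor, so after 50 steps the starting pattern is indistinguishable from the target.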
HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.
The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge.
A newer breed of ANN, the generative adversarial network (GAN), pits the wits of two AI bots against each other, which allows them to learn from each other.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
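A minimal one-dimensional sketch of that adversarial idea is below; all names and numbers are illustrative, not from the article. A ‘generator’ learns to output values resembling real data (clustered around 2.0) by trying to fool a logistic ‘discriminator’, which is simultaneously trained to tell real samples from fakes.

```python
import numpy as np

# Minimal 1-D sketch of two models pitted against each other (a GAN).
# The generator outputs theta + noise; the discriminator
# D(x) = sigmoid(a*x + b) scores how 'real' a sample looks.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mu_real = 2.0         # real data live around 2.0
theta = -3.0          # generator starts far from the real data
a, b = 0.0, 0.0       # discriminator parameters
lr = 0.05             # learning rate for both players

for step in range(2000):
    real = mu_real + 0.3 * rng.normal()
    fake = theta + 0.3 * rng.normal()

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: move theta in the direction that raises D(fake),
    # i.e. towards whatever the discriminator currently calls 'real'.
    d_fake = sigmoid(a * fake + b)
    theta += lr * (1 - d_fake) * a

# theta has drifted from -3 towards the real data around 2.
```

Because each player's improvement gives the other a harder problem, the generator is pushed toward the real data without ever being shown it directly – the refinement effect described above.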