Last week, a 31-year-old construction worker took a few psychedelics and thought it might be fun to use AI image generator Midjourney to create a photorealistic image of Pope Francis wearing a big white Balenciaga-style puffer jacket. A lot of people who saw it thought it was fun, too, so they spread it around social media. Most of them probably had no idea that it wasn’t real.
Now, the Pope having that drip isn’t the worst or most dangerous deployment of photorealistic AI-generated art, in which new images are created from text prompts. But it is an example of just how good this technology is becoming, to the point that it can trick even people who are usually more discerning about spreading misinformation online. You might even call it a turning point in the war against mis- and disinformation, a war that, frankly, the people fighting it were already losing simply because social media exists. Now we have to deal with the prospect that even the people fighting that war may inadvertently help spread the disinformation they’re trying to combat. And then what?
It’s not just Coat Pope. In the last two weeks, we’ve seen several ominous AI-image stories. We had Trump’s fake arrest and attempted escape from the long AI-generated arm of the law, which was capped by a set of poorly rendered fingers. We had Levi’s announcing it would “supplement” its human models with AI-generated ones in the name of diversity (hiring more diverse human models was apparently not an option). Microsoft unleashed its Bing Image Creator in its new AI-powered Bing and Edge browser, and Midjourney, known for its photorealistic images, released its latest version.
Finally, there’s the news that AI image generators are getting better at drawing hands, which had been one of the telltale signs that an image is fake. As convincing as Coat Pope appeared, a close look at his right hand would have revealed its AI origins. But soon, we may not even have that. Levi’s will be able to use AI models to show off its gloves, while the rest of us might be thrown into a new world where we have absolutely no idea what we can trust, one that’s even worse than the world we currently inhabit.
“We’ve had this issue with text and misinformation on social platforms. People are conditioned to be skeptical with text,” said Ari Lightman, a professor of digital media and marketing at Carnegie Mellon University. “An image ... adds some legitimacy in the user’s mind. An image or video creates more resonance. I don’t think our blinders are up yet.”
In just a few short years, AI-generated images have come a long way. In a more innocent time (2015), Google released “DeepDream,” which used Google’s artificial neural network programs (that is, artificial intelligence trained to learn in a way that mimics a human brain’s neural networks) to recognize patterns in images and make new images from them. You’d feed it an image, and it would spit back something that resembled it but with a bunch of new imagery woven in, often things approximating eyeballs, fish, and dogs. It wasn’t meant to create images so much as to show, visually, how the artificial neural networks detected patterns. The results looked like a cross between a Magic Eye drawing and my junior year of college: not particularly useful in practice, but pretty cool (or creepy) to look at.
These programs got better and better, training on billions of images that were usually scraped from the internet without their original creators’ knowledge or permission. In 2021, OpenAI released DALL-E, which could make photorealistic images from text prompts. It was a “breakthrough,” says Yilun Du, a PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory who studies generative models. Soon, not only was photorealistic AI-generated art shockingly good, but it was also very much available. OpenAI’s DALL-E 2, Stability AI’s Stable Diffusion, and Midjourney were all released to the general public in the second half of 2022.
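The “diffusion” in names like Stable Diffusion refers to how these models work: they start from pure random noise and iteratively remove it until an image emerges. A toy sketch of that loop in plain Python, with the caveat that this is only an illustration of the idea; real systems use a neural network, trained on billions of image-text pairs, to predict what noise to subtract at each step, whereas the fixed `target` list here is a stand-in for that learned guidance:

```python
import random

# Toy illustration of iterative denoising, the core idea behind
# diffusion models like DALL-E 2 and Stable Diffusion. NOT a real
# model: the fixed target below stands in for a trained network.
random.seed(0)
target = [i / 15 for i in range(16)]         # stand-in for the "image" a prompt describes
x = [random.gauss(0, 1) for _ in range(16)]  # start from pure random noise

for step in range(50):
    # Each step strips away a little of the remaining noise, nudging
    # the values toward the target; a real model predicts this itself.
    x = [xi + 0.2 * (ti - xi) for xi, ti in zip(x, target)]

max_error = max(abs(xi - ti) for xi, ti in zip(x, target))
print(f"max deviation from target after 50 steps: {max_error:.6f}")
```

After 50 passes, what began as random noise is nearly indistinguishable from the target. The real trick, and the reason these models took years to build, is learning the denoising step itself.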
The expected ethical concerns followed: copyright issues, allegations of racist or sexist bias, the possibility that these programs could put a lot of artists out of work, and, most recently, convincing deepfakes used to spread disinformation. And while the images are very good, they still aren’t perfect. But given how quickly this technology has advanced so far, it’s safe to assume that we’ll soon hit a point where AI-generated images and real images are nearly impossible to tell apart.
Take Nick St. Pierre’s work, for example. St. Pierre, a New York-based 30-year-old who works in product design, has spent the last few months showing off his super-realistic AI art creations and explaining how he got them. He may not have the artistic skills to compose these images on his own, but he has developed a knack for getting them out of Midjourney, which he says he uses because he thinks it’s the best one out there. St. Pierre says he dedicated the month of January to 12-hour days of working in Midjourney. Now he can create one of these images in just about two hours.
“When you see a digital image on the internet and it’s AI generated, it can be cool, but it doesn’t, like, shock you,” St. Pierre said. “But when you see an image that’s so realistic and you’re like, ‘wow, this is a beautiful image’ and then you realize it’s AI? It makes you question your entire reality.”
But St. Pierre doesn’t usually put real people in his work (his rendering of Brad Pitt and John Oliver as female Gucci models from the ‘90s is an exception, though few people would look at either and think they were actually Brad Pitt or John Oliver). He also thinks social media companies will continue to develop better tools to detect and moderate problematic content like AI-generated deepfakes.
“I’m not as concerned about it as a lot of people are,” he said. “But I do see the obvious dangers, especially in the Facebook world.”
Du, from MIT, thinks we’re at least a few years away from AI being able to produce images and videos that flood our world with fake information. It’s worth noting that, as realistic as St. Pierre’s images are, they’re also the end product of hours and hours of training. Coat Pope was made by someone who said he’d been playing around with Midjourney since last November. So these aren’t yet images that anyone can just spin up with no prior experience. Lightman, from Carnegie Mellon, says the question now is whether we’ll be ready for that possibility.
Of course, a lot of this depends on the companies that make these programs, the platforms that host them, and the people who create the images all acting responsibly and doing everything possible to prevent this from happening.
There are plenty of signs that they won’t. Bing Image Creator won’t generate an image of a real person, but Midjourney (the source of both Coat Pope and Fugitive Trump) clearly will; it has since banned the creators of both images from the platform but did not respond to a request for comment. Each service has its own rules for what is or isn’t allowed. Sometimes, there aren’t any rules at all: Stable Diffusion is open source, so anyone, with any motives, can build their own thing on top of it.
Social media platforms have struggled for years to figure out what to do about the disinformation campaigns that run wild through them, or if and how they should curb the spread of misinformation. They don’t seem very well-equipped to deal with deepfakes either. Expecting all of humanity to do the right thing and not try to trick people or use AI images for malicious purposes is impossibly naive.
Many leaders of the AI movement did sign a letter from an effective altruism-linked nonprofit urging a six-month moratorium on developing more advanced AI models. That’s better than nothing, but it isn’t legally binding, nor has everyone in the industry signed it.
This all assumes that most people care a lot about not being duped by deepfakes or other lies on the internet. If the past several years have taught us anything, it’s that, while a lot of people think fake news is a real issue, they often don’t care or don’t know how to check that what they’re consuming is real — especially when that information conforms to their beliefs. And there are people who are happy enough to take what they see at face value because they don’t have the time or perhaps the knowledge to question everything. As long as it comes from a trusted source, they will assume it’s true. Which is why it’s important that those trusted sources are able to do the work of vetting the information they distribute.
But there are also people who do care, and who see the potential damage posed by deepfakes that are indistinguishable from reality. The race is on to come up with some kind of solution to this problem before AI-generated images get good enough for it to be one. We don’t yet know who will win, but we have a pretty good idea of what we stand to lose.
Until then, if you see an image of Pope Francis strolling around Rome in Gucci jeans on Twitter, you might want to think twice before you hit retweet.
A version of this story was first published in the Vox technology newsletter. Sign up here so you don’t miss the next one!