From Trump Nevermind babies to deepfakes: DALL-E and the ethics of AI art

Want to see a picture of Jesus Christ laughing at a meme on his phone, Donald Trump as the Nevermind baby, or Karl Marx being laughed at at the Nickelodeon Kids’ Choice Awards?

If you’ve been on Twitter or Instagram over the past few weeks, it’s been hard to miss AI-generated art depicting these kinds of out-of-the-ordinary scenarios.

DALL-E (and DALL-E mini), the creator of these works of art, is a neural network that can take a text prompt and turn it into an image. It was trained on millions of images from the internet along with their accompanying text, and learned to create images of things you would never expect to see combined, such as an avocado armchair.

Text-to-image technology is advancing at a rapid pace. The full DALL-E model can produce strikingly realistic images from the prompts you provide, while the mini version is still clunky enough to have an oddball internet style that makes its output instant meme material. The best examples of this can be found on the r/weirdall subreddit.

But experts say the technology poses ethical challenges.

Prof Toby Walsh, an AI researcher and the author of a book on the ethics of AI, says the kind of technology that powers DALL-E makes it easier to create fake images.

“We already see deepfakes being used, and this technology will make it possible for bad actors to synthesise still images, and eventually video, [more easily],” he says.

DALL-E has content policies in place that prohibit bullying, harassment, the creation of sexual or political content, and the making of images of people without their consent. And while OpenAI has limited the number of people who can sign up for DALL-E, the lower-quality replica, DALL-E mini, is open access, meaning people can produce anything they want.

“It’s going to be very difficult to make sure people don’t use them to create images that people find offensive,” Walsh says.

Dr Oliver Bown, a researcher in computational creativity at the University of New South Wales, says the nature of the neural networks behind AI makes it difficult to stop DALL-E from creating offensive images in the first place, but it is possible to stop such images from being shown to or shared by the person who requested them.

“Obviously, you could just have a filter at the end that tries to filter out things that are bad.”

Walsh says that in addition to the regulatory framework and company policies surrounding the use of the technology, the public also needs to be educated to be more critical of what they see online.


“When I take [an image] from the BBC website or the Guardian website, I hope they’ve done their homework, and I can be a little more confident than if I took it off Twitter. [In that case] I ask questions about [whether it is] a bit of fake content or not.”

The other big ethical issue Walsh sees coming is the potential for text-to-image AI to replace jobs in graphic design.

“You can imagine that more of us will be able to do graphic design, because we can say ‘paint me a picture’ to whatever spec we want, and we’ll get that picture. Whereas previously a graphic designer would have made that picture,” he says.

“Graphic design will not go away. It may even lead to more graphic design, because we all have access to these tools, but graphic designers themselves may have less work.”

But Bown says this new technology also enables “rapid creativity”: the thought that goes into crafting an image prompt becomes a creative act in itself.

“This new challenge is for creative people to think about what they want to put into a system like this,” he says.

The clunky look of DALL-E mini’s image generations is also becoming an internet art form of its own, Bown says.

“I can imagine this would be huge for something like Instagram or just messaging your friends directly if you’re trying to send memes.

“There will be all kinds of crazy subcultures of image generation. So if it produces this kind of blurry, slightly distorted image with people’s arms in the wrong places, that’s OK; we’re just getting used to that aesthetic.”
