Words by Daphne Chouliaraki Milner
Artwork by Marcel McKenzie
Publicly accessible AI art generators like DALL-E 2 and Midjourney have made it easy for anyone to create unique images that reimagine what the future might hold. But the ethics remain murky.
For an artist, depicting the catastrophic scale of the climate crisis comes with a range of representational challenges.
Some artists, like Olafur Eliasson, navigate such difficulties by casting a wide creative net, incorporating architecture, sculpture, light installation, and even natural formations into their work. Others, like art photographer Richard Mosse, rely on an equally vast set of technologies—in his case, multispectral cameras, botanical studies, aerial cameras, and heat-sensitive analogue film—to capture the nuanced effects global heating has on local ecosystems. Then there are artists like LaToya Ruby Frazier, who spent five years photographing the effects of environmental racism on community health and wellbeing in her series Flint Is Family. Meanwhile, artists like Mona Chalabi look to data to illustrate the connections between environmental justice and climate change.
This year, however, the release of publicly accessible AI art-generating tools like DALL-E 2, Midjourney, and Stable Diffusion has made it easier for anyone to create unique visualizations of imagined futures. All they have to do is type a descriptive prompt into a text field. The generators, trained on millions of captioned images scraped from the open web, then produce one-off images in a matching style. As a result, a synthetic image can be assembled, in effect, from a mix of stock imagery, news reportage, and independent artists’ works that have been uploaded to the internet. But when it comes to depicting the climate crisis, how far can AI art generators actually help us imagine the future of the planet?
“The software is producing synthetic images, and to do so, it relies on a preexisting world of representation and image-making,” said Dr. Dylan Mulvin, a professor of Media and Communications at the London School of Economics. “So, just as we can recognize some of the images it has generated as reminiscent of the ways films and comic books portray the apocalypse, the software is also wired to rely on those generic conventions to help us imagine possible climate futures in art. It’s possible that the images are not novel enough; that they’re too familiar. Equally, it’s possible that this same familiarity is what compels us to act.”
In one image, generated on Midjourney using the prompt “sea level rising flooded city because of the climate crisis,” a handful of high-rises stand tall against a dark, overcast sky, their bottom floors flooded by rising tides. In another, generated using the prompt “environmental racism,” a monochrome landscape shows thick, black smoke billowing out of industrial chimneys. On the ground, mud—or, perhaps, oil—mirrors the overcast skies above. Neither scenario is far from the current realities facing communities around the world, particularly in the context of the recent floods in Pakistan, Senegal, and Nigeria. The risk of oil spills also increases as climate change brings more monster storms year upon year.
“Activists have to cut through the noise in some way without access to the established channels of communication. That’s a really interesting way that these tools can be used.”
Beyond offering vivid representations of the future, AI tools like Midjourney and DALL-E 2 also lower the barriers to entry in art by giving anyone with the imagination to conjure insightful prompts—including activists, grassroots organizers, and even children, whose futures will be most impacted by climate change—the opportunity to visualize our possible futures. In the words of Saatchi Art senior vice president Jeanne Anderson: “Younger generations who have grown up surrounded by the technologies and are just now entering the art world will not see this as evolution but as a natural extension of the world they know. [These] new tools applied in thoughtful ways, combined with human curatorship and immersive event design, puts us in a great position to meet the challenges and opportunities of art’s future.”
Because AI-generated art can emulate the styles of celebrated artists and photographers, whose aesthetics have historically shaped people’s artistic sense and taste, these tools also matter to activists looking not only to educate people about the reality of the climate crisis, but to make them care about the future of the planet. In this sense, AI art can be considered the next step in a long history of guerrilla marketing within social and environmental justice movements—Extinction Rebellion’s hourglass logo, distributed on pamphlets across city streets and on major billboards, being a case in point.
“Activists have to cut through the noise in some way without access to the established channels of communication. They’re not necessarily going to make it onto the news. They’re not gonna be able to afford legitimate advertising space,” said Dr. Mulvin. “They need to find another way of reaching people. So, the images that activists rely on—things like logos and posters that become iconic—do so because they find traction in a moment. And if you rely on something familiar enough, like an artist’s style or an existing genre, and you add just enough novelty, people can recognize it but also not dismiss it as something they know. In other words, it’s just recognizable enough, but it’s conveying a new message. That’s a really interesting way that these tools can be used.”
But not everything about AI art is positive. Unlike protest posters, generative art is created from existing image datasets that overwhelmingly fail to account for socioeconomic nuance and systematically exhibit racist, sexist, and stylistic prejudices. A recent example is ImageNet, one of the world’s most widely used image datasets, from which 600,000 offensive images were removed because of their complicity in reproducing racist and sexist power dynamics. Prompted by ImageNet Roulette, an anti-racist art project designed by researcher Kate Crawford and artist Trevor Paglen to expose the problematic ways in which image datasets have been trained to classify faces, ImageNet’s case highlights both the significance of ongoing efforts to mitigate the prejudices baked into AI algorithms, and the fact that those efforts often remain reactive and, more often than not, tokenistic.
“AI isn’t taught using the real world as a reference. Rather, it learns from a mediated representation of the real world—with all the biases that entails.”
“If you were to take all the pictures that you are using from the web to train artificial intelligence, you would almost certainly end up with overrepresentation of Western culture, of white people, of men,” said Dr. Ricardo Baeza-Yates, director of research at the Institute for Experiential AI of Northeastern University. “This means that AI isn’t taught using the real world as a reference. Rather, it learns from a mediated representation of the real world—with all the biases that entails.”
For example, when asked to create images of “rich wealthy white corporate executives who are only interested in profits,” AI image generators rarely depicted white people in the artwork itself. Instead, they created abstracted images of explosions reminiscent of the mushroom clouds generated by nuclear bombs. By contrast, prompts that included the terms “refugees” or “migrants” almost exclusively produced images of people of color, oftentimes in undignified or precarious situations. There are also geographical biases. When asked to generate art showing “sea level rising” and a “flooded city,” Midjourney repeatedly produced art resembling cities in the West.
In a vicious circle of prejudice reinforcement, such search results then continue to feed the biased systems that created the images in the first place. “Let’s say that 80% of all content people search for is famous people or memes—then the algorithms will start feeding you famous people or memes, reinforcing that behavior,” said Dr. Baeza-Yates. “The same will happen with generative art. We will end up with more imagery from the search terms people find interesting, search terms that will contain the biases of the people that type them, and this imagery will then shape the direction of the algorithm, further perpetuating those prejudices. In this way, images that reinforce society’s biases get more popular because they are more available.”
Recognizing the dangers AI poses in reinforcing existing socioeconomic hierarchies is paramount, especially when the climate emergency is already disproportionately impacting people in the Global South. After all, for AI art to positively contribute to the movement for climate justice, it needs to not only spread the urgent message of our planet’s troubling future, but also respect the very people who are disproportionately impacted.
“There’s an idea that the right photograph at the right time can compel understanding, empathy, and can force people to act out of the communication of vulnerability,” said Dr. Mulvin. “I wonder if that’s available to us if we’re creating synthetic images? As artists know, it is impossible to represent the scale of human pain. And so, what we resort to is often individual stories that stand in for these larger narratives of suffering and vulnerability. I do question whether a synthetic image can do that—especially given that it doesn’t have a connection to actual lived experience.”
Editor's Note: All images in this article were generated using the AI art generator Midjourney.