Cyborg Shamanism and the Case for Elemental AI

Photograph by Tyrone Williams / Kintzing
Words by Adah Parris
As a child, injustice always made me cry. It didn’t have to be my story; I had a deep visceral “knowing” when something was wrong, unjust, or just unfair. That thread runs through everything that I am today. It helps me to recognize patterns in seemingly polarizing realms, and informs my philosophy, Cyborg Shamanism™: a framework I developed that merges ancient and ancestral wisdom with modern digital technologies, and advocates for a regenerative and inclusive future for both humans and the more-than-human world.
So when, on May 13, 2024, OpenAI started rolling out GPT-4o—an updated model of its artificial intelligence platform that understands and generates natural language—I was cautiously curious. Its previous incarnation, GPT-4, had been rolled out less than a year earlier and was widely considered a huge technological step forward, yet the highly publicized issues around bias and fairness, ethics, and privacy remained largely unresolved.
I view AI much like water. It’s everywhere. If we poison a river upstream, everything downstream will be contaminated to some degree. The same goes for AI. GPT-4o offers faster processing speeds and enhanced features across text, voice, and vision. But for me and others, several questions remain unanswered, including: Why would a for-profit company update and release a knowingly flawed platform into the world for free?
We could, of course, choose to believe that this is about democratizing “the future of interaction between ourselves and the machines,” a grand narrative put to us by OpenAI chief technology officer Mira Murati. But the company’s self-proclaimed benevolence has been undermined by its many controversies, some of which have made headlines in recent weeks. Soon after its launch, GPT-4o was goaded into producing racist content by Radio-Canada. And Hollywood actor Scarlett Johansson slammed the company following its debut of an audible version of ChatGPT, named Sky, whose voice sounded strikingly similar to her own. Amid the unwanted media attention, OpenAI eventually removed the voice from its platform, though the company has denied the accusation.
There are also urgent environmental concerns that need to be addressed. AI data centers have been found to pollute the environment at a faster rate than other online activity, with some AI-powered searches requiring up to 10 times more power than a traditional Google search.
Over the past year, my own work has navigated the rapid ebbs and flows of technological development. Informed by my ancestry in Sub-Saharan Africa, Southern India, the Caribbean, and South America—as well as by other Indigenous traditions that hold a deep, interconnected relationship with the Earth and all of its inhabitants—it is driven by the question: “What kind of ancestor do you want to be?”
As the development of AI continues to accelerate, it is imperative we look to traditional knowledge for guidance. This means incorporating elemental wisdom—Ether (ethical pause), Air (flow of ideas), Fire (transformation), Water (fluidity and self-reflection), and Earth (grounding and kinship)—in every aspect of how AI is designed, developed, and distributed.
I am not alone in the mission to root the development of artificial intelligence in elemental wisdom. Tyson Yunkaporta, an Aboriginal man from the Apalech Clan of Far North Queensland, Australia, is a father, partner, author, scholar, and founder of the Indigenous Knowledge Systems (IKS) Lab at Deakin University in Melbourne, Australia. Yunkaporta has long drawn on traditional wisdom to challenge the status quo, especially at the intersection of artificial and ancestral intelligence.
“As Indigenous people in this space of the intersection of ancestral and artificial intelligence, [it] is really tricky,” says Yunkaporta. “Because we’re dealing with different codes, kinship relationships, and our words don’t mean the same thing.”
For Yunkaporta, the problems caused by artificial intelligence today are rooted in the egoistic notion that these technologies are new. They are not. For millennia, humans—and, specifically, Indigenous communities—have created and used intelligent machines or technologies to process mathematical equations and solve problems far beyond our cerebral capabilities. The Ishango Bone, discovered in the Democratic Republic of the Congo and dating back to around 20,000 BCE, is believed to have been used for arithmetic calculations, while the Quipu, an ancient Incan system of knotted strings, enabled record-keeping and complex calculations. Both of these ancient “technologies” demonstrate advanced mathematical and data-management skills long before the advent of artificial intelligence.
Today, some of these AI companies are generating billions of dollars against a backdrop of under-regulation and a lack of ethical standards, accountability, privacy, security, and fact-checking. It is, for example, academically accepted that claims of racial variation in IQ scores are rooted in prejudice. Intelligence Quotient (IQ) tests were biased and culturally specific, favoring the lived experiences of white, middle-class people. They were nevertheless presented as scientific fact, when in reality they were nothing more than pseudoscience. Today, LLMs that prioritize speed over accuracy inadvertently validate and spread such pseudoscience instead of intellectually robust and proven frameworks such as critical race theory.
This is where elemental wisdom is key: we must pause (Ether), reflect on our society’s limitations (Water), welcome the exchange of new ideas through honest dialogue (Air), and transform artificial intelligence for a more just world (Fire).
One way to do this, according to Yunkaporta’s Indigenous protocols, is calling in and calling out. “Calling in isn’t about passively inviting others into your space or allowing them to bring in disruptive narratives. It’s about creating a space for genuine dialogue and riding the waves together in good faith. Calling in involves setting boundaries and ensuring that the space isn’t filled with harmful narratives hidden beneath the surface,” says Yunkaporta. This involves not only calling out the bad behavior, but also addressing it constructively. Meta, for instance, was recently called out for AI-generated content pushing false narratives across its platforms.
For Yunkaporta, the dangers of AI are rooted in its failure to understand the concept of home—which is why we must ground it in kinship (Earth). “Home is at the center of what it is to be human,” he says, referring to the interconnected and interdependent relationships between all aspects of life. From an Aboriginal perspective, the algorithm’s inability to understand our kinship with all ecosystems makes it homeless—incapable, in its current guise, of being anything other than extractive, because that is all it is programmed to understand.
To further show the limitations of AI in its current form, I created a thought experiment for a recent artificial intelligence event: Gaia, an AI, is tasked with transitioning London to clean energy sources like solar and wind power. However, Gaia lacks an understanding of “home”: the interconnected relationships and cultural significance that make a place meaningful. Gaia might prioritize efficient and immediate power needs, opting for quick fixes like temporary energy sources that disrupt local communities or ecosystems. In doing so, Gaia overlooks the long-term impacts on residents’ sense of place, community well-being, and environmental health, demonstrating its failure to grasp the deeper, interconnected concept of “home.”
“Everything is entangled with that idea of home,” says Yunkaporta. “An AI will not be able to replicate human communication or thought without not only understanding what we think home is, or what home is for real beings in the world, flesh and blood beings, but they would also have to have an authentic identity that was situated in a home.”
As we delve into the implications of AI, it becomes clear that—without the transformative power of traditional knowledge and elemental wisdom—today’s systems risk reinforcing hierarchies that benefit those who uphold dominant narratives. The solution, Yunkaporta says, lies in the nature of the questions that need to be asked of those who are closer to the axis of power. “Look at everything, all your relationships and your data analysis. And your design and everything,” he says. The lessons that nature—and its elements—can teach us must be central to the new worlds we are building.
The elemental approach to asking deep questions exemplifies how ancestral and Indigenous protocols can guide us in creating a more holistic, equitable, sustainable, and possibly even regenerative world. It emphasizes interconnectedness and the redistribution of power through collaborative design, integrating Indigenous kinship protocols. Ether, Air, Fire, Water, and Earth become metaphors to foster holistic questioning, ensuring technological innovations respect and enhance our relationships with all life forms.
Shaped by these elemental principles, Yunkaporta’s insights suggest that we should look towards Queer, Non-binary, and Two-Spirit communities to lead emerging systems, innovations, and technologies. Our innate ability to exist beyond binary spaces makes us ideal candidates to empathetically and compassionately transcend the status quo. By embracing these Indigenous and ancestral approaches, we can create a more equitable and regenerative future; one in which AI can challenge us to consider more deeply, and with greater compassion, “What kind of ancestor do you want to be?”