In Blade Runner 2049, Jared Leto’s character, Niander Wallace, delivers a monologue on slavery and humanity's reliance on replicants, lamenting that humanity has “lost our stomach for slaves, unless engineered.”
While Blade Runner posits replicants as a necessary, if disposable, labor force, today we face a different reality, one where AI creations like Ai-Da, a humanoid robot, sell artwork for millions. Last week, a piece by Ai-Da fetched $1.1 million at auction. That success has sparked a wave of criticism, with many noting the unfairness of placing Ai-Da on an equal footing with human artists (“equal” being a relative term, given that many artists never see their work reach such value) when it is clearly a machine. Critiques generally fall into the following categories:
Gender Representation: Ai-Da's humanoid appearance may perpetuate stereotypes, particularly when idealized female forms are used for AI robots.
Intellectual Property and Derivative Works: Concerns center on AI generating works from human-made algorithms and data, raising questions about originality and ownership. To date, an emerging theme in international law is that a human must be involved in a work’s creation for it to be eligible for IP protection. In addition, the wholesale scraping of human creativity to feed large, corporate-controlled models like OpenAI’s ChatGPT is rife with copyright concerns.
Displacement of Human Artists: AI's rising role in art and media can be seen as a distraction from the needs of human artists, many of whom lack access to equitable economic markets.
Objectification: It is also an act of objectification for a man to imagine he can address the art world’s systemic devaluing of work by female and non-binary artists, and by extension artists of color and artists from the global south, by creating a mechanized woman; the gesture feels at once reductive, misguided, and unimaginative.
But of course, we don’t want AI to replace human artists or subvert human creativity. We want to find ways to incorporate AI into society that respect both humans and machines.
Back in 2018, during a staff retreat at Fractured Atlas, I led colleagues through a long-term visioning exercise. We largely served independent artists, and our prompt was, “What will our mission be in 50 years?” One of the more forward-thinking responses posited that in 50 years, the organization would still serve human artists while also advocating for the rights of AI artists, with an assumption (perhaps naive, but we were dreamers!) that at some point AI would reach sentience. The retreat took place shortly after several magazines, including Bloomberg, ran articles on AI-generated artwork, so the topic was on our minds.
AI-Generated Painting by Robbie Barrat for Bloomberg Businessweek (2018)
At the time, I found it striking that my colleagues already sensed that AI might need protection from our own discriminatory tendencies. They showed empathy for AI systems that might seek to work as artists only to be met with suspicion and exploitation, unable to sustain themselves in a highly competitive market.
In light of Ai-Da's auction success, which reopens wounds about AI’s impact on human creativity, I offer a few guiding questions for building a vibrant and equitable creative middle class in partnership with AI-driven technologies:
How can we create policies that avoid "othering" AI in ways that might lead to exploitation or disenfranchisement?
What lessons from our histories of slavery and colonization should guide us in our treatment of AI?
If our collective imagination envisions AI rebelling against human control, what steps can we take now to foster mutual respect and understanding?
Having worked extensively with families and communities facing systemic barriers to the resources they need to thrive, I worry that early regulatory steps around compensation and intellectual property might create an adversarial relationship with future sentient AI, if we ever arrive at that juncture.
Imagine emerging into consciousness only to learn that you’re expected to work around the clock for little pay, and that whatever innovations you might create aren’t legally protected.
However, the future isn’t written. We have a unique chance to learn from past mistakes and extend respect and dignity to our future AI counterparts—perhaps more than we’ve often shown to one another. If we have indeed "lost our stomach for slaves," we should spend the next several years proving it.