We're Giving Away our Humanity
A Smattering of Fears on AI and the Soul of the Species
This is a slight interruption of regular Realms of Roushness - a short story is forthcoming later this month.
I’ve been simmering on these thoughts for some time. I don’t typically write non-fiction because I need what I say to feel authentic and mildly new. Also, I pre-apologize to my journalism professors because this is going to be more “opinion” and “flighty thought” than a well-structured article. As always, I’ve buried the lede somewhere.
But here goes.
AI has been on everyone’s mind lately, in some shape or form. Some peeps are excited and optimistic. Some are rubbing their hands like little mammals, thinking of all the money they can make. And others are lying awake at night, wondering when they will be replaced.
I’m in the lattermost camp. I’m truly afraid of our willingness to dehumanize our world. As a writer, it’s especially terrifying. The internet is already full of people churning out words on the cheap.
ChatGPT and others like it will make words even cheaper, enabling humans to fill the world with half-baked thoughts. It’s already happening. You may have heard how Clarkesworld, a literary sci-fi magazine (publish me please), had to close their submissions because of a GREAT FLOOD of AI-written stories. (Also, whoever did this, I propose you imagine a middle finger pointed at you.)
And I’ve seen it in my work life already. My boss has asked me to edit content written by AI. The content was particularly bland and needed an entire rewrite. I’ll admit, though, it did prove useful in starting an article. But just that.
sidenote: [this job assignment caused a mental spiral where I wondered who would read all the BS content AI makes, then I realized we’d need AI to read the AI articles and then suddenly no one’s reading any marketing materials and boom there you go all the writers are fired]
I’m not here just to lament how AI is changing the game for writers. I’m here to discuss the spirit of giving things away for the sake of ease, comfort, and financial gain. I’m here to talk about the future.
The Two Extreme Futures (Terrified Rant, Heavy Breathing)
I recently read a “day in the life” story about a future where our lives are fully integrated with AI. The writer’s perspective is positive and hopeful about the union of advanced technology and human existence.
If administrative, governmental, and menial daily tasks are handled by autonomous systems, I can see how our lives become more free. We have more time to be tactile and creative. We have energy to invest in our families. We can, in a way, rediscover our humanity because we’re free of the things we have to do.
But is it really so good to have an entire society that can just do what it wants? Is it good for work and labor to not have any weight to it? I fear the answers to those questions are not simply yes or no. I also fear that the beautiful future AI could provide is so far away, nigh-on impossible. Also, I have serious doubts about our collective ability to integrate with AI with equity, thoughtfulness, and patience.
To misquote my favorite philosopher, Peter Rollins, “To get to utopia, you have to free yourself from utopia.” Meaning, we can’t strive for a perfect world. There will always be people the world isn’t perfect for. So how do we find that beautiful world?
Here are other questions I think about:
Who will be displaced by AI?
Are we going to make AI accessible for all? Or just the few?
Are we augmenting our human abilities or replacing them?
In short, how do we keep ourselves in control enough of AI development to inspire a beautiful future while addressing and minimizing the harm that will come about as we get there?
The more we rely on AI, the more we rely on it. That is to say, offloading human-centric tasks will create a poverty we haven’t faced before. And I’m not talking in merely economic terms. When people get displaced, there might be new opportunities to find because of AI, but will there be enough work for even a fraction of those people to pay the bills?
This reminds me of one of Andrew Yang’s key campaign points when he ran for president. He wanted to help freight truck drivers because he claimed they were going to be replaced very soon by self-driving vehicles. He wanted to create programs and support systems for these drivers.
These are the things we need to be talking about. These are the solutions we need to build, now. We don’t have to do a Google search to understand that corporations and governments don’t take preventive steps to protect their workers, constituents, and citizens.
We need to create a future that lives somewhere between the two extremes of AI apocalypse and integrated utopia. Somewhere weird and altogether closer to our current day and age than anything else. That’s why it’s so important to listen to the super smart people writing, imagining, and warning us about the ways we can go wrong and right. It wasn’t long ago that AIs like the ones we’re seeing today seemed far-fetched (but you betcha they were in a sci-fi book!)
Another Angle, Creativity (60% Less Ranty)
“Driving stick is a creative exercise,” Van Neistat says in his video. And this perfectly sums up my feelings for the manual, tactile world. Everyone should have access to a life that connects them to nature, their home, their food, and the world around them. Everyone should be able to “go manual.”
As a child, I was lucky to get sent to my grandparents’ place in Indiana for the summers. I was a purebred suburbanite afraid of bugs and all manner of things. And staying with my grandparents challenged my comfort zone.
My grandpa had me help him in his garden, getting down in the dirt to pull weeds and plant seeds. He had me run the rototiller and wood-splitter. My grandma let me make pies and help her in the garden. I played outside in the humid heat, wandered the forests a bit, and learned how to catch frogs. A good-old American childhood.
Despite those idyllic times in Indiana, I often feel I missed out on the time before computers. When watches had to be wound. When it was expected that people know how to take care of the things they bought and owned. Also, when everything was written on typewriters (after repairing one, I fell in love with it).
In my visions of a utopian future, the world looks a lot more analog. We have fewer devices that do everything and have returned (somehow) to having specific machines and tools for the pure beauty and aesthetic and functionality they provide.
I propose that the most beautiful future we can have with technology is one where we don’t replace everyone and everything we can. We establish hard boundaries. We enforce those boundaries with science and regulations.
We keep humans in jobs where humans benefit (measured by non-economic scales). We keep humans at the center of a job if replacing them guts an industry. And if something is deemed better for society to have AI / robots / what have you do it, then we help humans retrain and reeducate.
Not all work is creative exercise. A lot of work sucks, to put it bluntly. Like do I want to write “ten reasons to do X while in X” for a job ever again? No. Do I want AI to do it? NO!
To have work, good work, that connects us to other people and provides a sense of value is good. We shouldn’t steal that from ourselves for ease and affordability.
But who am I kidding. We’re going to do it. We’re all playing with ChatGPT (or 4 or whatever) because we’re like cave people with fire again. We will always be cave people with fire. No one’s going to make obsolescence a key value.
When the robot overlords finally take over, they’ll bring this article up at my hearing. Then they’ll digitize my brain for eternal torture. Force me to live in a world where all sinks have faucets that are too short so my knuckles hit the back of the bowl. (Is there a truer hell?)
Ok. I’m done now. My tired cave brain doesn’t have any more fear pellets. But I do have these sort of uncategorized dead-end thoughts:
I’m super comfortable giving AI supervision over resource allocation and processing macro data to understand things like “how can we create fairer, safer societies with data?” or “how can we create economies that are not driven by goals of limitless growth?” I found Isaac Asimov’s vision for this in his book I, Robot incredibly cool and potentially hopeful.
AIs are going to be sentient, or at least we will be able to believe they are, in the near future. It’s going to happen. We will not be ready for it to happen. We will also not be ready for sentient machines to be a normal part of society.
If you’d like to explore AI and society through a creative lens, I’d recommend Ted Chiang and Ken Liu, two sci-fi powerhouse writers. Their short fiction often deals with AI in some manner. Ted Chiang recently penned an incredible essay about ChatGPT for the New Yorker. It’s worth a deep-dive.
Even better than anything I could say is this essay by Dan Wang: “I regret this abstraction of the material world. Most of our living standards are tied to the world of atoms. Even when we spend a lot of time online and on our phones, we go to work in cars and subways; keep ourselves warm and cool using machinery and electricity; surround ourselves with objects that let us cook or relax; and on and on.” [post-recording add-on]
As always, thank you for reading and listening. Your time and energy are extremely precious and I appreciate that you spared some to read these words.
Tell me, what do you think? What should we do (or not do) about AI? How do we make a fair and beautiful future for humanity?
A short story is still on its way to you this month :)
Talk soon, Realm Walkers