It continues to be interesting times on the artificial intelligence front, by turns fascinating, amusing, and dispiriting. Though I remain skeptical of “the singularity is near!” hype, from either the eager or the fearful, I don’t at all feel like I know where this stuff will end up.
I do, however, feel compelled to make one point, an extension of my previous discussion on the topic (Why chatbots are roleplaying as ‘rogue AI’ from bad sci-fi): the generative AIs we are seeing are not the hyper-accurate savants science fiction predicted.
I read and watched a lot of sci-fi about AI while I was a researcher on the AI Policy Futures Project, and mostly AIs (and robots and androids) are depicted as being, at their core, calculators and reference texts. Like less sentient computers, fictional AIs have access to a great deal of information and can do complex math with ease. You ask them the odds of successfully navigating an asteroid field and they’ll tell you “approximately 3,720 to one.” They perform their duties quickly and efficiently, and they rarely, when functioning normally, deliver information that is flat wrong. It is from that baseline of cold, logical machine precision that they then try to develop personalities, manners, and an ability to “compute” human concepts like humor and love.
Data from Star Trek: The Next Generation is probably the best example of this archetype. Data is naive when it comes to many aspects of human life, and he struggles with some forms of self-expression. When it comes to book smarts, however, he’s always the best resource in the room. The crew relies on him much the way they rely on the ship’s computer.
The AIs we are currently building are not like this. The GPTs of the world don’t “search their databases” the way we imagine Data does when he stands slightly rigid and gazes into space. They are their databases. Instead of reaching into those databases to pluck out an obscure but relevant fact, they crunch their databases together to generate texts (answers) that feel likely to follow prompt texts (questions). Sometimes those answers contain the fact you were seeking, sometimes not.
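To make that distinction concrete, here is a toy sketch (in Python, and nothing remotely like a real transformer; the “facts” and word tables are invented purely for illustration): retrieval either finds the stored fact or admits it doesn’t know, while generation just keeps appending whichever word seems plausible next, whether or not the result is true.

```python
import random

# Toy illustration only: a hand-built "fact database" and a hand-built table of
# likely next words, standing in (very loosely) for retrieval vs. a language model.

FACTS = {
    "odds of navigating an asteroid field": "approximately 3,720 to 1",
}

def look_up(question: str) -> str:
    """Retrieval: return the stored fact, or admit there isn't one."""
    return FACTS.get(question, "I don't know.")

NEXT_WORDS = {
    "the": ["odds", "answer"],
    "odds": ["are"],
    "are": ["approximately", "roughly"],
    "approximately": ["3,720", "5,000"],
    "roughly": ["3,720", "5,000"],
}

def generate(prompt: str, max_words: int = 6) -> str:
    """Generation: keep appending a plausible next word, true or not."""
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORDS.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(look_up("odds of surviving a supernova"))  # -> "I don't know."
print(generate("the odds"))                      # -> fluent and confident, possibly wrong
```

The lookup function can fail gracefully; the generator, by construction, cannot. It will always hand you something that reads like an answer.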
As a result, generative large language models (apparently now called golems) will regularly “hallucinate.” This is the established euphemism for delivering answers that look correct but are actually pure fantasy: made-up facts, quotes, scientific papers, legal cases, and so on. They will often even confidently defend their responses, gaslighting users who call them out for being wrong. They have been trained (or groomed, to use the parlance of one of my stories) to answer with a veneer of relaxed professionalism, but that human affect only makes it harder to spot their hallucinations.
This phenomenon is well known, but a certain class of early adopter midwit has nonetheless decided to use ChatGPT the way characters in Star Trek use Data or the ship’s computer: simply asking it questions and trusting, blindly, in the answers. Thus we have lawyers citing imaginary caselaw, marketers calling people to confirm fabricated quotes, and students referencing nonexistent sources in their plagiarized papers. This has been thoroughly and entertainingly covered by other writers, including one who describes chatbots as generating “fact-like vibe objects.”
This situation is quite the reverse of what science fiction (generally, with exceptions) promised us. It’s a cliché that robots in sci-fi will say things like “Beep Love Does Not Compute Beep.” ChatGPT will never say love doesn’t compute. ChatGPT can generate tons of eloquent-sounding answers to prompts about love, loss, and the meaning of life. On the other hand, if you ask ChatGPT to write you an essay about some niche scientific or legal topic, the papers it cites might be real, but they might not.
TNG would be a very different show if Data were actually a dumbass, or a compulsive liar, or given to unpredictable flights of fancy. His quest for humanity would not be so charming. He would not function as a vital and effective member of the Enterprise crew. In fact, he might never have left his creator’s lab, as it’s hard to see how such an unreliable and untrustworthy being could be accepted into Federation society.
But the Data archetype is deeply embedded in our collective imaginaries around AI. So is the archetype of less benevolent AIs like Skynet or the antagonist of that 2004 Will Smith I, Robot movie, whose rationality and inhumanity make them turn on their irrational creators. As I’ve said before, sci-fi writers made great metaphorical use of AI through the whole 20th century. But in doing so, the genre set up expectations that are now proving counterproductive.
Sci-fi promised us that AIs would be emotionally stunted but accurate, even all-knowing, and the task would be reconciling their cold calculus to humanity’s social nuance and instinctual complexity. The real task is figuring out how best to manage this technology that is powerful but—for now, and possibly, at its core, forever—fundamentally untrustworthy.
Mostly I agree with the brilliant Ted Chiang, who argues that AI is going to be an excuse for corporations and capital to cut costs (workers) and squeeze labor, temporarily bumping profits while enshittifying everything it touches.
I’ve already gotten a taste of this in our recent search for new housing back in Arizona. Twice this week I reached out to apartment complexes and got replies within moments, despite it being 2 AM Mountain Time. I was surprised until I saw “Replies may be AI or human generated” lurking in the email signature.
Now, I certainly like a quick response, and these were more than “thanks for your interest” autoreplies. They included a price and an invitation to schedule a tour. The problem was, the price it offered me was a couple hundred bucks more than the units I’d been looking at. When I wrote back to ask about the cheaper units I’d seen listed, all it could do was repeat the first price. It was similarly unable to understand that I would not be able to take a tour because I was out of the country, but still wanted information on how to proceed. I could make no conversational headway. Eventually I just told it to “put your human on when they wake up.”
I can’t imagine this is ideal for either party. After all, a human could have just answered a couple questions and then possibly wheedled an application fee out of me. But if the cost of employing a human leasing assistant is more than the revenue the company will lose from having a shitty customer service experience…
I told my friend Jay about the experience and he asked how it felt, interacting with an account that was half-bot. Bad! It’s an extension of the feeling I get trying to fill out rental applications that, despite being online, aren’t set up to handle international phone numbers or any of the other nuances of someone applying from abroad. It’s painful enough trying to conduct this search remotely without having to deal with these glorified web forms that, because they pretend to be real people, I’m semi-obligated to be polite to.
Housing—and pretty much every sector of human life—is full of peculiar but benign circumstances that need just a bit of human parsing. So much of what works about our society is built on a combination of finesse and trust. At the moment, AIs can’t be counted on for either.
Catch Me at Eurocon 2023 and in Stockholm!
This coming week and weekend, C and I will be attending the SFF convention Eurocon, which this year is being held (conveniently for us) here in Sweden, in Uppsala. We haven’t had the chance to attend many cons before—the pandemic swooped in right as we were planning to make it a priority—so we are pretty excited to be among our people. If you happen to be attending Eurocon this year, please do say hi!
I will be participating in four panels: “Climate Fiction,” “AI, robots, and identity,” “Future Power - Energy in SF,” and “Working with the Future.” I will also be moderating a panel titled “Hopepunk as a speculative subgenre.” Quite a full dance card!
After Eurocon I will be staying down south for a couple days to give a talk at the Stockholm Resilience Center titled “Solarpunk: Planting the Seeds of Countercultural Sustainability.” It’s going to be a busy couple weeks!
Reviews, Press Clips, Etc.
In her monthly short fiction roundup, the wonderful and thoughtful writer Maria Haskins had very nice things to say about my story “Any Percent.”
What an amazing roller coaster ride of a story this is… I've read a lot of great stories that deal with games and virtual reality and how they can intersect and affect our lives, and Hudson's story takes some interesting and unique twists and turns along the way, spinning a multi-faceted and deeply thought-provoking tale.
“Any Percent” also got a nice write-up in this roundup by Tar Vol.
Our Shared Storm was featured in this very robust list of solarpunk books.
Art Tour: The World Is Ours
Great graffiti we spotted on a recent visit to Malmö. Love a good garf.
Material Reality: Midnight Sun
Before we came to Sweden, we were wary of the long nights and short days in winter, when the sun only shows its face for an hour or less. I’d visited Finland very close to the winter solstice some years ago, so I’d gotten a glimpse of that. It hadn’t bothered me then, but I’d been surrounded by the hygge comfort of a family friend’s lovely home. We worried that, even arriving in February, over the course of weeks the lack of sun would induce SAD or some other brand of emotional or physical downturn. So C and I took the danger of the darkness seriously, and even considered bringing one of those sun lamps.
The first few weeks here weren’t easy, but the long nights didn’t bother me. In fact it was kind of nice to fall asleep at 9pm. Turns out I’m a creature who thrives in darkness.
So maybe it shouldn’t have been a surprise to discover that it’s actually the long days of mid-summer that have gotten to me. It’s 11pm as I write this, and the sun has just kinda sorta gone down, leaving the sky a thin, gray-white that never really fades to black. In a few hours, around 3am, it will be as bright out as a summertime 8am looks in Arizona.
C and I have both had off-and-on insomnia for the last few weeks. For a while I was suffering strange midday burnouts, with a woozy discomfort bubbling up behind my eyes. I understand much better the sunny, dislocated horror of the movie Midsommar (2019).
Getting a sleep mask helped a lot, but I still have trouble feeling tired before midnight—long gone are those February nights of feeling dragged to dream at 9pm. And I still often wake up at 3am feeling raring to go. If I don’t make myself go back to sleep right away, it’s easy to lose a couple hours to restlessness. We’ve also had to get ear plugs, because the midnight sun keeps the birds chirping all night, not to mention the people.
Many of the Swedes I talk to here love the summer, and love the long days, and feel privileged to get to experience the midnight sun. And I do feel a bit giddy to see, in a few weeks, just how long the day can get up here. Like the cold, I imagine it’s something one can get used to. Or maybe it’s like the Arizona summer heat—a novelty that gets more grueling every year. Probably it depends on the person.
My big takeaway is how hard it is to keep track of time when the sun refuses to set. The hours just glide past, and one must depend on clocks and gurgling stomachs to remember to eat, to head home from the office, to move from the couch to the bed. We are such creatures of our environment, no different than animals, or flowers turning to face the sun.
If you like this newsletter, consider subscribing or checking out my recent climate fiction novel Our Shared Storm, which Publishers Weekly called “deeply affecting” and “a thoughtful, rigorous exploration of climate action.”