Reclaiming Techno-Optimism
Last Updated: 2025-01-22 00:00:00 -0600
It’s sort of the vogue right now to be a techno-pessimist, and to be honest, the position has appeal. It’s hard to look at what is currently being lauded as grand technical achievement, now that we’re nearly through the first quarter of the 21st Century. It’s even harder to satirize the idea - I came up with the phrase “AI-driven air freshener for gamers’ basements” and then found out that GameScent exists. So much of the technology of the contemporary era comes to us like Jurassic Park - we concern ourselves with Cans rather than Shoulds. We get everything from the dangerous (radiology-interpreting “AI” solutions that misidentify cancers) to the pointless (AI-driven lapel pins that give no functionality the smart device you already carry doesn’t) to the dangerously pointless (SpaceX catching its booster stage with the launchpad rather than using a separate landing pad). But I don’t think it would come as a galloping shock to anyone to say that I - or most engineers I know - came into the tech industry motivated by techno-optimism. By and large, the industry is full of people who think they’re helping. So how did it get this way, and how can we reverse the trend?
The Case Against Techno-Optimism
There are a lot of good reasons that the contemporary attitude toward a lot of technical “progress” skews negative, but a good reduction of the argument is available to us: most technical progress right now isn’t progress. You wouldn’t know that from the sales pitch, of course, but that’s to be expected. Nobody who invents a stevedore-bankrupter machine is going to present it to longshoremen as that kind of device. Shipping companies and port authorities get excited about the “labour saving” automations they are deploying into their harbours and promote their safety benefits to the relevant unions, but the unions know that you can’t get hurt if you’re not working, right? So too with Generative AI: the Creative Professional Bankrupter Machine. Even laying aside the not-at-all-ignorable ethical issues with the training of a lot of these models, the fact remains that when an engineering department sees a round of layoffs and is then told that the labour savings from their new Copilot licenses are all that saved the rest of them, they know how to read the writing on the wall. Novelists who ply their craft out of genuine interest are buried under an influx of GAN-generated penny dreadfuls that choke the submission queues of traditional publishers and water down the more innovative funding streams of programs like Kindle Unlimited until there’s essentially no point in the profession anymore.
And that’s just the AI component of it. I could do a book on examples of this phenomenon just over the last 5 years or so. Name an industry and there’s an example to be found of a technological “solution” addressing a problem nobody has in ways nobody wants.
So that’s techno-pessimism: everything sucks and nothing invented since (name the release date of the poster’s favourite games console) has been worthy of attention or investment.
The Problem Isn’t the Technology, Necessarily
Most of the technologies I just described suck, but they suck for a reason: money. Generative AI ranks high on the list at the moment because it’s in vogue, but also because it’s especially egregious: people who hold a form of specialized knowledge and who performed labour using that knowledge have had the product of that labour gobbled up en masse without compensation, and a mechanical turk has been constructed to churn out ambiguously-legal replicas of their labour. Universally, by proponents, this is framed as one of two things:
- The “democratization” of the capability; or
- A general cost savings.
Usually this is framed not as “we can employ fewer artists/coders/writers” but as “this will enable 10 engineers to perform the development work of 15”. In other words, if you give Johnny’s House of AI $30k/year in licensing fees, you can employ $300k/year fewer engineers.
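The arithmetic of that pitch is worth making explicit. A quick sketch using the figures above - the per-engineer cost is my own assumed round number, since the post only gives the totals:

```python
# Illustrative only: headcount and licensing figures come from the
# hypothetical pitch in the text; the $60k average cost per engineer
# is an assumption chosen to make the totals line up.
engineers_before = 15
engineers_after = 10
avg_engineer_cost = 60_000   # assumed annual cost per engineer (USD)
license_cost = 30_000        # "Johnny's House of AI" annual licensing fee

payroll_savings = (engineers_before - engineers_after) * avg_engineer_cost
net_savings = payroll_savings - license_cost
print(net_savings)  # 270000
```

Which is exactly why the framing is always about productivity per engineer rather than headcount: the number the buyer actually cares about is that bottom line.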
Sidebar: Democratizing Capability
Personally I always found the “democratizing” argument spurious, for a number of reasons. Firstly, these generative AIs are still pretty bad at whatever it is they do. You gotta remember, my code is included in the training data for Copilot, so it must spit out some pretty low-quality python. And everyone knows that the Fairy Rule applies to art coming out of a GAN: make sure it has the right number of eyes, ears, fingers, teeth, and so on. But that’s also sort of the problem: everyone supposes it’ll get better, and personally I see no reason to doubt it.
The problem is that you shouldn’t be trying to democratize the ability to write code - you should be trying to democratize the ability to learn to write code well. The same goes for art, music production, and so on. The reason collective human enterprise works is because people are able to develop expertise in things that interest them and push the boundaries of their art forms or knowledge bases. I don’t want to go too far into why that’s not necessarily going to be possible with AI - the “suchness” of AI is something I could do a few thousand words on and still just scratch the surface of the problem.
Go back to the example of AI-generated code. I, as an experienced software engineer, could be reasonably comfortable using a GAN to spit out some python or C. These are languages I know well, and I could find the pitfalls in certain approaches the AI could take. I’d be capable of proofing the code - effectively acting like a lead engineer over an AI “junior”. There’s just one problem: writing the code is itself the fun part. If you reduce the job to “human unit tester”, well… you know, we’ve had test-driven development as a methodology for years and we’re still generating some hilariously embarrassing bugs. That problem only gets worse the more and more we rely on code that isn’t being reviewed by experts. And if you need the expertise to do the review, well… what have you democratized, exactly?
Back to the Money
A lot of the best examples of technology we’re embarrassed by come down to “done for profitability”. If I were writing this article four years ago, instead of citing the explosive rise of generative AI and its displacing effect on a whole lot of skilled labour, I’d be talking about how your racist uncle and paranoid grandmother were right, and how Big Data is out there knowing you better than you know yourself, for the express purpose of separating you from as much free will as possible, especially where your politics and your wallet are concerned. Ten years before that, it would have been Planned Obsolescence, which is actually the subject of another upcoming article.
We treat our present economic model like it’s the only actual way to organize resources in a human society, in part because it’s the economic model used, to one degree or another, by basically every nation on earth. We also treat nations as a thing that must necessarily exist. These are both very old and intractable problems that certainly aren’t going to be solved in a single blog post hosted on a random nginx container in a corner of the internet where very few people go.
But the fact of the matter remains this: it’s the money tied to a technology that determines the amount of harm it can do, because it helps to determine the scale. I owned bitcoin back in like 2009, when the total value of the largest exchanges was about the price of an office pizza party. At the time, nobody was interested in considering the very real problems with the underpinning technology of a digital currency being a blockchain. Computational efficiency wasn’t the problem we were solving for. So too with the way we got to Generative AI. Nobody working on the problem - nobody getting paid well to work on the problem, anyway - was thinking of trying to solve the optimist’s problem AI would address.
An adage of a sort has arisen in the wake of generative AI: “I wanted automation to check the email and read the TPS reports so that I had more time for writing and movies, not the other way around”.
Money, really, is the root cause of the harm, rather than the technology. Labour saving was always meant to be the point of technology, right from the get-go: enabling humans to reach their higher potential. The problem is, in the economic system as currently rendered, no resources are allocated to those who do no work (unless they happen to be already wealthy). I don’t want to spend all day right-sizing Kubernetes deployments! Well-crafted automation could totally safely handle that responsibility. But for damn sure, I’d rather right-size Kube deployments all day and be able to eat come supper time, than the alternative.
Of course, unemployment from automation is just one narrow subset of the problems techno-pessimists have with modern technologies. While money is at the root of all of these problems, there’s one other I want to mention that’s responsible for real, direct harms: the question of scale.
It’s About the Environment, Stupid.
There’s a deeper problem here that I find to be the sort of “true final boss” of the contemporary fascination with LLMs and Big Data, and it goes something like this:
- A human artisan spends three weeks preparing a single page which contains a well-lettered and highly ornamented English translation of the entire Heart of Perfect Wisdom Sutra (the Prajna Paramita Sutra). At the end of the three weeks, a physical object exists. The human has consumed no more water, electricity, or sundry resources than they otherwise would have, except perhaps some ink and paper. Training the human artisan effectively cost no more or less, in either economic or material terms, than any other lifestyle the human could have undertaken.
- An AI model is tasked to interpret the same text prompt (generate an image of the Prajna Paramita Sutra, illuminated in the fashion of a medieval European manuscript). Training the model previously cost on the order of $10 million USD and may have crossed ethical boundaries with the access and usage of data (depending on how the training data was gathered; the more ethically it was gathered, the necessarily higher the training cost will be). The energy generation costs (the carbon footprint and water consumption for which must be amortized across every request made to the model) are enormous. The ongoing energy and water consumption of the data center the model is running in are also enormous.
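That amortization argument can be made concrete with a back-of-the-envelope sketch. Every number here is a hypothetical placeholder of my own, not a measurement of any real model:

```python
# Back-of-the-envelope amortization of a one-time training footprint over
# the requests a model serves. All figures are assumed placeholders.
training_energy_kwh = 1_000_000     # assumed one-time training energy
inference_energy_kwh = 0.003        # assumed energy per image request
lifetime_requests = 500_000_000     # assumed requests over the model's life

# The one-time training footprint shrinks per request as usage grows...
amortized_training_kwh = training_energy_kwh / lifetime_requests

# ...but the per-request inference energy never amortizes away, so the
# total footprint still scales linearly with demand.
total_energy_kwh = training_energy_kwh + inference_energy_kwh * lifetime_requests

print(amortized_training_kwh)       # tiny per request
print(round(total_energy_kwh))      # enormous in aggregate
```

The sketch is the whole scale problem in miniature: divided across enough requests, the training cost looks negligible per use, while the aggregate consumption keeps growing with every user you add.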
That doesn’t necessarily mean there’s never a good use for an AI model. For one thing, the training of the model is the most energy-intensive operation the model is ever likely to perform. I should know: we’ve been using machine learning models at my day job for years, even before the hype train came along, and the difference between the resources needed to train a new version of our model and the resources needed to deploy the user-accessible container that actually uses it to solve problems is shocking.
The barely-a-coaster-toaster (do people still use that term now that laptops don’t have optical drives?) laptop under my desk that hosts this website and all my smaller servicelets could run a trained AI model.
As always, the issues with this specific technology are the problems of scale. A single artist who has trained a GAN on their own work and runs that model on their own laptop as a drafting and ideation tool isn’t doing anything much more wrong than any other kind of digital artist. A million daily users hammering Stable Diffusion with “show me a rabbit eating from a Strawberry Tree” are contributing to the burndown of the world. So too with cars, space flight, regular flight, the whole shebang. If you solved the energy problems, the technology problems wouldn’t be there anymore. Just like the broader automation issue: if you solve the “unemployed people can’t buy things or pay for their housing” problem, you solve the loudest argument against automation.
You think I want to sit here all day micromanaging someone else’s kubernetes cluster? Buddy, I don’t even want to have to micromanage my own kubernetes cluster.
The Problem Was Never the Technology
By and large, techno-pessimists like to point at a technology and say “technology bad”. I do it too; we all do it. I think it’s a combined symptom of change fatigue and aging. There’s a reason your mother-in-law suddenly can’t compute even though she had a clerical job for 20 years that involved daily use of a computer. The paradigm of computing is constantly being pushed in the direction of novelty.
However, if you take just a few minutes to dig a little further into the problem with any given technology, you’ll get a Scooby-Doo effect where the issue that appeared to be a Scary Technology Problem is invariably the mundane evil of a Boring Social Problem. It’s easy to rail against GANs, and easier still to vote with your feet: I won’t use tools like Stable Diffusion, and I write a few emails a week to anyone who seems relevant at work about how Copilot is eventually going to lead to either a lasting outage or a security breach.
It’s a lot harder to actually look for - and then act against - the underlying social problems. I don’t have a good solution in my back pocket for “the main reason to automate artistic endeavour is to finally force everyone into the salaried workforce”. Or, rather, I do (it’s Universal Basic Income) but I don’t have a solution to “evidence has never mattered in politics and people will vote with their guts, and their guts say UBI won’t work in spite of all available studies pointing to the exact opposite”. I don’t have a good solution to “the exact same people who told us you wouldn’t download a car are downloading all these people’s cars to drive them out of business”. The asymmetric power balance of capitalism is a feature rather than a bug. I don’t have a solution to that.
So it’s way easier to rail on about ethics and feign outrage that the AI tool is being used at all, rather than address the underlying problems that made the AI tool economical in the first place.
That’s Not Very Optimistic
It’s not, but it’s more… socio-pessimistic than techno-pessimistic. There is a promise to be found in technology. The whole concept of technology is a nod to that promise. Harnessing fire allowed some ancient pre-human species access to more bioavailable proteins and fewer dietary infections. Human cerebral capacity and complexity increased. Homo sapiens arose.
Then some goofy bastard figured out that those fun pigments left marks on surfaces that stayed for a really long time. Then some other, goofier bastard figured out that you could treat your drawings abstractly and use them to convey concrete ideas, and writing was born. Someone else figured out you could put part of the fruits and the vegetables back in the ground - parts we couldn’t really eat all that good anyway - and in a season or two you’d have more of that food again in a predictable location. Wolves domesticated us into treating them like dogs.
Millennia later, Wizards are etching strange patterns onto sheets of poisonous glass and conducting bottled lightning through them, and that makes the lights on your own panel of poisoned glass dance in ways that are pleasing to your human brain. In a lot of ways, this is a dangerous development. Tailored advertising is a step short of brainwashing, and a lot of us have been using the ad-driven portion of the internet for so long that the ad-sculpting is extremely precise.
But the same technology that lets bad men exploit our base emotions like that lets good folks tell beautiful stories that have never been told this way before. Yes, the discovery of nuclear fission gave us the atomic bomb. It also gave us nuclear power - the promise of a reliable and stable energy source that won’t slowly steam us all to death. And nuclear medicine, which has made conditions that were barely diagnosable a century ago treatable, and now routinely survivable.
The problem is never the technology, or nearly never. It’s invariably the people wielding the technology. And while those problems are hard to solve, they’re not impossible. In some sense, that’s the true work of the artist. The part that could never be fully automated away; the public social conscience.
If you wanted to show your support financially for Arcana Labs projects like PETI, but don’t need a virtual pet development kit, your best avenue is through the pathways detailed on our support page.