Volume 5, Issue 81: A Hologram for the King
“The greatest use of a human was to be useful. Not to consume, not to watch, but to do something for someone else that improved their life, even for a few minutes.”
The book is out. People tend to like it, I think. I hope you have bought your copy. If you have not, there is no time like the present: Buy now. If you have already bought the book, you are encouraged to leave it a review on Goodreads or Amazon, or both. It helps.
Over my travels of the last month, I have had two separate moments in which I have caught myself going on some sort of rant—to perfectly nice people who just happened to mention offhandedly that they occasionally use ChatGPT—about the evils of AI. I was halfway through the rants before I even realized they were happening; they just sort of came out of me, like a reflex, or maybe a burp. Both were nice, thoughtful people, and they were kinder and more polite in response to my rants than I probably deserved. But their overarching attitude, if I’m being honest, was one of bemused pity, like they were watching a guy who still insists on seeing all his movies on laserdisc. A guy who was about to be left behind.
Now, as I’ve gotten older, I have become more and more comfortable with not embracing what the current trends may be, perfectly happy to simply enjoy the things that I enjoy. I’m always the last person to see the new TV show, I still use a Yahoo account, I type everything in Microsoft Word, I keep a handy scorebook at baseball games. In many ways, I’ve been a 50-year-old man for a couple of decades now. And I do still believe, as I wrote here just a few weeks ago, that AI is corrosive piracy, as well as an abdication of what it means to be a real, live human being. I do not use it, and I am annoyed when it is forced upon me, and if that leaves me behind, well, so be it: I have survived nearly 50 years without requiring a computer simulation to organize my calendar or to tell me I’m handsome, so I figure I can muddle through however many years I have left without one as well.
I am, however, not interested in being a scold. I do not want to be the drag at the dinner party, the guy a perpetual hair-trigger away from a 20-minute lecture just because you asked your phone for a good place to grab coffee in the morning. AI, whether I like it or not, is not going away, and there are more and more people—people I respect, people whose company I enjoy—integrating it into their lives. So I think it’s important, if just for my own sanity, to lay down my personal 10 Rules of AI. When it can work, when it can’t, when it shouldn’t—what I find to be acceptable usages, and what I find unethical and potentially destructive. I’m writing this all down to clarify my own thinking and, frankly, so maybe I’ll stop being so annoying when someone brings up AI in conversation. Rather than go on another rant, I’ll just read this to myself to calm down—or maybe ask a bot to read it for me.
Will Leitch’s 10 Rules of Productive, Ethical, Unproductive and Unethical AI Usage
Much of what we are calling “AI” is really just a calculator.
There’s an MLB Google Cloud AI commercial that plays constantly during baseball games. This is it:
This shows a whole bunch of stats, and Statcast metrics like Launch Angle and Hit Probability, all excellent statistics that help me learn more about a game I love. (I do not understand baseball fans who dislike new stats. They’re just a way to help understand the game we’re all watching. The game’s still the same, don’t worry!) But that’s all they are: Stats. There is no Artificial Intelligence element to this that I can see. You’re just computing distances and angles and speed. Because AI has become such a corporate buzzword, you’re seeing it used to promote functions that don’t have anything to do with AI, basic computing that we’ve been able to do for years. It’s important to note the difference, even if the people desperate to sell us things refuse to make one. I use calculators. There is nothing wrong with using calculators.
Using AI to do basic organizational tasks or fundamental coding is a perfectly reasonable use (but you probably better double-check it anyway).
I’m a member of the Atlanta Film Critics Circle, and we’re currently voting on our best films of the first 25 years of the 21st century. I’m on the vote-counting panel for the AFCC, and with every member listing their best 50 movies, it can get unwieldy. So one of my fellow vote counters is using AI to help tally the votes. I am not doing so—I find it sort of fun to put in every vote by hand, I can’t believe someone actually voted for Speed Racer—but I certainly don’t see anything wrong with plugging everything into AI to come up with a total. I know some small business owners who will use AI to put large groups of figures into charts or Excel documents, and that seems a reasonable use as well. But I still wonder if the juice is worth the squeeze on this, considering how much AI still seems to get wrong. If you have to go back and double-check everything, how much time are you actually saving rather than simply doing it yourself?

There is value in the work. Value the work!
One thing I’ve noticed about people who constantly use ChatGPT is that they get progressively lazier about it the more they use it. At first they’re experimenting, maybe keeping a little bit of healthy skepticism, just sorta having a little fun. But that eventually fades away, to the point that they ultimately tend to just blindly accept whatever it says, without ever checking any actual sources, and use it for every task, even those for which it isn’t inherently designed. To outsource everything is to understand nothing. On a related note …

AI should never replace actual thinking.
A few months back, I was involved in an NCAA Tournament calcutta auction, in which you bid on teams in the tourney and can win money depending on how far your team advances. There was one guy there who, every time a new team came up, would loudly say into his phone, “Hey Gemini, how much should I bid on [name of team]?” Not only was this annoying—and it was very annoying—it was pointless: Why even come to an auction like that if you’re going to just blindly rely on a bot? (I mean, if you win, do you even get to brag to your friends about it?) I realized, though, that this guy was consciously embracing not having to think—he saw the AI as a release of the burdensome weight of making decisions. I have been shocked by how many people tend to use ChatGPT for this purpose: Just tell me what to do. Relinquishing decision making, to give that control to a bot, is to relieve yourself of the consequences of your own actions—it’s a way to have something else to blame. Your life is yours. Live it.

Much of AI’s value comes from stealing the work of human beings.
Sorry, but uh: This is undeniably 100 percent true and it should never, ever be forgotten. Humans create things, AI scans those things to produce facsimiles of them and regurgitate them back, and the humans who made those original things have their work bastardized, without any actual credit or payment. AI cannot create. It can simply take what others have created. When you are having a bot make something for you, you are actively stealing. I’m sorry, but you are. (This is the part where I get scoldy.)

We’re already shockingly lazy about AI accuracy.
A friend was telling me how, when he typed a question about his company into Google, it produced incorrect, potentially damaging information. Frustrated, he contacted a rep at Google to ask them to correct it. You know what they said? They said, “We know it’s wrong, but Gemini is going to get better. You just have to wait for it to get better.” And then they refused to correct it.
This is what AI advocates are always saying: Sure, it doesn’t work great now, but … it will! Someday! This is pretty galling, for several reasons. First off, you can correct it right now—you’re Google! This is what you’re here for! But even more than that, it’s self-serving, almost a Ponzi scheme in and of itself. If you don’t think a product works, well, that’s why you have to invest more in it now, so it will work better in the future. That may be true—though I’m not exactly sold on that either—but the fact remains, it’s wrong now. Fix it! The supposed promise of AI doesn’t just allow it to be shitty now, it actually encourages it. It requires us to accept a product that does not work and demands us not only to be happy about it, but to think of any complaints we might have as unimportant and even somehow standing in the way of progress. Wanting something to be correct—wanting it to work—makes you a Luddite.

Companies are using AI to be cheap, not because they think it’s better.
This is an obvious corollary to the last one. I don’t think most CEOs and executives honestly believe AI will make their companies better or will improve their products at all. They just think it will reduce their workforces, which will allow them to increase their profits. It becomes a death spiral: The product gets worse, because there are fewer humans actually working on it, but the company makes more money, which then only discourages them from improving the product. This is the enshittification of everything, and AI is only accelerating it.

Your AI chatbot is not actually your friend.
I will confess, I am still rattled by this Wired piece from last month about the writer who went on a couples retreat with two people and their AI chatbot significant others.

I am willing to grant the possibility that chatbots can potentially be helpful for those who are deeply lonely, or perhaps have some sort of social anxiety disorder that makes human connection difficult. I also would argue that, well, human connection is difficult, which is one of the main reasons it’s so important and so meaningful when it happens. (This is another reason I’m always wary when someone tells me their best friend is their pet.) These artificial relationships are one-sided by nature; you are, after all, dealing with a robot whose only programming is to serve and to flatter you. Life is about connection, and compromise, and giving part of yourself to someone else, and having them trust you enough to do the same for you. You will never have a meaningful relationship with your blender.
If you use AI to write something for you, it is meaningless and we’d all be better off if you had never said anything in the first place.
This is the thing: Writing is meaning. The reason we write things is to express some sort of meaning, to pass along important information, to convey a human emotion or sensation. It doesn’t matter whether the writing is grammatically correct, or vividly expressed. It matters that it came from you. That is the point of writing. If you ask a Chatbot to write something for you, you are being fundamentally unhuman and foundationally dishonest. I am not going to bend on this one. If it requires an AI bot for you to express something to me, you and I are probably not actually friends.
And sorry, but: If I were your teacher and you used ChatGPT to write an essay for my class, I’d give you an F and try to get your ass thrown out of school. You are clearly not the least bit interested in education and are therefore wasting both of our time. I would be a chill teacher about a lot of things. But not this.

You only get to live this life once.
The world is a big, beautiful, terrifying, awesome place, to be taken in huge heaving gulps. There is magic everywhere. But you have to engage with it. You have to let it in. To see, to think, to create, to absorb, to connect … it is to be an active participant in it, not a casual, passive semi-observer. AI is supposed to make things easier. But, I’m sorry, it sure looks to me like it’s only making everything numb, and empty, and worse. It has its uses. But so does a blender. Or a chainsaw. Or a bomb.
Here is a numerical breakdown of all the things I wrote this week, in order of what I believe to be their quality.
The Best TV Show I’ve Seen in a Decade, The Washington Post. I finally wrote about “Adolescence.”
Have the Yankees Lost Their Juice? New York. Remember when the Yankees were the Dodgers?
Teams Who Have a Key Two Weeks Coming Up, MLB.com. Cardinals are one of these.
This Week’s Power Rankings, MLB.com. I actually get this weekend off these.
PODCASTS
Grierson & Leitch, we discussed “Superman” and did our annual mailbag show, which included the big Woody Allen discussion people have been wanting us to have for a while.
Morning Lineup, I did Monday’s and Friday’s show.
Seeing Red, Bernie Miklasz and I look at the Cardinals at the All-Star Break.
LONG STORY YOU SHOULD READ THIS MORNING … OF THE WEEK
“The Enshittification of American Power,” Henry Farrell and Abraham L. Newman, Wired. You may remember my piece from last year about “the enshittification of everything.” Well, that’s happening with all of America now. Wonderful.
Also, this Devin Gordon piece on a bullshit gambling influencer was a giddy read.
ONGOING LETTER-WRITING PROJECT!
This is your reminder that if you write me a letter and put it in the mail, I will respond to it with a letter of my own, and send that letter right to you! It really happens! Hundreds of satisfied customers!
Write me at:
Will Leitch
P.O. Box 48
Athens GA 30603
CURRENTLY LISTENING TO
“Enough,” Jeff Tweedy. There’s a Jeff Tweedy triple album coming out in September—this is a guy whose productivity I can admire—and they’ve previewed four tracks off it. This is my favorite one.
Remember to listen to The Official Will Leitch Newsletter Spotify Playlist, featuring every song ever mentioned in this section. Let this drive your listening, not the algorithm!
We had a fantastic time at Book Soup in Los Angeles this week, sans Jeff Garlin, but still a blast. I was a little blown away by the number of people who showed up. You are all so great. And the boys did like California.
Have a great weekend, all.
Best,
Will
One thing I forgot to mention that I think is a good use of AI: Waymos. I've used them on both of my last visits to LA and think they're kind of amazing. And, on the whole, better drivers than most humans.
Number 10 sums up my feelings about AI.
Glad your boys liked California. When I first arrived in CA in 1988, I wasn’t sure I’d made the right choice. I’d moved to Sonoma County for a job all by myself. I had left West Virginia, where I had lived and worked for 8 years. The economy was depressing, the attitude was depressing; it made me depressed. So, after being offered the CA job twice, I took it. It was so beautiful it took my breath away, and 20 minutes to the west was the most beautiful coast and ocean views. On my first day driving to work, I turned on the radio and the drive-time announcer said, “Good morning, California, welcome to another day in paradise.” Yep, I was home. I still love so much about this state. We moved back here after 15 years in Georgia. Family is here. That is so important. And it is still paradise every single morning. Fate led me here the first time and it brought us back home a second time.