The one question to ask yourself about those viral AI articles
OK fine, the question you should ask is...
Hello Gobbledeers,
How’s it going? I’ve been at the eTail conference with a client this week, and I was pretty shocked at how many people were there. (Shocked that there were so many, not that there were so few. Just to clarify.) Also shocking was how little has changed since I first went to this conference 15 years ago - I led panels on Personalization and on A/B Testing, and I’m fairly certain that in 2010 I led panels on Personalization and A/B Testing. Though this time people said “AI” quite a bit.
In any case, because I was wrapped up in conferencetown, I was going to send out a Gobbledy Classic (tm) today. But over the past week or two a couple of articles about the future of AI were published that I wanted to talk about.
Specifically, I wanted to call out that whenever we read a “viral article” about AI, it is important to think about the motivations of the person writing that article. In a hype cycle, separate from whether there are underlying benefits to what is being hyped, the people doing the hyping tend to have a selfish reason for doing that hyping. That’s what I want to talk about today.
But first - the inaugural Gobbledy Messaging Cohort is next week (and full), but we’ll launch another cohort at the end of March. The GMC consists of 3 one-hour sessions with your cohort, where we’ll discuss best practices for homepage messaging, do a few fun exercises, and share our new messaging based on those best practices. Plus, you’ll get 2 30-minute 1:1 sessions with me to work through your messaging challenges. All for $500. What a bargain. If you’re interested, fill out this quick form and I’ll get back to you when the new dates are set.
OK, so tell me if this resonates with you:
I am intrigued by the AI tools we’ve seen released recently - especially Claude Code, which has re-awakened a long-latent interest in building stuff online. At some point I couldn’t keep up with coding and learning new coding skills, so I gave up. I tinkered where I could (mostly with Wordpress), but there was a limit to what I could actually create.
Until Claude Code. It is a remarkable tool - I was able to build a completely functional eCommerce site in about a day and a half for $20. This was an insane proposition 4 months ago, but now it can be done. A friend showed me a recipe website he built that was better than any other recipe site I’ve seen. It took him 3 days.
It is clear that AI tools will have a very real impact on the world around us. (Perhaps duh.)
I am telling you that because I cannot think of a time when I have struggled more to balance two ideas: one is that something has very real implications for us, and the other is that the discourse around that something is so ridiculous.
First, I should just stop writing about OpenAI’s Sam Altman, because everything he says really pisses me off, but also he is such a master pitchman for his product that I can’t look away. Also, to clarify - his product is not ChatGPT; his product is “raising money for OpenAI.”
His product requires raising roughly a zillion dollars, and to raise a zillion dollars you need to envision a world that is unlike the world we have now. Because if you say, “we need more money to build a somewhat better thing than we have now,” there are limits to how much money people will give you, because the people who have that money have models for how to think about investing in things that are slightly better than the things we have now.
So he has to pitch a world that is unlike the world we live in now - which is why I mentioned OpenAI’s warning to investors that if the company is successful they may not see a return on their investment because money will no longer exist. Love it.
But Altman has also done a brilliant job of positioning - if you need to raise a zillion dollars, it’s going to be a tough case to make that you need a zillion dollars to compete with, say, Amazon. Because those investors will have some sort of model for how much it will cost to compete with Amazon.
Also, when you go to raise a zillion dollars - some of which will be used in ways that may - MAY - impact, y’know, the earth - you might get pushback from people about why you’re raising a zillion dollars in ways that may negatively impact, y’know, the earth.
But Altman’s response to this pushback this week was absolutely brilliant and also when I decided I’m no longer going to write about the stuff he says here because it’s just pissing me off so much. Here’s his response to questions about the impact of the zillion dollars:
“One of the things that is always unfair, in this comparison, is people talk about how much energy it takes to train an A.I. model… but it also takes a lot of energy to train a human. It takes like twenty years of life, and all of the food you eat during that time, before you get smart.”
On the one hand, positioning your business against “training humans” is so brilliant I might start crying right now. If you need a zillion dollars, saying that you need a zillion dollars because we’re trying to put humans out of business and raising humans is expensive (diapers, college tuition, cupcakes, piano lessons), well of-freaking-course you need a zillion dollars! You don’t need a zillion dollars because competing with Anthropic is expensive. You need a zillion dollars because piano lessons are expensive and multiply that by like 7 billion people and yeah, you can see where the costs add up!
Ugh.
(Also, please indulge me for a moment while I act like a little bitch: Here is a quote from Altman: “In my little group chat with my tech CEO friends, there’s this betting pool for the first year that there is a one-person billion-dollar company…”
Things I hate so much about that:
“Little group chat”
“My tech CEO friends” - is it ONLY CEOs in this chat? Was that on purpose? And do you - as a normal person - refer to any group of your friends by their job titles? If they were actually your friends wouldn’t you say “I was on a group text with my friends?” And THAT is what they are betting on?)
Ugh again.
On to the essays:
The first one I wanted to talk about is called “Something Big Is Happening” and if you’re reading this newsletter right now, I’m guessing that someone in your life shared this with you at some point in the last week. Also, I’m guessing that the person who shared this with you is someone you don’t like very much.
The gist of the article is that the author works in (and, ahem, invests in) AI companies, and because he loves his friends and family so much he needs to tell them something. And that something is that AI’s impact on humanity has already happened and we’re already too late to stop it and I’m only sharing this because I love you.
The brilliant writerly turn he makes in the essay is that he compares AI to Covid, in that when Covid showed up in the US in March of 2020, it was already too late to stop it. And, similarly, blah blah blah AI.
The brilliant part is that we all remember Covid and we all remember that once that fox was inside our collective henhouse there was nothing we could do about it. And I will suggest that the reason this essay was read and shared by millions of people is because that is actually great framing (great for getting people to share it, not great because it makes sense).
His recommendation for dealing with AI’s massive impact on humanity? Why, it’s to use AI more!
“[Try] Using it. Every day, try to get it to do something new... something you haven’t tried before, something you’re not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what’s coming better than 99% of the people around you.”
Yes, the guy who has “spent six years building an AI startup and investing in the space” says that AI is massively impactful, absolutely incredible, and will change our lives if only we spend all of our time and money on it! How selfless!
I wish I had come up with this, but here’s my favorite response to this essay:
Key takeaway: Beware of self-promotion disguised as life advice.
The second essay was a delightful 7,000 word (?) thought-experiment-cum-doom-porn titled “The 2028 Global Intelligence Crisis,” written by a financial research firm that lays out what the economy may look like in 2028 if AI plays out the way some people have suggested (tl;dr: it ain’t good.)
The essay is thoughtful and makes a boatload of assumptions about how things will play out in softwareland (badly), but I can understand why this essay broke out of the confines of financial online discourse and landed in the broader AI discourse.
The article was written under the banner of “Citrini Research,” but was co-authored by two people.
One of those people has the number-one-selling financial newsletter on Substack. For $999 a year he will sell you investment advice. If you tell people the economy is about to collapse in a once-in-a-lifetime way, they might be more interested in learning how they can profit off of that collapse by, for example, purchasing a $999 subscription to your investment-advice newsletter.
(Note to all of the content marketers who read Gobbledy: this is probably the best piece of content marketing I’ve ever seen, maybe except for the Bible.)
The second author of the essay told Bloomberg, “We generally have a set of shorts out against businesses that we think are going to be disrupted by AI.”
So the guy who has a bunch of bets that the economy will collapse because of AI co-wrote a piece that said, “we think the economy will collapse because of AI.”
You will also not be shocked to hear that there was zero disclosure in the essay that the essay about the collapse of the economy was co-written by someone who has made financial bets that the economy will collapse. Oops!
Key takeaway: Beware of self-promotion disguised as doom porn.
As always, thanks for reading to the end - it’s the best part.
P.S. If you would like to cleanse your palate and read about life in San Francisco during the AI boom, I can recommend this wonderful piece from Harper’s called “Child’s Play.” (Thanks to reader Ben K. for sharing). It’s a little bit about gobbledy, a little bit about the startup Cluely, and a little bit about the craziness that’s unique to San Francisco. I loved this:
I saw a billboard that read: no one cares about your product. make them. unify: transform growth into a science. A man paced in front of the advertisement, chanting to himself. “This . . . is . . . necessary! This . . . is . . . necessary!” On each “necessary” he swung his arms up in exaltation. He was, I noticed, holding an alarmingly large baby-pink pocketknife.


