AI may be the most significant invention since electricity. It is a marvelous, spectacular evolution in our relationship with computers.
It is also terrifying.
Some people, maybe many people, will lose their jobs if AI continues to progress at its current rate. This disruption is scary. When you mix AI fears with the crash in public opinion about the tech sector, it is not surprising that misinformation is rampant. What is surprising is how much of this incorrect analysis comes from mainstream writers with large followings.
I strongly believe we should be rigorously examining AI. Technology should be critiqued, written about, and, most importantly, utilized. It can only make our world better when examined with truth, not hyperbole. AI is important enough that society should be considering the implications of its deployment. But progress can’t be made when everything you believe about a sector is incorrect.
The piece that most clearly symbolizes this trend was published by Ed Zitron with the wonderfully cranky title of “Subprime Intelligence.” Zitron is the founder and CEO of tech PR company EZPR. He’s also a firebrand of a writer who has a particular talent for the incensed outrage that does so well on X, where his post went viral. His new podcast, on which he analyzes tech news, hit number one in the tech category on Apple. I genuinely think he has the gift of the gab and congratulate him on his success. However, his argument against AI is of an all-too-common variety among mainstream tech critics: It is weak, ungenerous, and riddled with mistakes.
Normally, I wouldn’t focus on a single person for a rebuttal. But his reasoning is emblematic of popular writing on AI today, which makes it a useful foil: engaging his actual thesis keeps me from arguing against a straw man. By walking through it, I hope to push back more broadly against current coverage and encourage our discourse to move in a more accurate direction.
Zitron’s argument can be summarized as follows:
- Artificial intelligence is limited by hallucinations (i.e., the model making stuff up). These problems can never be fixed.
- Generative AI’s creative products are unusable because what it generates isn’t perfect.
- Companies are not actually using AI. They’re just going along with the fad.
- AI companies have lower gross margins than software companies and are therefore bad businesses.
- There is no essential use case or killer app for AI.
Unfortunately, each of his points is incorrect.
Hallucinated problems
“Despite what fantasists may tell you, [hallucinations] are not ‘kinks’ to work out of artificial intelligence models—these are the hard limits, the restraints that come when you try to mimic knowledge with mathematics. You cannot ‘fix’ hallucinations (the times when a model authoritatively tells you something that isn't true, or creates a picture of something that isn't right), because these models are predicting things based off of tags in a dataset, which it might be able to do well but can never do so flawlessly or reliably.”
It is remarkable to claim that hallucinations present hard limits to AI’s possibilities, or that they can never be fixed. Four days before Zitron published his piece, Google released its new model, Gemini 1.5, which makes huge strides towards fixing them. Gemini 1.5 can take a context window of 1 million tokens and recall text data with 99 percent accuracy. That is like being able to recall any sentence from the entire Harry Potter series with a photographic memory. Google said that it had been able to test Gemini all the way up to 10 million tokens. OpenAI’s most powerful model currently tops out at 128,000 tokens. With Gemini 1.5, you can drop your entire dataset, documentation, script, or codebase into the AI, and it will be able to summarize, improve, or fix it with near-perfect accuracy.
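To put those numbers in perspective, here is a rough sketch of how you might check whether a document or codebase fits inside a long context window. This is my own illustration, not Google’s or OpenAI’s tooling: it uses OpenAI’s tiktoken library as a stand-in tokenizer (Gemini uses its own), so treat the counts as order-of-magnitude estimates, and the file name is hypothetical.

```python
# Rough sketch: estimate whether a document or codebase fits in a long
# context window. tiktoken is used as a stand-in tokenizer, so the count
# is an order-of-magnitude estimate rather than an exact figure.
from pathlib import Path

import tiktoken

GEMINI_1_5_WINDOW = 1_000_000  # tokens, per Google's announcement
OPENAI_WINDOW = 128_000        # tokens, the figure cited above

def count_tokens(text: str) -> int:
    encoder = tiktoken.get_encoding("cl100k_base")
    return len(encoder.encode(text))

def fits_in_window(path: str, window: int) -> bool:
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    tokens = count_tokens(text)
    print(f"{path}: ~{tokens:,} tokens (window holds {window:,})")
    return tokens <= window

if __name__ == "__main__":
    # "my_codebase_dump.txt" is a hypothetical file standing in for whatever
    # dataset, documentation, or codebase you want to hand the model.
    fits_in_window("my_codebase_dump.txt", GEMINI_1_5_WINDOW)
```

The exact count isn’t the point; the point is that a million-token window comfortably holds material that, two years ago, you could only feed to a model a few paragraphs at a time.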
Technically, this model does not completely solve hallucinations for LLMs. Longer context windows won’t stop these models from hallucinating when they don't have access to datasets. But it does allow an LLM to recall, with superhuman accuracy, useful data and insights—a performance that violates Zitron’s claim that LLMs can never “predict things…flawlessly or reliably.” Additionally, Google isn’t the only company that has achieved this milestone. The Information reports that a startup called Magic has an LLM that can handle 3.5 million words of text.
A good tech critic would be aware that not only have longer context windows already solved many hallucination problems, but that techniques like Retrieval Augmented Generation (RAG) will keep driving error rates down until these systems are more accurate than people.
A good tech critic would realize that the problem isn’t the hallucinations; it is how and when LLMs gain access to data, and who controls that data pipeline. Understanding how ChatGPT gets current information is a more accurate and long-term view of this technology than worrying about hallucinations.
Even if you assume that Zitron missed Google’s Gemini launch, his assertion is nonsensical. We don’t need these systems to be flawless; they just need to make fewer mistakes than people do. RAG is already used in products like Notion AI, which has over 4 million users. You drop your data, calendar, and notes into Notion, and the AI can retrieve what you’re looking for. As context windows get longer, models get smarter, and techniques get refined, it is reasonable to assume that hallucinations will become a problem we learn to work around (just like human error). We’re already making large strides, and ChatGPT only came out a year and a half ago.
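For readers who want to see what retrieval augmented generation actually involves, here is a minimal sketch. It is a toy illustration of the idea, not Notion’s or anyone else’s implementation: pull the snippets of your own data most relevant to a question, then hand them to the model alongside that question, so the answer is grounded in what you supplied rather than what the model half-remembers. A real system would use an embedding model and a vector database; TF-IDF keeps the example self-contained.

```python
# Minimal sketch of retrieval augmented generation (RAG).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy data: your own notes, docs, and calendar stand in for a knowledge base.
documents = [
    "Quarterly planning meeting moved to Thursday at 3pm.",
    "Gemini 1.5 was announced with a context window of 1 million tokens.",
    "The design contractor's invoice is due next week.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question (TF-IDF cosine similarity)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
    top = scores.argsort()[::-1][:k]
    return [docs[i] for i in top]

def build_prompt(question: str, docs: list[str]) -> str:
    """Ground the model in retrieved text instead of relying on its own recall."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question, docs))
    return (
        "Answer using only the context below. If the answer isn't there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt carries the relevant snippet, so the model answers from
# your data rather than inventing a plausible-sounding answer.
print(build_prompt("When is the planning meeting?", documents))
```

The retrieval step is the whole trick: the model no longer has to recall a fact from training, it only has to read it off the context you just gave it, which is exactly why hallucination rates drop.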
No, the product doesn’t have to be perfect
A theme of Zitron’s argument is that generative AI’s output isn’t good enough. He writes: “One cannot make a watchable movie, TV show, or even commercial out of outputs that aren't consistent from clip to clip, as even the smallest errors are outright repulsive to viewers.”
Again, this is incorrect and misses the point. The point of generative AI is not to make clips from scratch that are immediately watchable. Zitron trips up because he believes that the sci-fi version of AI that’s been marketed to the public—beautiful movies, on demand, that anyone can make—is what founders are aiming for. Rather, we are shooting for the moon so that if we fall, we land on a cloud—and that cloud entails immense gains in productivity.
The goal of these tools is to help creators make better things faster. And they’ve already done that! All of our graphics at Every are made by one mega-talented guy in Milan who simultaneously handles our ads, partnerships, brands, podcast editing, and course operations while also going viral on X. All of this was previously beyond the scope of any one person, but he has taste and vision, and uses generative AI to do more, faster. These tools are also used to make people at top-tier creative shops, like The Late Show with Stephen Colbert, Vox, and Pentagram, more productive.
Zitron’s critique about perfection is particularly wrong because perfection has never been the main requirement of art. Popular and heralded movies aren’t immune from errors, like a visible cameraman in Harry Potter and the Chamber of Secrets and a plastic baby in (Best Picture-nominated) American Sniper. Humans make mistakes all the time! We tolerate them because the end product is good enough that we don’t care.