The plateau

September 4, 2025

I’ve long been skeptical of the widespread messaging from the AI industry that superintelligence is imminent, and Large Language Model (LLM) technology is its progenitor. I wrote in a previous post about my qualms with this and the reasons behind them. Lately, it seems that there’s been a shift in the wind—one large enough to sweep even true believers like Sam Altman in its wake.

This recent New York Times opinion piece (paywall-free gift link) is the most recent sign of the times. In it, Gary Marcus offers a critique of the industry and its overreliance on a fundamentally limited AI architecture. I’m in broad agreement with his assessment.

This is good

Of course, as a skeptic, I view this as a positive development, and I hope the trend continues. As members of the public, we’re seeing a lagging indicator, so I presume that there’s already a significant shift in strategy in all the major labs.

And AI needs a shakeup. I’m not going to go so far as to say we’re in a bubble; I view it as more of a monoculture. That’s unhealthy, both for the industry, and for everyone affected by it. I hope that people outside places like Anthropic and Google—and especially the worst offenders, OpenAI, Meta, and xAI—can take a more prominent place culturally, academically, and economically.

The present dangers

Here are some of the most concerning trends I see. At the root of each are the inherent vulnerabilities of LLMs.

Weaponization. Not my term—Anthropic’s! For just one example, take a look at their article detailing the use of Claude Code to commit large-scale theft and extortion of personal data.

Pushing into the youth market. AI toys like the announced Barbie project are a horrible idea. This toxic stuff will make the teen social media crisis seem quaint.

AI companions. Parasocial interactions with Grok Ani, character.ai, and their ilk are more likely to exacerbate our existing loneliness crisis than they are to make any meaningful positive change.

Security vulnerabilities. We simply can’t guard against certain classes of attack. Simon Willison has a great explanation of the “lethal trifecta” that shows no sign of being solvable. The more powerful we make LLM-based AI, especially with the newest trend of agents that can take control of web browsers and other apps, the more dangerous they become.

Economic impacts. Though the extent is debatable, it’s hard to argue that AI hasn’t had a negative effect on the job market. Here’s a recent Fortune piece that cites new research from Stanford.

Government. We’re woefully behind in reacting to AI in many places, but government has to be the worst. The lack of any true policy or legislation is, frankly, terrifying. I have zero faith in our current leadership to steer the country. And that has impacts in every nook and cranny of our lives, from policing, to education, to elections, to the media, to the surveillance state.

Where are we going?

With a shift underway, can we seize on the opportunity to bend the curve in a positive direction? It’s rare to find such unambiguous inflection points in a rapidly-developing technology.

I wish I could say I was optimistic. The forces driving AI are so massive that the “AI Pause” open letter—with signatories like Elon Musk(!), Yoshua Bengio, and Stuart Russell—had as much impact as building a dam with a teaspoon. We’re doing well if we can just ride the wave and avoid the worst.

But I can’t give in to the gloom; it’s not in my nature. I recently joined Playlab AI as a Learning Engineer, and I’m happy to be part of an organization that recognizes the risk of AI alongside its potential, and is actively working to impact the technology for the better. When the world looks so overwhelming, as it does in so many ways these days, the best you can do is find your own small way to contribute to the solution.

Written entirely without the assistance of AI unless otherwise mentioned, with the exception of extremely light proofreading for grammar and spelling.