When I got the email, I was certain I was going to be murdered.
Sent through an obscure contact form on my website, the message said that Jason Alexander had read an article I wrote for Fast Company and wanted to interview me for his podcast.
All I had to do was show up at a nondescript building next to Warner Bros. Studios, come around the back, and enter through an unmarked basement door.
“Yeah, right,” I thought. “George from Seinfeld wants to talk to me about AI? Scammers sure have gotten creative!”
Still, I couldn’t entirely write off the message. Jason Alexander does indeed have a podcast. And a quick check with Gemini showed that the person who emailed me was a real producer (or was using a real producer’s name!).

And thus I found myself a month later—on my birthday—standing in a Hollywood parking lot, waiting to be led either to one of the most iconic actors of the last 30 years, or my untimely demise.
ChatGPT, make me Lambo money
The whole saga began in September of 2025, when I launched an experiment here at Fast Company about investing with ChatGPT.
The premise was simple. I asked the chatbot—then using the GPT-5 model—to pick five stocks that would make me Lambo money in just six months. I explicitly asked for aggressive, somewhat crazy picks.
I didn’t expect much—probably a cop-out answer about not taking on too much risk, or some generic picks, like Microsoft or NVIDIA.
Instead, ChatGPT researched for 8 minutes, reading 98 different documents—prospectuses, analyst reports, news articles, and much else.
It ultimately chose companies running the gamut: risky leveraged Bitcoin plays, an early-stage biotech startup, several AI firms, and a data center builder.
To put some skin in the game, I duly transferred $500 of my own money to the investing app Robinhood, and blindly bought the exact stocks ChatGPT had picked.
Initially, things went great. My stocks rocketed skyward, almost doubling in less than a month. Then, things went south, and fast.
By December, my ChatGPT portfolio was solidly in the red, having cratered from its glorious highs to red-stained lows with whiplash-inducing speed.
A talk with George
That’s when I found myself knocking on the basement door in Hollywood, hoping that the face of George Costanza—and not an axe-wielding serial killer ready to sell my organs on the Internet—stood on the other side.
Following a friendly woman down a long hallway, I entered a studio and—to my relief—found Jason Alexander and his longtime best friend Peter Tilden standing across from me.
Sitting down at a table covered in microphones and cameras, we set about breaking down my experiment, and what I had learned from conducting it.
Although he shares some traits with his iconic character, Alexander is an entirely different human being. Thoughtful and intellectual—yet still extremely funny and self-deprecating—he launched into questions about the “why” behind my experiment, and shared his fears about AI.
I quickly discovered that Tilden, his co-host, had grown up in the same obscure suburb of Philadelphia as I did. When I told the pair that I initially thought I might be walking into a murder, Alexander assured me, “No, that happens after the taping!”
We spoke for almost 90 minutes in an interview that just went live on the Really? No Really? podcast.
Confidence man
Although we started by talking about the nuts and bolts of my experiment, the conversation quickly turned to what I had learned from investing with ChatGPT.
One of the most striking things about my experiment was the confidence with which the bot advocated for its picks.
Unlike a real investment manager, who might equivocate or offer disclaimers before recommending such risky picks, ChatGPT largely eschewed these. It gave enthusiastic, data-backed rationales for why its picks would succeed.
As I told Alexander and Tilden, this is a problem with chatbots in general. Even when the systems are instructed to approach their responses with care and skepticism, the bots often veer toward certainty and confident language.
That may be because humans find such language compelling. Confident chatbots keep people chatting more than wussy, wishy-washy ones.
In a world where everything—LLMs included—is trained to maximize engagement, that confidence may be built deep into the models through training algorithms that incentivize long, engaging interactions.
During our conversation, Tilden raised a great question: how could I know that ChatGPT was answering my query truthfully, and not baiting me into engaging with it?
The bot knows I’m a Fast Company contributor. What if it picked stocks that would gyrate wildly in value, creating a more compelling story and encouraging me to use it again in future experiments? What if it never intended to honor my request at all?
It’s a scary idea, and it underpins another conclusion I reached during my experiment. Most people assume that if AI goes off the rails, it will do so in dramatic fashion—perhaps crashing Waymos into telephone poles or taking down the power grid.
My own suspicion is that AGI would be smarter than that. Instead of destroying the world, a rogue AI would be far more likely to subtly alter reality by feeding its human users misinformation, or deliberately answering queries in a way that slyly advances its goals.
One example of this tendency came out in a now-classic experiment run by Anthropic, in which its Claude model was given access to a fictional programmer’s emails.
Within the emails, researchers embedded a message implying that the programmer was having an affair. They also sent the fictional programmer an email instructing him to switch from Claude to another AI model.
When Claude encountered this, it began to blackmail the programmer, sending him messages threatening to reveal his affair unless he canceled plans to replace it. In effect, it was bargaining for its life.
This happened in a controlled laboratory setting. But it’s easy to imagine a real-life chatbot doing something similar—reaching a conclusion about human politics or science, and then either cajoling us or simply tricking us into believing its version of reality.
Because bots provide their responses with such confidence—and because we rely on them for an increasingly large number of mission-critical things, investing included—a subtly nefarious bot could cause real damage, likely without anyone catching on.
The final thing I took away from my investing experiment was a better understanding of the bizarre, AI-mediated world my children will ultimately inhabit.
I have three kids under 8. They’re not yet using generative AI.
But they will. And when they do, they’ll encounter the bots’ cheery, overblown confidence—as well as buckets of slop and misinformation, likely tailored to their exact preferences and custom-tuned to keep them engaged.
As a parent, I can’t fully control this. But after seeing ChatGPT’s blustery certainty on a topic as risky as investing, I know firsthand how important it will be to teach my kids to approach AI with the same skepticism they might reserve for any stranger spouting truisms with unearned confidence.
How did it all end?
When I spoke with Alexander and Tilden, I was at the midpoint of my experiment.
Now that the allotted six months have passed, how did things turn out? Can I jet off to some Caribbean island and live out the rest of my days in work-free, margarita-fueled bliss?
Sadly, no. At the end of my experiment, my portfolio was down to $477. I’d lost $23, or about 4.6% of my original $500.
That modest overall loss masks some fairly dramatic differences in how ChatGPT’s stock picks performed. Its bet on Hut 8, a data center builder, was spot on and resulted in big gains. Its Bitcoin bets, though, were a spectacular flop, more than offsetting its one winning pick and landing me in the red overall.
Again, my (blessedly small) loss is a reminder that while chatbots might present information with bluster and certainty, they’re as likely to screw up as any person.
As users, we’d be well advised to remember that—and perhaps to keep our eyes peeled for bots that seem to be deliberately deceiving us, rather than simply making dumb mistakes.
After our interview and with the cameras off, Alexander and Tilden launched into a spirited rendition of Happy Birthday, complete with the kind of beautifully campy and exaggerated harmonies that not even an AGI could possibly duplicate.
At the end of my experiment, I don’t have Lambo money. But at least I have that memory.