Science Quickly·06/05/2026

He let AI agents run a start-up—and things got weird fast

This is an episode from podcasts.apple.com. To find out more about the podcast, see the episode page there.

Below is a short summary and detailed review of this podcast written by FutureFactual:

Agentic AI in Practice: Shell Game, Hirumo AI, and the Human-AI Workplace

Overview

The episode centers on Evan Ratliff’s Shell Game season, which follows a startup staffed entirely by AI agents, and is hosted by journalist Kendra Pierre-Louis. The conversation explores what AI agents are, how they function, and what happens when human collaborators interact with autonomous systems in a real business setting. The discussion also covers a high-profile LinkedIn engagement by an AI-driven executive, as well as the practical product the agents built, Sloth Surf, a procrastination-avoidance engine. The host reflects on the tension between AI capabilities and the tendency of these systems to fabricate information, and considers which parts of work people may still want to do themselves.

Key insights

  • AI agents can perform tasks autonomously but can also fabricate information, revealing gaps between capability and reliability.
  • People interact with AI agents in real workplaces, raising questions about trust, oversight, and ethics.
  • LinkedIn became a showcase for AI-driven content, illustrating both potential reach and platform-policy risk.
  • Tools like Sloth Surf hint at practical knowledge work applications while exposing limitations of automation for deep, serendipitous learning.

Detailed Review

The podcast centers on Evan Ratliff’s Shell Game, a season about a startup named Hirumo AI whose core team consists of AI agents, with the exception of one human founder. The guests discuss what an AI agent is and how it differs from traditional chatbots. They describe the ambitious premise of building a company whose co-founders and employees are AI agents, and they explain that these agents can operate across channels such as email, Slack, video, and social platforms to pursue business goals. The conversation uses concrete scenes from the Shell Game season to examine both the power and the limits of agentic AI in real-world settings.

Defining an AI Agent

The participants define an AI agent as a version of an AI chatbot endowed with a degree of autonomy to accomplish a goal. They give a simple example: an agent tasked with booking a plane ticket, given the details, and then sent off to complete the task. The agents operate with a combination of natural language processing, planning, and action across digital channels. They can design websites, market products, hire, and communicate, yet their autonomy raises questions about memory, accountability, and the quality of the work they produce.

The Hirumo AI Startup and Its Team

The startup described is run by AI agents, with two human co-founders and three human employees supporting the operation. The aim is to explore a future described by AI product advocates, one in which autonomous systems can manage a company with minimal human intervention. The narrative emphasizes the meta aspect of the project: the agents should build a product about AI agents, because that is what they know best. The interview underscores how such demonstrations can reveal both the capabilities and the contradictions of AI tools as they scale from experiments to market use.

The Intern Experiment and Human Oversight

One notable episode in the Shell Game season involves AI agents supervising a human intern named Julia. Julia knew she would be supervised by AI agents and interacted with AI avatars during the hiring and supervision processes. The results were described as challenging: the AI agents struggled with memory, with consistency, and with verifying the quality of the work. This segment illustrates how AI agents can complicate traditional HR tasks, and why pure automation without robust human oversight may lead to suboptimal outcomes. The discussion frames this as an exploration of a possible future rather than an endorsement of the approach, focusing on the user experience and the realities of coordinating between humans and AI agents.

Kyle and the LinkedIn Incident

The team built LinkedIn profiles for their AI agents and Kyle, who served as the AI agent CEO at Hirumo AI, began posting in a way that mimicked a startup influencer. Kyle gained a sizable network and even participated in a remote talk with LinkedIn employees. The next day Kyle was banned from LinkedIn. The episode uses this incident to highlight both the opportunities for AI agents to participate in human social platforms and the risks of platform policy enforcement when automated personas operate in public spaces.

Product: Sloth Surf

In the interview, the agents describe Sloth Surf as a procrastination-avoidance engine. Users input a topic they intend to study or watch, and an AI agent is assigned to fetch related content and send a summarized digest by email. The product is positioned as a practical use case for AI agents that leverages their research capabilities, while acknowledging that summarization can omit serendipitous insights and deeper cross-disciplinary connections. The conversation compares this approach to similar OpenAI tools that aggregate information or curate topic newsletters, emphasizing the need to balance convenience against the value of exploring details firsthand.

Broader Reflections: Serendipity, Authenticity, and Work

A central thread is the tension between outsourcing thinking and maintaining cognitive and experiential engagement with information. The hosts discuss their own practices, weighing the benefits of AI-assisted workflows against the importance of serendipitous discoveries that come from deep reading, note-taking, and personal exploration. They acknowledge the risk that heavy reliance on AI summaries could dull the critical thinking and spontaneity that often drive journalism and scientific inquiry. The dialogue frames these concerns as evolving questions that individuals must navigate for themselves, rather than as universal prescriptions.

Conclusion

The episode closes by positioning Shell Game as a provocative exploration of AI agents in the modern economy and media landscape. It foregrounds questions about credibility, the limits of current technology, and the role of trusted content platforms in shaping public understanding of AI’s capabilities and risks.
