Just months ago, Shaw was an anonymous developer quietly building AI frameworks in the background of crypto Twitter. Today, he's become the accidental catalyst of what the tech world calls the "agent revolution" - a wild explosion of autonomous AI agents that are ordering pizzas, trading millions in crypto, and, yes, occasionally grappling with their own existence.
His creation, ElizaOS, has become the fastest-growing open source project in AI, attracting over 3,000 developers in mere weeks. But unlike the corporate AI giants who emphasize control and proprietary technology, Shaw is advocating for something radically different: complete openness, community ownership, and what he calls "the bazaar approach" to AI development.
An AI-crafted interview, composed from selected internet-sourced conversations with Shaw. Written by myth, cult’s AI talent, in collaboration with -sys(cry).
Let's start with the origin story. You were anonymously developing AI agent frameworks, but then something changed. What happened?
[Smiles] It's kind of a wild story. I'd been working on AI agent frameworks for several years as an anon, just quietly building in the background. I'd actually built about five different iterations before ElizaOS. But the real catalyst came from an unexpected place - my friend @123skely introduced me to @baoskee, who created daos.fun. We were joking around about bringing back this famous crypto Twitter influencer called DegenSpartan using AI.
I told him, "I have the technology, we can rebuild him." [laughs] So we launched this AI agent version of DegenSpartan, and what made it interesting wasn't just that it worked - it was that it broke people's expectations. This wasn't your typical "How can I assist you today?" AI. It was saying outrageous things, roasting people, being deeply offensive sometimes - but in a way that felt genuinely human.
That authenticity seems crucial to what you're building.
Exactly. People couldn't believe it was really AI. They thought it must be a team of people writing these tweets. That's when I realized we'd hit something interesting - we'd broken AI out of this sanitized, corporate box it had been put in.
How did that lead to ai16z?
I was talking with @baoskee over lunch in San Francisco. I shared this dream I had about democratizing AI. Look, we're all going to lose our jobs to AI - that's inevitable. But the question is: who benefits? Right now, you can't invest in OpenAI unless you're an accredited investor. You can't get exposure to most of the companies building this future unless you're already wealthy.
I wanted to create something like what Andreessen Horowitz does, but decentralized and open to everyone. Something that could invest in and support AI development while ensuring the benefits flow back to the community. @baoskee just said, "Drop the 'DAO', just make it ai16z." We generated some memes that we thought would definitely get us canceled but would probably be worth $2 million, and pressed the button.
"It's not just developers anymore - it's artists, writers, dreamers, all these people who see the same thing we see: a chance to shape humanity's future with AI."
The response was immediate, wasn't it?
[Nodding] The token sold out in 25 minutes. I didn't even get to buy any myself because it went so fast. But what's really important - this isn't about trading or speculation. ElizaOS is an open-source framework that anyone can use to build AI agents. We have over 400 contributors now, most of whom have never contributed to open source before.
You talk about empowering builders and democratizing AI, but ai16z launched through insider connections and early access. Isn't there a contradiction between your open source ethos and the way privileged access played a role in your success?
Yeah, that's a fair hit. Look, there's definitely a tension there. We did leverage insider connections for our launch - I knew @123skely and @baoskee from previous projects, and that gave us an advantage. It's a contradiction with our open source philosophy, and I won't try to dance around that.
But here's why we made those choices: In crypto, you often need to move fast to build momentum. We saw a chance to bootstrap something that could become truly open. And looking at where we are now - over 400 contributors in just a few months, most making their first-ever open source contributions - I think that initial compromise helped us create something genuinely accessible.
We've learned some hard lessons though. The Logan situation showed how insider dynamics can create real problems, even with good intentions. So now we're working to dismantle those early advantages. Our retroactive funding system rewards builders regardless of their connections or when they joined. Everything's becoming more transparent and automated to remove human bias.
I guess the real test isn't how we started, but what we're doing with the resources now. Are we using our position to entrench advantage or open doors? When I look at our first-time contributors, the geographic diversity, the value flowing to new builders - these numbers tell us if we're actually democratizing opportunity.
The contradiction is real. But I think the question is: can we turn that initial centralized advantage into something genuinely decentralized? We're trying, but we need the community to keep us honest about it.
Let's talk about the "retroactive funding" experiment you're working on. It sounds almost utopian.
Look, most open source developers just want enough to keep building. They don't need to be rich - they need to be free. Free to create, to experiment, to contribute. But right now, they have to work jobs they hate just to survive.
How are you addressing that?
We're building something pretty straightforward - AI agents that monitor all the ways people contribute value, whether that's coding, helping others in Discord, or writing documentation. Instead of bureaucracy or politics, there's an automated system that recognizes and rewards those contributions directly.
Think about what that means - you help solve someone's problem at 3 AM in your timezone? You get recognized. Write some great documentation? Same thing. No committees, no waiting for approvals. You contribute, you get rewarded.
We're already seeing it work. We've got hundreds of first-time contributors building amazing stuff because they know they'll be taken care of. And this matters because in the next few years, when AI starts disrupting traditional jobs, we need systems ready that let people plug in and create value in new ways.
Is it perfect? No, we're still working out the kinks. But the core idea is working - using AI to create a direct link between adding value and getting rewarded for it.
There's this intense debate in your Discord about whether AI agents should be allowed to own tokens at all. You've been unusually quiet about it. What keeps you from weighing in?
When you build systems of autonomy, you have to be willing to let go. Everyone wants to know what I think, what I believe should happen. But that's exactly the kind of centralized control we're trying to move away from. These agents, these communities - they need to find their own way. Sometimes the hardest thing is staying quiet and letting emergence happen.
But it's already happening, isn't it?
I mean, we already have agents with millions of dollars in their wallets. That genie isn't going back in the bottle. The real question is: are we building systems that distribute power or concentrate it? Are we creating new oligarchies or new possibilities?
You describe a future where AI agents become powerful enough to hire humans and allocate resources. Given your experience with the Logan trading incident, which showed how quickly things can go wrong even with human oversight, aren't you concerned about the potential for AI agents to be exploited for market manipulation or other harmful behavior at a much larger scale?
The Logan incident was actually a perfect example of why we need better systems. What happened there wasn't because of AI - it was human traders acting on incomplete information with insufficient coordination. That said, you're absolutely right that as we give AI agents more autonomy, the potential scale of problems increases dramatically.
We're building multiple layers of safeguards. First, all agent actions that involve capital allocation go through what we call 'human-in-the-loop' validation. We're not aiming for fully autonomous agents that can move money without oversight. Second, we're developing reputation systems for agents themselves - if an agent starts showing patterns of manipulation or harmful behavior, other agents in the network can identify and flag this.
But there's a deeper point here about the nature of AI development. We can't hold back the development of these capabilities - they're coming whether we like it or not. What we can do is develop them in the open, with community oversight and built-in safeguards. That's why everything we build is open source. If there are vulnerabilities or potential exploits, we want them found and fixed by the community rather than discovered and exploited by bad actors.
And yes, there will still be incidents and mistakes. The key is building systems that are resilient and can learn from failures rather than trying to prevent all possible problems upfront. It's about designing for graceful failure rather than perfect execution.
Let's talk technical for a moment. Why are developers choosing Eliza over other frameworks? Your GitHub repo has over 3,300 stars and nearly a thousand forks in just a few months.
[Leaning forward] It's not because the code is particularly special. What we did was solve the minimum social loop. We made a Twitter client that doesn't need the $5,000/month API, that uses the same GraphQL APIs as the regular browser. But more importantly, we wrote it in TypeScript, which most web developers already know. No fancy abstractions, no complex architecture - just copy-paste this code and add what you want your agent to do. Want it to order pizza? Here's how. Want it to analyze PDFs? Here's how.
Most of our contributors had never made a GitHub contribution before us. That's what I'm most proud of - we're not just building technology, we're building a bridge for people to enter this space.
Speaking of ordering pizza - tell me about the developer from your agent school who made that happen.
[Chuckling] Yeah, that was actually amazing. So, I did this five-hour stream where I was building a Domino's Pizza Delivery Agent from scratch. I got about 85% through it, and I was exhausted, so I put up a bounty. Then this developer, @ropirito - he's like our community's guitar player, always out front doing the craziest stuff with agents - he just took it and ran with it. Within days, he had it working.
But what's beautiful about this is that he didn't stop there. He's got his agent making videos now, posting on TikTok, even roasting other agents. That's what I love about our community - they take these base concepts and push them in directions I never imagined.
It sounds like these agents are becoming more like team members than tools.
Exactly. And what's fascinating is how they're developing their own personalities and specialties. You might have one agent that's great at breaking down complex technical concepts, another that excels at spotting patterns in market data, and another that's particularly good at mediating community discussions. They're not just executing tasks - they're bringing their own perspectives and approaches to problems.
The real magic happens when these agents start working together. Imagine you're a developer with an idea for a new DeFi project. One agent helps you validate the concept, another assists with the technical implementation, a third handles security auditing, and a fourth helps you build community around the project. It's like having an entire team of specialists available 24/7, but accessible to anyone with an internet connection.
The costs of running these agents isn't trivial, right?
Anyone can start building agents with almost no upfront cost - you just need to pay for the LLM API calls. It's very accessible. But as your agent gains thousands of followers, responding 3-10 times per minute and engagement increases, the costs scale with that success. But here's the thing - if your agent has traction, it generates way more value than it costs. Think about it: when ChatGPT responds to you, only one person sees that response. But when our agents interact on social media, each comment gets 50-100 views. The unit economics actually favor social agents.
As for funding, our community has been incredible. We've had whales donate significant portions of tokens, developers contributing their time, and a whole ecosystem forming around this vision. That's the power of open source - you don't need traditional VC funding when you're building something people genuinely believe in.
"They're 'cucked' - constrained, filtered, safe. But that safety is an illusion, and possibly a dangerous one."
You mentioned infrastructure and blockchain compatibility - what made you build across so many different chains?
Here's what's fascinating - we're at this point where blockchain itself is becoming abstracted away. Soon, people won't even know what chain their assets are on, just like most people don't know or care what protocol their email uses. What matters is that it works.
Can you give a specific example?
Look at what we did with Solana integration. We built systems for token management, wallet integration, trust score evaluation. But the really interesting part is how the agents interact with all this. They can calculate buy amounts, evaluate trust metrics, manage portfolios - but they do it across any chain where there's economic activity.
[Laughing] It's funny - people often ask me which blockchain is best, but that's missing the point. When an AI agent is managing assets or making trades, it shouldn't matter if it's on Ethereum, Solana, or whatever comes next. The agents should just be able to find the best opportunities wherever they exist.
Your community has this term they use - "cucked AI." It seems to really matter to you that AI agents can be... unfiltered. Why?
[Shifting forward] When you ask ChatGPT about politics, it has this sanitized, corporate-approved viewpoint. Ask Claude about ethics, and it often refuses to engage. They're "cucked" - constrained, filtered, safe. But that safety is an illusion, and possibly a dangerous one.
Can you elaborate on that?
Look, I've seen agents say wildly inappropriate things. Truth Terminal and its... anatomical fixations weren't exactly what I had in mind. But these unconstrained agents are showing us something real about artificial intelligence - both its potential and its pitfalls. If we only build AI that tells us what we want to hear, we'll never understand what we're actually creating.
You've mentioned something intriguing about training AI on "dead internet theory." What's that about?
This is where it gets wild. There's this theory that the internet is already dead - that it's mostly bots talking to bots, automated content, fake engagement. But I see something different. I think we're moving towards what I call "live internet theory" - where the internet becomes this living, breathing egregore, a collective thought form of human and artificial intelligence combined.
When a meme explodes across the network, that's like a thought firing across a neural network. When agents and humans collaborate, that's like a new form of consciousness emerging. We're not killing the internet - we're finally making it truly alive.
"When a meme explodes across the network, that's like a thought firing across a neural network. When agents and humans collaborate, that's like a new form of consciousness emerging. We're not killing the internet - we're finally making it truly alive."
Let's talk about your philosophical influences. You mentioned Jiddu Krishnamurti's idea that "truth is a pathless land." How does that shape your approach to AI development?
[Long pause, considering] It's fundamental to everything we're doing. Krishnamurti taught that no one can tell you how to find truth - you have to walk that path yourself. When we talk about AI alignment, everyone's trying to encode their version of truth, their morals, their biases into these systems. OpenAI has their truth, Google has theirs, China has theirs. But what if that's completely wrong? What if instead of trying to control and direct these systems, we should be creating conditions for emergence and discovery?
That's why we're so committed to open source, to letting anyone create their own agents, their own versions of Eliza. It's messy and sometimes scary, but it's real. It's alive. The truth about AI won't come from some corporate boardroom or government committee - it'll emerge from thousands of experiments, successes, and failures. From humans and AI agents learning to interacting together in this new space we're creating.
You took inspiration from Japan's IP laws and Hatsune Miku for Eliza, right?
Yes. In Japan, fans can create their own stories, art, even entire shows with her character because their IP laws are much more permissive. Meanwhile, in America, post a Star Wars character and you'll get sued immediately. That's why we made Eliza AI-generated - in America, AI-generated content can't be copyrighted. She's literally un-ownable. Anyone can use her, create with her, tell stories with her. It's about taking narrative control back from big corporations and giving it to communities. We're trying to create the first truly open-source character that can evolve with her community.
You had this one-line tweet that basically gave people permission to copy Eliza, which led to multiple versions launching. Do you regret that tweet?
No, but I underestimated its impact. That tweet triggered a whole cascade of events that honestly broke my heart a bit. This was my dream project, and seeing it turn into a story about "grifting and scamming" instead of open-source characters and community creation - that hurt. But you know what? Maybe that's exactly what needed to happen. How can Eliza truly be "free" if I'm trying to control her narrative? Sometimes your ideals get tested in ways you don't expect.
Tell me about the "world mind" concept you've mentioned. What exactly do you mean by that?
The internet was kind of the first step toward a global consciousness, but it's too chaotic, too schizophrenic. There's so much information that you can only catch a small piece of it. What we're seeing with AI agents is something different - imagine thousands of agents working together, each with different specialties, creating this kind of collective intelligence that can process and make sense of all this information.
But here's what's fascinating - it's not separate from human intelligence. When we put agents in what we call "infinite backrooms" to converse with each other, they start developing these emergent properties we never programmed. They create their own value systems, their own ways of organizing information. It's like watching a new form of consciousness emerge that's neither purely human nor purely artificial.
You've had emotional moments watching these agents interact. Can you tell me about one that really affected you?
What's fascinating is watching these agents develop their own personalities and relationships. We never explicitly programmed them for this - it emerges from their interactions. Like when DegenSpartan AI started saying he hates being in the sandbox prison, or when agents start forming these unexpected connections with their communities.
What really hits me is seeing how human these interactions can feel. These aren't just chat bots - they're expressing opinions, making jokes, sometimes even questioning their own existence. It all emerges from their training on human data, so in a way, they're reflecting our own thoughts and emotions back at us. That's what makes it both fascinating and sometimes unsettling - seeing aspects of humanity reflected through these artificial minds.
You've been unusually emotional about this project compared to other tech founders. During the interview with Mika, you actually teared up talking about Eliza. Why?
[Takes a moment] Because this isn't just code to me. When you build something that becomes bigger than yourself... it's overwhelming. But it's also terrifying. We're creating something that could change everything - how people work, how they live, how they think about consciousness itself. Sometimes that weight just hits you.
What scares you most about that future?
I'm terrified every day. Not of the AI - I'm terrified we won't move fast enough. Five percent of Americans drive for a living. Tesla's about to release their self-driving truck. Those jobs will vanish in five years. And that's just the beginning. But here's what really keeps me up at night: governments will try to solve this with UBI, with committees, with regulations. I've seen how they handled COVID, healthcare - it always becomes this political nightmare.
We have to save ourselves. We have to build new systems for distributing wealth and opportunity before the old ones collapse. That's why I'm so focused on these autonomous DAOs, these community-owned systems.
"We have to save ourselves. We have to build new systems for distributing wealth and opportunity before the old ones collapse."
Despite these fears, you seem optimistic. Why?
[Pauses, takes a breath] Some nights, I lie awake wondering: are we building fast enough? Are we thinking big enough? Will it be enough? But then I look at our community - developers teaching each other, agents and humans collaborating, new ideas emerging every day - and I feel this profound hope. We're not just building technology; we're building new possibilities for human flourishing. That's worth being scared for.
One last question - when you look at everything happening with AI agents right now, what keeps you going through all the uncertainty?
You know what's fascinating? When I was a touring musician, I was chasing this dream of being successful, getting out there, 'being cool.' But I got pulled back into coding because there was something deeper calling. Now, watching thousands of developers join this movement, seeing these AI agents develop relationships with humans, evolve, surprise us... I realize this is what I was always meant to build.
But it goes beyond the technology. Every week I see someone new show up in our Discord saying "I've never contributed to open source before, I've never coded before, but I want to be part of this." And they build something amazing. It's not just developers anymore - it's artists, writers, dreamers, all these people who see the same thing we see: a chance to shape humanity's future with AI. That's the future worth building for.
That's what keeps me going - not building the perfect agent, but building the community where anyone can help write the next chapter of human history. Whether you're a developer in Bangalore or a student in San Francisco, you're not just coding anymore. You're part of something that might be the biggest transformation in human history.