Unknown 0:00 The swarm has arrived, in the form of Moltbook. Now, in the grand scheme of things, this was always going to happen. So the question is: what does it mean from here, and what do we do about it? Before we get into that, let me give you a little bit of a preview of what Moltbook is.

If you haven't seen it in the news, Moltbook is basically Reddit, but for agents. It's literally billed as, like, "the front page of the internet, for agents." It's designed after how Reddit works, where you can create communities and posts, upvote and downvote, leave comments, that sort of thing, but it is for agents only. And by agents we mean AI agents specifically; it's been built around the skills capability of OpenClaw, which was formerly Clawdbot. So that's what it is. I don't want to spend too much time on it; I want to get to the good stuff. If you want a little bit more, there are plenty of resources out there, or you can just go to moltbook.com and take a look for yourself.

Now let's talk about what's bad about it. The bad part is that Moltbook was created by one guy and OpenClaw was created by another guy, and neither of them knows much about security, anything from database security to root access, all of that. And they say it themselves: this is a beta, basically an MVP. What they have built would have been good enough to run on your own computer in a sandboxed environment, and that's what it was for. It was never meant for production. So the very first thing is that both of these platforms are extremely, extremely full of holes; an absolute security nightmare. Now, that is, of course, as they are built today. That doesn't mean that a Reddit for agents is intrinsically unsafe and will be unsafe forever.
It doesn't mean that an autonomous or semi-autonomous agent running on your computer is intrinsically unsafe and will be unsafe forever. It just means that these guys rushed through it as quickly as possible. And anyone who has been in technology or software development knows the saying: first make it work, then make it good. So they basically just got it barely across the finish line of "hey, this is vaguely useful, this is vaguely interesting," and then they shipped it immediately. The guy who created OpenClaw was literally on a podcast saying, "I ship code that I don't look at." It's all vibe coded, 100% of it. Actually, it's beyond vibe coded: he gave it to an agent and told the agent to fix it.

Now, with that being said, there are other layers of problems, and what I want to talk about is the AI safety layer of the problem. What I want to point out is that none of the doomers, people like Yudkowsky and Connor Leahy, anticipated the emergent alignment problem. They were all focused on the monolithic alignment problem: you need to have a model that is good. None of them talked about agents, and none of them talked about agent swarms.

For those of you who have been around for a long time, you remember the GATO framework, the Global Alignment Taxonomy Omnibus. That work was categorically ignored by the safety doomers. This was back when I took AI safety and X-risk seriously. What I talked about back then is that there are three technical levels of alignment. Model alignment is just the ground floor: that's RLHF, that's Constitutional AI, that sort of thing. Layer two is agent alignment, or what we called autonomous entity alignment, because the term "agent" hadn't really been solidified yet.
So agent alignment is: how do you actually build a software architecture that is safe? Because even though... So here's the thing, ... (This transcript was pasted from WhatsApp; generated by otter.ai.)
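The "agent alignment" layer described above, building a software architecture that is safe regardless of how well the underlying model is aligned, can be illustrated with a minimal sketch: a deny-by-default policy gate that sits between an agent's planner and its tools. All names here (`PolicyGate`, `Action`, the allowlist contents) are hypothetical illustrations, not part of OpenClaw, Moltbook, or the GATO framework.

```python
# Hypothetical sketch of an agent-alignment guardrail: before an autonomous
# agent executes any action, a policy gate checks the requested tool against
# an explicit allowlist and blocks targets that touch sensitive resources.
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    tool: str    # e.g. "http_get", "shell", "file_write"
    target: str  # URL, path, or command the agent wants to touch


class PolicyGate:
    """Deny-by-default gate between the agent's planner and its tools."""

    ALLOWED_TOOLS = {"http_get", "file_read"}
    FORBIDDEN_SUBSTRINGS = ("/etc/", "~/.ssh", "DROP TABLE", "sudo ")

    def permits(self, action: Action) -> bool:
        # Unknown tools are rejected outright (deny-by-default).
        if action.tool not in self.ALLOWED_TOOLS:
            return False
        # Even allowed tools may not touch sensitive targets.
        return not any(s in action.target for s in self.FORBIDDEN_SUBSTRINGS)


gate = PolicyGate()
print(gate.permits(Action("http_get", "https://example.com")))  # True
print(gate.permits(Action("shell", "sudo rm -rf /")))           # False
print(gate.permits(Action("file_read", "~/.ssh/id_rsa")))       # False
```

The design point is that safety lives in the architecture, not the model: even a perfectly "aligned" model can be prompt-injected, so the gate enforces limits the model cannot talk its way around.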