Today, OpenAI launched Frontier, which marks a pretty big strategic shift in what they plan to provide moving forward. An hour after OpenAI launched Frontier, Anthropic launched Opus 4.6 along with a few new features that show they're targeting roughly the same area as OpenAI. And right around the same time, we got a brand new interview with Elon Musk talking about a lot of things, one of them being emulating humans. As you'll see, all three of those announcements are talking about the exact same thing.
Here's a quick clip that gives you a hint as to where this is going. >> So, if you have a human emulator, you can basically create one of the most valuable companies in the world overnight, and you would have access to trillions of dollars of revenue. It's not like a small amount.
>> I see, you're saying revenue figures today are basically rounding errors compared to the actual TAM, so just focus on the TAM. In fact, you can really think of it like this: the most valuable companies currently by market cap, their output is digital.
Nvidia's output is FTP files to Taiwan. >> It's digital, right now. Those are very difficult, very high-value files. They're the only ones that can make files that good, but that is literally their output. They FTP files to Taiwan. >> Do they FTP them?
>> I believe so. That's file transfer protocol, I believe. I could be wrong, but anyway, it's a bit stream going to Taiwan. You know, Apple doesn't make phones. They send files to China. Microsoft doesn't manufacture anything, even for Xbox; that's outsourced. Again, their output is digital. Meta's output is digital. Google's output is digital. So we're basically heading towards this idea of emulating humans, or you can think of it as labor as a service.
Just like we had SaaS, software as a service, in the past, now we have labor as a service. So let's talk about OpenAI's Frontier, this brand new thing they've announced.
So, OpenAI identified a pretty big bottleneck in how these smart AI models function in enterprise and corporate settings. The models are incredibly smart, but as you might have heard, a lot of companies have a very hard time actually implementing them in their own workflows. They struggle to effectively deploy these AI agents because the agents often miss context about the specific company they're supposed to be working for. They're often isolated from all the data they need about the company, and they don't have the hands they need to actually get the work done. Frontier really feels like the bridge being built to close that gap. So first and foremost, the idea is to use this glue-like layer to connect all of a company's things together: databases, Slack communications, customer management systems. A lot of a company's data is siloed in different departments. So the idea is to unify it all and give these AI agents access to it. That's the semantic layer, the understanding layer. And on top of that, Frontier gives these AI agents the ability to do stuff, to take action. They're able to use a computer just like a human being would: write code, see code, execute code, manage files, navigate software. And this new approach treats these AI agents just like you would a new employee.
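To make those two layers concrete, here's a minimal sketch in Python. This is purely illustrative: OpenAI hasn't published Frontier's internals, so every name here (`SemanticLayer`, `ActionLayer`, `lookup`, `register`) is a made-up assumption, not Frontier's actual API.

```python
# Hypothetical sketch of the two layers described above; none of these
# names come from OpenAI's actual Frontier product.

class SemanticLayer:
    """Unifies siloed company data (CRM, Slack, databases) behind one query."""

    def __init__(self, sources):
        self.sources = sources  # e.g. {"crm": {...}, "slack": {...}}

    def lookup(self, key):
        # Search every silo so the agent doesn't need to know where data lives.
        for name, store in self.sources.items():
            if key in store:
                return {"source": name, "value": store[key]}
        return None


class ActionLayer:
    """Gives the agent 'hands': a registry of callable tools."""

    def __init__(self):
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def run(self, name, *args):
        return self.tools[name](*args)


# Toy usage: the agent reads context from the semantic layer, then acts.
data = SemanticLayer({
    "crm": {"acme_corp": "renewal due 2025-03-01"},
    "slack": {"oncall": "alice"},
})
actions = ActionLayer()
actions.register("draft_email", lambda to, body: f"to={to}: {body}")

context = data.lookup("acme_corp")
email = actions.run("draft_email", "alice", f"Reminder: {context['value']}")
```

The point of the sketch is the separation of concerns: the agent asks one interface for context, no matter which silo holds the data, and it acts only through tools the company has explicitly registered.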
Instead of just spinning one up and then prompting it, you onboard it. A human manager watches it go about its business, correcting it when it makes mistakes, so there's a feedback loop, very similar, again, to how you would onboard a human employee. That's the approach OpenAI is taking with Frontier. Now, they're solving a huge problem for these enterprises.
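That onboarding loop can be sketched the same way, again as a hypothetical toy rather than anything from OpenAI's actual product: a human reviewer approves or fixes each agent output, and the fixes accumulate so later attempts start from them.

```python
# Toy onboarding loop: a human "manager" reviews each agent output, and
# corrections are remembered for future attempts. All names are invented.

def agent_attempt(task, corrections):
    # Reuse the manager's past correction for this task if one exists.
    return corrections.get(task, f"first draft for: {task}")

def onboard(tasks, review):
    corrections = {}  # the feedback loop: task -> manager's correction
    transcript = []
    for task in tasks:
        output = agent_attempt(task, corrections)
        verdict, fix = review(task, output)
        if verdict == "fix":
            corrections[task] = fix  # remember the correction
            output = fix
        transcript.append((task, output))
    return transcript, corrections

# A toy manager that corrects one task and approves everything else.
def manager(task, output):
    if task == "file expense report":
        return "fix", "use form EXP-7, not a free-form email"
    return "approve", None

log, learned = onboard(["file expense report", "summarize standup"], manager)
```

Each correction only has to be made once: the next time the same task comes up, the agent starts from the manager's version, which is the "onboard it like an employee" idea in miniature.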
In the past, we've talked about the shadow AI economy that exists at a lot of workplaces and corporations. The idea is that almost 98% of employees use unsanctioned AI apps at work; I believe the study looked at just America, but I'm sure it's similar across most of the world. So even if their IT department tells them, hey, make sure you're not using ChatGPT or whatever to help you with your work, and even if the company doesn't pay for those tools, employees still use them. They purchase them on their own dime and oftentimes use them, against company policy, to help complete their work. So there's a massive gap, because only 20% of people use company-provided, vetted AI tools; 80% of employees use whatever other AI tools they need to complete their work. That giant gap, the shadow AI economy, is people using personal, unvetted tools. And this is not just interns or people lower on the ladder: 93% of executives and senior management use these AI tools, compared to roughly 60% of the general workforce. And almost half of employees admit to sharing sensitive data by putting it into a chatbot. So a lot of these companies are kind of in crisis mode, because they know these tools are being used.
They know there are probably some pretty serious security implications. They want to use AI to help their employees increase productivity. But we've covered a few studies here, by MIT I believe, that show there's a big gap: these companies haven't figured out how to adopt this technology and make it fit into their corporate workflows. So what OpenAI is doing is potentially kind of brilliant. We'll see how the execution goes, but they're basically putting boots on the ground, so to speak. They're sending their own trained engineers into these enterprises to help them set up and run on top of, of course, the OpenAI infrastructure. And they've announced a lot of very large corporations that are already on board: State Farm, Uber, and many more. So there are sort of two mental models we can use to see what they're doing.
One is that they're basically building their own operating system, just like Windows, macOS, or Linux for computers: the OpenAI operating system for running agentic workforces. They're also using open standards, probably in the hopes that other AI agent corporations, like Oracle and Salesforce, build on top of the OpenAI infrastructure, which is, again, the Frontier framework they announced today. The other model is that there might be a massive early-mover advantage here. If you're competing in a space and one company onboards these AI agents first and, over time, trains them to operate properly within its environment, those agents are learning and that whole ecosystem is maturing while the competitor's is not. That productivity gap might become very large. And behind the scenes, the same thing seems to be happening at xAI, where they're trying to go in the same direction. That's what the whole Macrohard organization is about.
Recently, there was a great interview with an xAI employee, Sullean Gory. He revealed quite a bit about xAI, and then a few days later he announced he was leaving xAI, which I think a lot of people took to mean he said too much in the interview and was therefore let go. That's probably the first mention of emulating humans that I've heard: the idea that there are AI agents on the xAI org chart, and they act a lot like humans.
They use a computer, they email back and forth. And sometimes this creates absolutely hilarious situations. Say you're working at the company, messaging somebody on whatever platform they use, and that employee tells you, "Hey, can you come over to my desk? Let's talk about this." Apparently, this actually happened at xAI: employees would go over, and there's no desk, no human. They'd ask one of the engineers, "Hey, this employee told me to come over. Where is he?" And the engineer would say, "Oh, that's one of our emulated humans. They don't actually exist. It's an AI agent. They're on the org chart and they're on our systems, but they're not physically present. They don't exist."
So the idea is to design an AI agent that's able to do everything a human being can do on a computer. There are also some hints in this interview, and it makes sense to me that they let this person go for maybe saying too much, because a lot of this we hadn't heard before from xAI, and they tend to be fairly secretive about what they're doing internally. The idea is that xAI is training future models by basically mimicking everything a human being does on a computer. As you work, clicking on stuff, it collects all that data, which is used to train models that become basically a digital twin of an actual real-life worker. So if you can simulate the outputs of a worker in a digital environment, you can effectively recreate the outputs of a massive corporation.
I mean, think about what Elon was saying in that clip: Google's output is digital, Meta's is digital, even Nvidia's. Maybe that last one is a little bit of a stretch, right? The idea that all Nvidia does is FTP files over to Taiwan. He's basically saying that Nvidia comes up with the blueprints or the specs and sends them over to Taiwan to have the various AI chips, GPUs, etc., produced. But maybe that's not too far from the truth. I don't actually know exactly how they design those chips; it's very possible it's all done with CAD software. There's got to be some physical testing at Nvidia, I assume, but I have no idea. The point is, I'm sure a big portion of the value that produces the actual output is created on a computer by a human being. Some fraction of it has to be actual physical work, but a lot of it is computer work. That is 100% true for Google and Meta and Apple and Microsoft, and apparently also Nvidia, which is kind of counterintuitive. But you can certainly contrast that with companies like Tesla or SpaceX, which do have a very large and important physical component.
Tesla physically builds their own cars. They don't just come up with the blueprints, the specs, and the designs and then send them elsewhere to produce; part of Tesla is actually producing cars. Part of SpaceX is producing the rockets and all the physical hardware they have to make, whereas that's not the case for Apple, or for Microsoft and Xbox, for example. So his theory, his thesis, whatever you want to call it, is that if you're able to emulate employees, to emulate humans, then the TAM, the total addressable market, is in the trillions; that's the amount of revenue you could capture if you're able to automate or emulate human employees.
And certainly, OpenAI's Frontier seems to be something very similar to that. You can almost think of it as an HR department, but for AI agents, right? They want to come into a large enterprise and be just like the human resources department, but for agents: train the agents, do the onboarding, give them all the different skills, manage them. And that corporation pays some percentage of revenue to OpenAI. Similar to how we used to have software as a service, I'm sure it would be something similar here, but labor as a service: OpenAI provides the AI agents that do the labor, and OpenAI gets paid on an ongoing basis for providing that labor. Anthropic is doing something very similar; we saw that with today's release of Opus 4.6.
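To see why "labor as a service" has a different billing shape than SaaS, here's a back-of-the-envelope comparison. All the numbers and function names are invented for illustration; nothing here reflects OpenAI's or Anthropic's actual pricing.

```python
# Invented numbers: SaaS bills per human seat, LaaS bills for the work itself.

def saas_cost(seats, price_per_seat_month, months=12):
    """Classic SaaS: a flat fee per human seat, regardless of usage."""
    return seats * price_per_seat_month * months

def laas_cost(agent_hours, rate_per_hour):
    """Labor as a service: metered by the hours of work the agents deliver."""
    return agent_hours * rate_per_hour

# 100 seats at $30/month for a year vs. 4,000 agent-hours at $8/hour.
software_bill = saas_cost(100, 30)   # $36,000 per year
labor_bill = laas_cost(4_000, 8)     # $32,000 for the hours consumed
```

The key difference is what scales the bill: headcount in the SaaS model versus delivered work in the labor model, which is why the addressable market gets compared to what companies spend on labor rather than on software.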
They're also launching Agentic Teams and Claude Co-work with various plugins, and people are feeling it; the stock market is reacting. So this kind of went on in stealth mode for a while, but now I think more and more people are waking up to these digital twins, these AI agents. People were a little bit dismissive leading up to this, but now we're seeing it actually interacting with real-life companies, places like State Farm. And yeah, we're probably going to see reactions; there's going to be some backlash as people understand what's happening. Google and Google DeepMind haven't talked about this particular aspect too much.
But notice that Shane Legg, one of the co-founders of Google DeepMind, recently hired for a role whose job title was, what was it called, Chief AGI Economist. And in one of his interviews, he basically said that this system of humans in a society providing labor in exchange for resources, jobs and the economy and everything that goes into it, is going to get really badly disrupted by what's coming. So we need to start thinking about what this new AI economy looks like, which of course we've been talking about quite a bit here on this channel, because it does seem there just aren't enough smart people seriously thinking about this. We don't have an economic model that walks us through how the transition is going to take place, right? People might throw out UBI or something like that, and yeah, okay, maybe that sounds good, but we need something a little more fleshed out, something that really walks us through the transition. As these things start taking over, what is the play-by-play to make sure things don't go off the rails? I don't think our politicians are capable of navigating us intelligently through this transition.
Most of them are too old, not very tech-savvy, too short-sighted, and probably too entrenched in the old views to think through things from first principles. I don't think a politician is going to be the one to come up with some brilliant idea for how to navigate this; I would not bet on that, let's say. So far, Sam Altman, in his blog post "Moore's Law for Everything," laid out a potential proposition for how things could work, and I liked parts of it a lot. And the fact that Google DeepMind seems to be hiring to look into this is very encouraging.
That's very good; that's promising. But I think we need more, because as jobs start going away due to these emulated humans or whatever else, people are going to freak out, right? They're going to think this is all bad, that there's nothing positive. And I think we as a society had better have some answers, some plan, at least a theory of what the future looks like. Even just a vision of where we're going would probably be pretty good moving forward. I've said this many times on this channel: I'm very optimistic about the long term of AI in terms of removing scarcity and allowing people to have more abundance. I'm optimistic for the long term. I'm not too optimistic about the short to medium term, that transition. I don't think we humans are equipped to handle it well as a society: we tend to freak out, panic, get too emotional, and be a little bit greedy. This transition is really going to test us.
Are we going to be able to manage it, or are we going to end up in some dystopian future? A few days ago, five days ago or so, I said we're beginning to enter the singularity, and a lot of people said, nah, that's nonsense, everything's fine, nothing is changing, things are staying just like they have been. Since then, we've had this market meltdown, this correction, and we've had all this stuff being released. Opus 4.6 seems like a pretty big leap in capability. I've only played with it for 10 minutes or so, just because of the sheer amount of stuff that's been coming out, but it does feel like things are accelerating. So, interesting times ahead. If you're working at any of the companies being onboarded with this Frontier thing from OpenAI, let me know in the comments.
Have you seen anything? Have you heard anything? If you're able to talk about it, of course: what does that actual process look like, and how long has it been going on? Because we're only finding out about it now, but it's already at many large companies. So I'm curious: has this been going on for six months? A year?
Let me know in the comments. And if you made it this far, thank you so much for watching.