In my career working in AI, I have yet to meet a single person that ever felt like they had enough compute. >> I could not ask for a better guest: Andrew Ng, globally recognized leader in AI. >> Data centers [music] are the critical infrastructure for building the digital economy. I think that open weight models are a tremendous source of geopolitical influence. [music] The work ethic, the velocity. When China's government makes an all-nation commitment, an all-industry commitment. [music] That's actually a very powerful force that I wouldn't underestimate. >> Ready to go. [music] Andrew, I've been
an admirer for a long time, so I've been really looking forward to making this happen. Thank you so much for joining me today. >> No, thank you, Harry. I watch a bunch of your shows. I really enjoyed your recent one with my friend Martin Casado as well; that was very memorable. I'm actually thrilled to be here. >> I love Martin. A very, very special man. I want to start with something that you've said before. You said AI is the new electricity, and when I think about electricity and where we are today, I want to
understand the bottlenecks. Everyone seems to suggest that it really is about data, compute, and algorithms. Are those the three parameters through which we should think about bottlenecks? And if so, which one do you think is the biggest? >> I would say there are two big bottlenecks right now. I think electricity is one of them. In the US, I am honestly worried that many data center operators are stuck in permitting. I know that local community support is important, and some people don't want a data center there. But just as we built roads and railways as the infrastructure for an earlier generation, data centers are the critical infrastructure for building the digital economy. So the lack of electricity in America and in a number of Western countries is a problem, and in contrast I see China building power plants left and right, including nuclear. That will be an interesting dynamic. And then semiconductors are another bottleneck. But AI is so complicated, I think we also need more data, we also need better algorithms. All of it
is worth working on, but in the short term the constraints are electricity and semiconductors. >> Can you talk to me about the constraints around semiconductors that you think are most pressing, that most people don't realize? >> First, in my career working on AI, I have yet to meet a single AI person that ever felt like they had enough compute. Give us any amount of compute, we will use it all up and say we still don't have enough. So this has been a constraint for the last 20 years or so. But what I'm
seeing is that with the rise of generative AI, there are very valuable workloads. For example, AI-assisted coding is fantastic; it's making us so much more productive. But if you use Claude Code enough, sometimes you get rate-limited. I find that many companies really have excess demand, which is a very rare problem to have. So many people want more LLM inference, want more tokens generated, and we just don't have the semiconductors, the data centers, the electricity to meet the demand. There's a lot we could do with AI token generation, and it's frustrating when, on the supply side, we can't supply enough to the people that want it, and on the demand side, you get rate-limited if you use too much. >> How should I think about that insatiable need for more compute and the improvements that come from it, with the recognition that many people say GPT-5 was the example that scaling laws have been reached to a certain extent, and that a focus on efficiency has been the transition? How do I balance those two seemingly differing opinions? >> So it
is true that token generation is getting more efficient and cheaper. In fact, if you look at OpenAI's open weight model, they released models that are very efficient to run. I think they did a good job; it was something like 120 billion parameters with, I think, around 5 billion active. So it's actually a very efficient model to run. But despite the cost of token generation falling, our demand for it is insatiable. One interesting thing that's happened in AI
is, if we look at where the buckets of value are, one of the big buckets is AI-assisted coding, and I think this hearkens back to an earlier era. In a previous generation, Google came to dominate horizontal information discovery, like web search, but there was room for lots of verticals when the internet was being built. So we wound up with, you know, Travelocity and Expedia fighting it out in travel, a bunch of folks fighting it out in retail, a bunch of others in transportation, social media, and so on. What we're seeing now is that ChatGPT has such a strong consumer brand that it seems to be the dominant player in the new generation of horizontal information discovery, although I think Gemini, with its channel advantage through control of Android and Chrome, is a serious player as well. But if that's how horizontal information discovery turns out, then there's still plenty of room for lots of verticals to be built out, and one of the clear buckets of really valuable verticals is AI coding assistants, where Claude Code, which, you know, I use
that every day, love it. OpenAI's Codex has a lot of momentum as well. It's clearly making developers so much more productive and efficient that the demand is just through the roof: let us use more and more of this. One thing I find exciting is that I often look at AI coding assistants as a harbinger of what might happen to other job functions as well, as AI marketing tools become more efficient, as AI recruiting tools and AI finance tools become more efficient. So I often look at AI coding assistants as a
foreshadowing of what may happen to other sectors as the tools get better for them too. >> I had Joelle Pineau from Cohere, formerly of Facebook, on the show recently, and she said that AI coding assistants are in the same place that image generation was in 2016 or 2017 in terms of maturity. Do you think that's a fair statement of the environment today, or do you not think so? >> I don't know. I think it's further along. In 2016, image generation wasn't super valuable; I don't remember it being that valuable back then. But I think today AI coding assistants really are. At AI Fund, with my head of engineering recently, I said, hey, let's think about standardizing on tools, and basically he said, I need these tools and you'll have to pry them out of my cold dead hands. I think our developers feel really strongly. I myself don't ever want to have to code again without our AI coding assistants. I think the tools are really working well, but still with
a lot of headroom for how much better they can get. >> I do just want to go back to the core bottlenecks. We said they're about electricity and we said they're about semiconductors. I think when we look at the build-out of data centers today, as you said, regulation has been a big part of preventing that in a lot of ways. Do you think Trump has done more to help or to hurt the progression of AI in the United States from an infrastructure perspective over the last few
years? >> The US federal government has done some good things and some less helpful things. I feel like clearing out unnecessary regulations has been a very good move. Even last year, with the bipartisan Schumer AI Insight Forum, I think there were a lot of people lobbying the US government to pass stifling regulations. There's a lot of hyped-up AI safety narrative saying AI could lead to human extinction, which is kind of a ridiculous statement, used to try to get stifling, anti-competitive regulations passed, often to try to shut down open source
open weight models. Fortunately, we beat back a lot of that. I think the bipartisan AI Insight Forum did a really good job digging into the truth and concluding that America should be investing in AI rather than passing unnecessary regulations to slow it down. I think Trump did a good job, and his whole team, David Sacks, Sriram Krishnan, and so on, did a good job clearing out unnecessary regulations. On the flip side, one of America's huge competitive advantages has been this ability to attract talent, including high-skilled talent as well as, frankly, young talent that may not currently be high-skilled but could be in the future. So to the extent that America is not investing as much in attracting talent, I think that would be an unforced error. And then lastly, investments in science: helping our institutions of higher education have the resources to train our grad students, investing in science and technology. I think that's really precious. So anything that damages that
I think would also be very unfortunate. >> If I gave you a regulatory magic wand, Andrew, what would you change that would have the most significant, needle-moving impact? >> America is fortunate to have a lot of very smart people wanting to come here to work on really challenging, really tough problems. Many of our Nobel laureates are immigrants; Einstein, for example, was an immigrant. I think continuing to cultivate America as a place that attracts great talent to work together, in a democratic nation that
respects the rule of law — I think that would help us move ahead. I think that securing the semiconductor supply chain would be very valuable as well. I have a lot of friends in Taiwan; I love Taiwan. But America's dependency on TSMC is concerning in case anything happens. And then, frankly, there's one very funny thing that's happened in society, which is that there was recently a Pew report showing, I think, how many Americans think AI would be good for them — who is enthusiastic for it versus not. And even though a lot of AI technologies were invented in America, a lot of people don't trust or don't like AI. >> The joy of what I do, Andrew, is I get to speak to incredible people and then cross-reference what they say. David Cahn from Sequoia said, hey, a really useful barometer for effectiveness is whether AI can replace the bottom 5% of capabilities of what a workforce does. Joelle from Cohere said no, that's crap; the real question is, can it 10x people's ability? Forget the
bottom 5% — can it 10x? How do you think about a barometer for success of the workforce with AI, with those in mind? >> In the case of software engineering, it is accelerating the writing of code. There are so many projects that used to take six engineers half a year to build that today I, or one of my engineers, can build in a weekend. I hope that we never have to go back to coding without our AI assistants again, because the acceleration, the productivity boost, is incredible. For example, one weekend I thought, oh, my daughter wanted flash cards to practice multiplication. So I thought I could either drive to the store and buy a bunch of flash cards for her, or I could just use AI to write code for me to generate and print out a bunch of flash cards. And I did the latter. This is a very low-economic-value use of AI-assisted coding, but I could get that done
very quickly. >> Do you think vibe coding is an enduring market? Do you think everyone will want to code and accessibility is important, or do you think it bluntly just allows builders to build better and more efficiently? >> I think we need all of the above. You know, I've had mixed feelings about the term vibe coding, but nitpicking terminology aside, I think everyone should learn to code. What I'm seeing is that for a lot of job roles that aren't just software engineering, people that can code can get more done than people that
can't code. For example, my marketer wanted to run a user survey once. She wanted something for people to give live feedback, and she looked in the app store and couldn't find anything. So she said, you know what, I'm going to spend two days to code it up. It took her two days, but my marketer then built a little mobile app where users could swipe left or right to give feedback on some marketing messages we wanted to test. And because of that, we were able to run user experiments, get feedback, and it helped her do her job better as a marketer. Whereas, in contrast, a marketer that couldn't code a little app to let people swipe around and give feedback would just not have been able to do this, would not have gotten feedback, would not have been able to move forward. Today, my best recruiters not only screen resumes by hand, they are writing prompts to get AI to help them screen resumes. And it's been interesting. >> Which is amazing. But going
to your point on people shouldn't be fearful, and yet they are fearful: you see that this would lead to efficiency gains, which mean headcount reductions. If I can screen so much more with AI, I don't need my three other analysts. >> I'm not into this kind of fear-mongering. I think there's a small subset of jobs that, frankly, are in trouble, but for the vast majority of knowledge workers — actually, here's one thing about hype. AI is amazing, but there's a lot of stuff it can't do. This phantom AGI someday, where AI can do everything a human can do — I think we're very far away from that. I would say decades away, maybe even longer. And the trick is, if AI could do 30% of a recruiter's job — who knows, maybe 50%, although that feels a little bit high — there's another 50 to 70% of stuff that we still need the human to do. But it's also clear that if you use AI and someone else doesn't, that's actually a huge
difference in what you can accomplish. So you're much better off using AI. But because AI can't do everything, there's still plenty of work that we need humans to do for a lot of job roles. >> Do you not think we have a white-collar talent pipeline problem, though? Whether you're a consultant or a legal associate in the junior ranks, a lot of what you can do is being replaced by AI, and firms are actually cutting juniors. You're seeing this across the board. And so the fear is we're going to
have this talent hole where, in 10 years' time, there are no juniors to move up into seniors, because we've replaced them. >> Yeah, I don't think it's as dire as that. I think there is a big problem, but I don't think it's exactly that problem. So let me tell you what I'm seeing in software engineering. The most productive engineers I know are not fresh college grads. They are people with 10 or 20 years of experience who are really on top of AI, who know the AI tools and understand AI. Those people, with experience and on top of AI, move faster than anything the world has seen even one or two years ago. One tier down is actually the fresh college grads that are really on top of AI. I've hired quite a few fresh college grads that, for whatever reason, through their social network and community, really learned the AI tools, and they move really fast, but they're not as good as the experienced people who know AI. One tier down from the fresh college grads is the people with 10 years of coding experience, but who had a
comfortable job and, for whatever reason, are still coding like it's 2022, before ChatGPT. I just don't hire people like that anymore. But there are people with comfortable jobs who kept coding the old way and just did not learn AI. I think those people may get into trouble at some point. >> And then — >> Oh, but there's one other tier that is in trouble, which is the fresh college grads that don't know AI. One unfortunate thing is that university curricula are slow to change. So I actually feel pretty bad that even today there are universities graduating CS undergrads that have not made a single call to a single API on the internet. Imagine graduating a CS undergrad that has never heard of cloud computing — it's like, what is a cloud? That's weird. You just can't be a CS major and not know how to do things on the cloud. I'm getting to a point where I don't think it's right.
I feel like we've got to not train CS majors without also making sure they know how to use AI to help them with coding, without also making sure they know the AI building blocks. That's a cohort of students entering the job market that's really struggling. But the fresh college grads that know AI — we can't find enough of them. So many businesses would love to hire those fresh college grads. >> I just want to touch on the 10x, 100x engineers that you said are just amazing. We're seeing pay packets, compensation
bands, larger than they've ever been — three and a half billion dollars in certain cases for a single engineer. Are these justified pay packages, given the impact these engineers are having on companies' enterprise value, or are these bubble-like pay packages that we should be concerned by? >> I don't know. It is really hard to say. I know a number of people that have gotten really huge pay packets. I'm actually very happy for them. I think it's great that funding is going into paying AI people really well. >> I mean it nicely: with a hundred million for an engineer, I worry that you're just not going to be as productive. If I give you $100 million overnight, God, you might buy a nice house and go on holiday, and you lose a bit of efficiency. >> I don't know. I have a lot of Silicon Valley friends that, for whatever reason, have made a little bit of money. Many of them just keep working really, really hard, equally before and after they wound up making
a little bit of money. So I find that, in a lot of tech culture, we do stuff because it's fun, because it hopefully lets us help other people; it's a way to change the world. I find that wealth makes people become lazy much less than one might guess. >> I'm intrigued to see how you think about this. You talked about all the different ways AI could impact many different verticals, and you said we overhype doomsday scenarios and everything in between. Andrej Karpathy recently said AGI will just blend into 2% GDP growth, which I thought sounded a little bit unexciting, to be honest, Andrew. I wanted some seismic shift in productivity. Do you think a blend into 2% GDP growth is what you expect, or do you expect something much more significant, 5 or 6%, like Masa at SoftBank expects? >> I hope we can get much closer to 5 or 6 or more percent GDP growth. When looking to the future, it turns out one of the most expensive things in today's world is intelligence. This is
why it's so expensive, at least in the US, to hire a highly skilled doctor to advise us on a medical condition, or to hire a highly skilled tutor to patiently teach our kids: because that intelligence — training up that wise doctor, wise teacher, wise adviser — is very expensive. But with AI, we finally have a path to make intelligence cheap. So in the future, if everyone can be assisted by an army of smart, well-informed staff on all the topics under the sun that currently only the relatively wealthy in society can afford to hire people for, then individuals will be so much more empowered and able to get so much more done. Those highly empowered individuals' lives will be so different, and the GDP growth will be massive. >> Totally get that, and agree. Speaking about that democratization of knowledge and the benefits that come from it: you used a word before, open, about the open weight ecosystem that we've seen. We've seen this reversion back to a closed world in a lot of cases. How do you feel about the
reversion back to closed, and how do you analyze the state of play today in open versus closed? >> It's still very dynamic. For a lot of American companies, the leading frontier model is often kept closed, and then the model one tier down, not quite as good, is released as open. I think that's much better than nothing. I'm actually grateful for all the teams releasing open-source, open weight models. And then the other dynamic is that China especially has been taking
a lead, or at least getting up there, in terms of releasing tons of really good open weight models. I would say it is kind of not what I would have predicted a decade ago — that China's AI would end up being more open than America's AI. >> Why do you think China wants an open AI world? >> It turns out that openness is great for a country's development. When a team releases open source software, the circulation of knowledge is much faster than in a closed community. And so what I see is, when a team in China releases an open weight model, then yes, of course, Americans can take advantage of it. But the Chinese economy benefits even more from it, because once something is open, it's easier for teams to call each other and say, hey buddy, how does this really work? I'm having trouble with this part of the model. That circulation of knowledge is really valuable for innovation. And when the US has more closed models, and when teams are trying to pay
these hundred-million-dollar salaries to attract talent, then that circulation of knowledge becomes very slow, and it slows down the rate of American and European innovation. >> With the commoditization of the model layer, though, and the opening of it, it actually increases the premium on manufacturing and the ability to manufacture at scale, which China has a much greater ability to do than the US. Do you not think that actually drives a lot of their thinking around why they want to remove the strength of US models? >> In addition to increased innovation and
circulation of knowledge, which open weight models help with, I think that open weight models are a tremendous source of geopolitical influence. So, for example, if someday some kid in some developing nation asks a question about a politically sensitive topic, or asks, hey, where are the national borders in this case, or what is the history of this event or that event, the country of origin of the model they end up using will be delivering some answer. And whether the answer is skewed towards one nation's values
or another nation's values is actually a tremendous source of influence and soft power. Like it or not, open weight models are a key part of the AI supply chain, and China releasing free, low-cost models into that key part of the supply chain means it is really starting to build up a lead and a commanding user base, and that too will be a source of influence. This is why nations with a strong media and entertainment industry matter: it turns out South Korea has vastly disproportionate influence because of its leading entertainment industry. People listen to K-pop or whatever, and that buys the nation a lot of influence. Hollywood was a tremendous source of soft power for America: it paints a certain vision of the American dream and talks about the values of freedom and democracy. I think this is another frontier of communications and soft power. >> You have the most fascinating perspective, having obviously spent many years at Google and then at Baidu as well, and so having been on both sides of the table in certain respects. We
have this kind of strange binary polarization of the AI race, China versus the US. Do you agree with that positioning of China versus the US in an AI race? >> I think there's a lot of room for cooperation, and then also some places that will be competitive. So first, while people — sometimes even me — talk about the AI race, there's no single finish line. It's not one race. AI is a general purpose technology, and you could be better or worse at coding, better or worse at answering questions, better or worse at helping with marketing and finance and so on. So AI has many different capabilities, and there's no one finish line; these are capabilities I think we're going to keep on improving for a long time. I feel like, for PR purposes, AGI has been hyped up as if it were a finish line, but I don't think it's a finish line; we just have continually improving capabilities for decades to come. Having said that, nations with stronger AI capabilities are going to be more powerful. The citizens will be
more prosperous, economies will grow faster. So to the extent that different nations' incentives are not aligned, nations with more powerful AI capabilities will be able to do more. Just like if one country has a fantastic electricity grid and another country has power outages, well, one country can use the electricity grid to do more manufacturing, more industrial work — just do a lot more that way. >> Do you not think we still underestimate China's ability, though? I think we definitely do in Europe, but in the US, respectfully, I see a lot of US arrogance around your positioning. And then you go to China — and you've been to China and spent huge amounts of time there — and you realize the speed and the intensity with which they move. It's a different level to both Europe and the US. >> Yeah. To be fair, I think the US, Europe, and China all have problems as well. But having said that, the work ethic, the velocity — when China's government makes an all-nation commitment, an all-industry commitment, that's actually a very powerful force: state-level investments in semiconductors, in the education system, so K-12 kids are being trained to use AI; businesses also use AI and share knowledge; and then they build this stuff and also sell internationally, with the state apparatus there to help. And then there's control over rare earth elements. So I think that whole-of-economy, whole-of-country effort is actually a very powerful force that I wouldn't underestimate. >>
Given that we shouldn't underestimate it, do you think it's right that we have export controls on chips? Obviously Nvidia has had a lot of export controls back and forth. Do you think that's right or not? >> I think the export controls on chips have largely backfired. The way the US first put restrictions on Huawei, and then later on exports of Nvidia and AMD and other semiconductors, really incentivized China. Before the export controls, semiconductor development in China frankly wasn't moving that fast. It was a niche area; there was some investment. But when America did that, China really accelerated its semiconductor development. So America incentivized China to do this, and it is paying off for China. A number of Chinese companies are building offerings where the individual chips are less powerful, but maybe with a much larger number of chips they can build offerings competitive with certainly the last generation of Nvidia, and maybe increasingly the current generation. So I think, if
I were to analyze purely US national self-interest, I think that caused China to accelerate its semiconductor industry in a way that may not be helpful to the US long term. >> I sit in Europe; I obviously live in London, and you told me before this that you were born in London. My question to you is: it transparently feels like we are very far behind, and people say you've already lost. How do you feel about Europe's position in a very new world, and what can Europe do to regain some semblance of equality with the US and China? >> If I had one wish for the European regulators — I spoke with quite a few European regulators, and I was hearing things like, we want to be leaders in regulating AI, and that's a competitive advantage. With all due respect, that's not a competitive advantage. So my one wish for Europe is: stop regulating so much and just focus on investing and building. The thing is, it's so early in the days of AI. It's still early in the game, and Europe has plenty of smart people. Let people work hard; don't force them to not work hard. Let people that want to work hard work hard, and stop overregulating, and just go and invest and build stuff. >> Where do we most need to be investing where we are not investing enough? >> There's tons of capital going into data centers and infra. We can debate whether there's a bubble or not; we definitely need a lot of investment. Are we getting to the point where people are using such esoteric financial instruments to find cash for it that there'll be a bubble? We could debate that, right?
So we definitely need a lot of investment, but when does it become overinvestment? That's an interesting question. The other place I think we need to invest in a lot is not just the infra, data center, and foundation model layers, but the application layer, because, others having spent billions of dollars to train these AI models, we can now access them for hundreds or thousands of dollars, or sometimes tens of dollars. So it's wonderful to build tons of applications that just were not possible before. Now, from a VC investment perspective, I've heard from multiple VCs that, bizarrely, the cost of trying something out is so low that there are fewer ideas; it's not quite clear where to put massive amounts of capital to work at the application layer. In fact, if you look at a lot of the application layer investments, sometimes it feels like firms are putting in $100 million so that they can pay OpenAI or Anthropic, so that OpenAI or Anthropic can pay Nvidia, which is where all the money is ending up. Having said that, there are so many valuable
bets to be placed at the application layer, to just build stuff. But the dilemma is that you can do it in a very capital-efficient way. If someone wants to put $10 billion to work — yes, you can build $10 billion worth of data centers; we know how to spend that money. But how do you spend $10 billion building applications? The problem is almost that it only costs me a million dollars to try an idea. So how do I spend $10 billion? It's kind of a problem and also not a problem, but I think we should. >> Does it, though? Because when you look at the margins for AI application layer companies, they're terrible. They make no money. They cost a lot of money to build, because you have large engineering teams building them. They cost more, not less. >> I think it still varies. I'm seeing a lot of green shoots of software applications that were not that expensive to build, where LLM token usage is not the majority of the expense. >> If you
look at a Replit or a Lovable, 80% of their pass-through is to Anthropic. >> Yeah. So the dynamic I'm excited about is this: as LLM token costs continue to come down, we'll see how the economics change. Right now tokens are just expensive, but hopefully that will change, and the value created is really large. Actually, I remember an earlier era — the early days of food delivery, for example; I saw this in both the US and China. There was a lot of VC-subsidized eating, right? It was great; we could eat delivered food that was basically VC-subsidized. I think we're seeing that right now with a lot of VC-subsidized AI coding. The laws of physics, or the laws of finance, say that at some point this can't go on forever. But where it settles down, I think, is that there will be some very valuable businesses that are not perpetually VC-subsidized. Navigating this crazy VC-subsidy world to get to a good outcome takes a
lot of skill but having said that I still want to say the Lot of um smaller applications that are not yet doing these you know hundreds of millions of dollars maybe they're doing millions of dollars or tens of millions of dollars of revenue that haven't been quite expensive to build and to operate and that I think we'll um we'll see continue to grow. >> Speaking of kind of the smaller niches so to speak there that continue to grow. How do you think about the question of you mentioned earlier brilliantly that Articulation of kind of horizontal
and then the verticals beneath them, with Google and now OpenAI being the horizontal. How do you think about the question of a world of large monolithic models versus much smaller, much more efficient, much more specialized models? And has your mindset changed around which will be more dominant? >> I think it's clear it'll be all of the above. We will have large models and mid-size models and tiny models. And the reason I'm confident about that is that the nature of intelligence is diverse, right? Sometimes we do intellectually easy tasks. Yesterday my daughter misspelled the word butterfly, so I needed to tell her how to spell butterfly: that's an easy intellectual task. And sometimes I'm sitting down thinking for hours about some complex technical problem, and that's really hard. So intelligence spans a range, and the set of things we want AI to do has a huge range. If you want AI to do basic grammar checking and spell checking, you don't need a trillion-parameter model; a tiny model, maybe running locally, can do that. But if you want it to do complex reasoning to write a piece of code, then yes, a powerful model is going to do better. And so I'm actually very confident we'll end up with a huge range of models, small and large, to do the huge range of tasks, just as we have humans doing tasks across a range of difficulty. Same with AI. >> Does that mean that you disagree
with Andrej Karpathy when he said that useful agents, importantly useful agents, are a decade away? >> I disagree with that. I think we're seeing useful agentic workflows right now. At AI Fund, our team has built so many agentic workflows, for so many tasks where we just could not even do the task but for the agentic workflow. >> Can you give me an example? I'm fascinated. >> Over a year ago, actually after one of the Biden-Trump debates, for better or worse, we thought that tariff compliance might become an issue. Unfortunately, we turned out to be right. So last year, I think it was around August, we started exploring building technology to help with tariff compliance. And by the way, I don't know if you've seen these tariff compliance docs, but frankly, when I look at what it takes to file this paperwork, it makes me go, oh my god, what is this? You say, import a bicycle; then you look at the specs of the bicycle, how much it costs, the size of the wheels. There are all these rules and regulations just to import a bicycle. It makes me go, "Oh my god, are humans really doing this?" So we built agentic workflows to read the tariff compliance documents carefully, get the spec for what someone wants to import, carefully try to match, and make suggestions. This is now one of our portfolio companies, called Gaia Dynamics, which, because of the increased complexity in tariff compliance, has been doing pretty well. And so I find that we just could not have done this without agentic workflows. It's the same with medical assistance: we have different startups, AI Fund portfolio companies, a medical assistant operating in India, an AI assistant, Katus, helping process legal documents. Many of these workflows we just could not do otherwise. So I find that there are useful AI agentic workflows already today, and at large businesses too, not just our startups. When we look at the hyperscalers, and I chat to friends there, the large businesses have a bunch of internal workflows that, you know, they just could not be doing without these AI agents. >> When we think about the core of a business, it's margins, and most of these businesses don't have margins. Do you care about margins when investing today? Or, with absolute respect, and it sounds disrespectful, do you take the kind of utopian view that it will just correct itself with time and with efficiency gains? >> At some point, the laws of physics, I think, or the laws of finance: margins do matter. But one of the tricky things
about AI is that we know the technology is going to change, so we don't build assuming the technology will be stagnant; we build assuming the technology will evolve. One obvious example: token prices have been rapidly falling, depending on who you believe, 80% a year or whatever. Frankly, when we build prototypes, we routinely just don't worry about token costs, because the first, most important thing is to build a product that users love. And then what we find, and this has actually happened to me a few times now, is we'll build something and not worry about the cost, and then users start to use it, and our API bill starts climbing, and you look at it every few weeks and go, wow, this is getting really expensive. This is costing me the salary of one engineer; it costs more than two engineers; it costs more than a whole bunch of engineers. But fortunately, when that has happened, almost every time so far we've been able to use techniques to bend the cost curve back down even faster than the rate at which token prices are falling in the market. And so I find that absolute margins are important, but when you have a view of where the technology is going, it lets you build not for the margins today but for what you can forecast them being in the future. I think that's an important distinction. But we don't take a blind utopian, you know, AGI-blah-blah-blah view either. I think that's also overly simplistic. >> How do you think about defensibility in an AI world? A lot of people suggest
that the time to copy is reduced significantly, that defensibility itself is questioned in AI. Do you agree with that questioning of defensibility today, or not? >> Moats are changing. I find that moats tend to be a function of the industry rather than a function of the technology. So AI as a technology doesn't really offer an answer on moats for most businesses. If you're building AI for drones or legal or whatever, the moat is more a function of that industry. But one thing that is changing with regard to moats is that software used to be a moat, right? If you had invested 10 years to build a piece of software, it was really hard to replicate. That one moat is much weaker than before. But other moats remain: are you using AI to accelerate building a two-sided marketplace, which can be very defensible? Are you building for consumer or for enterprise? Are there brand and reputational effects that can help you build defensibility? So I find that the software moat has changed, but other moats tend to be analyzed based on the industry. >> Okay. So software moats have changed, and we now have margins that matter, but with a little more elasticity there. The software moat has changed in terms of the ability to stay relevant. For large enterprises, what are the single biggest barriers preventing them from implementing AI aggressively and preventing their own extinction? >> I think the biggest barrier in most large enterprises is actually people and change management. >> Not data? >> It's not data. I think
it's definitely not data. Not that data isn't important, but it's definitely not the bottleneck. The interesting thing about AI hype is that there's almost always a gem of truth in it; it's just been hyped up, you know, ten times more than the reality. Maybe let me give one example, and then I'll come back to data. There's been this buzz about how, with AI, we'll have unicorns with one employee. It's a thing, and it's fine: if you want to build a unicorn startup, a billion-dollar company, with one employee, go for it. It's a good thing to do. But frankly, if you're at a billion-dollar valuation, you could afford to pay two employees, or even ten. So why do you need to hype it all the way up to say, let's do this with just one employee? So it is true that team sizes are shrinking and teams get more done while smaller. That is true; the hype is then saying, let's build a unicorn with one employee. I find a lot of AI hype so hard to disentangle because there's a gem of truth in it; it's just been hyped up a lot more. Same on data. Data is important, but it turns out that data is very verticalized, and you don't need as much of it to get started as you think. For example, Landing AI does a lot of work with financial institutions and healthcare. A lot of financial institutions have plenty of transaction data. Take the PDF file, turn it into LLM-ready markdown text, go process that, find value in that. For example, we could take SEC filings, large complex financial tables, very accurately turn those financial tables into Excel spreadsheets, then have your analysts or your AI analyze them and draw conclusions. So often, with a bit of scrappiness, looking at your internal data and your public data, you can get something going. And it turns out that a lot of internet data is kind of general-purpose data. Most of the world's data is actually private, right? And a lot of businesses have very valuable transaction data: sales data, product data, manufacturing data, logistics data. A scrappy team that knows how to use all that data can actually start to build something and get value out of it. Not to say more data wouldn't be even better, but you're not stuck, even for the first few steps, for lack of data. >> Andrew, I speak to many CEOs of these sized businesses and they say, "Harry, are you kidding me? You think we can get security and permissioning
for our data in our enterprise? No. We don't have Slack. We don't have Notion. Everything is custom-built." You're seeing the likes of JP Morgan and Goldman Sachs absolutely refuse any ChatGPT use and build internal systems instead. Is that the world that we inhabit for enterprise AI adoption? >> I think we'll get there. I find that a lot of enterprises are adopting LLMs, you know, ChatGPT and many others. I think today there are still businesses that are on-prem rather than on the cloud. But we're making progress, and it'll take time. Actually, one
thing about AI hype, the claim that we'll have AGI in two years or whatever: I think that's just ridiculous. For most reasonable definitions of AGI, that's just not going to happen. Look at how long we are now into the cloud era, and we still have an awful lot of on-prem jobs. I think AI adoption will be wonderful, and there will be tremendous GDP growth, but it's also going to take much longer than the hype says it will. I actually think that a decade from now we will still be working to identify valuable applications in enterprises and building them. Having said that, we will make a lot of progress over the next one or two years, but we're not going to be done, you know, even 10 years from now. >> What else does everyone think they know about AI and its adoption and implementation that they get wrong? >> Even earlier this year, we saw some senior business leaders advise people not to learn to code, on the grounds that AI will automate it. We'll look back on that as some of the worst career advice ever given.
As coding becomes easier with AI assisting us, a lot more people should learn to code, not fewer. And I'm already seeing it; I mentioned the example just now of building an app for feedback swiping. I think, for a lot of job functions, people who know how to tell a computer exactly what they want it to do, so the computer can do it for them, will just be more powerful. And for the foreseeable future, the language for precisely telling computers what you want them to do is coding. It doesn't mean you should write code by hand.
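That idea of precisely telling the computer what you want, while letting AI write the body, can be sketched with a small, hypothetical example. The function and test cases below are illustrative, not something discussed in the conversation: the human's contribution is the exact specification, here a docstring plus example assertions, and the implementation is the part you might hand to an AI coding assistant.

```python
def dedupe_keep_order(items):
    """Remove duplicate items from a list, keeping first-seen order.

    This precise spec (plus the assertions below) is the part a person
    still has to get right; the body is what an AI assistant might write.
    """
    seen = set()
    out = []
    for x in items:
        if x not in seen:        # keep only the first occurrence
            seen.add(x)
            out.append(x)
    return out

# The spec expressed as examples, the way you might dictate it to an assistant:
assert dedupe_keep_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_keep_order([]) == []
print("spec satisfied")  # → spec satisfied
```

The assertions are the spec in executable form: whoever can state behavior that precisely gets the most leverage out of an AI coding tool, which is the sense in which "learn to code" still matters.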
Writing code by hand is becoming obsolete, right? Really, don't do that. But get AI to write code for you, and people who can do that will be more effective, more powerful, and have more fun. >> If we're that early, where in a decade's time we're still going to be looking for and identifying areas where it can improve meaningfully, do we have enough money to fund both the energy and the compute requirements for that 10-year period? Sam Altman has said he needs a trillion dollars. He needs the energy of Japan. If we're 10 years out and we still have not seen that much improvement, do we have the money to fund it? >> I think we'll see plenty of improvement over the next two years, but I think we still won't be done getting even more improvements 10 years from now. One place that's super promising is AI-assisted coding. We're seeing real productivity gains, real returns. It's really changing the way software is written. It's been fantastic, frankly; for so many of my friends, coding is so much more fun with AI to help us out. So we are seeing returns, just to be clear, but we still won't be done growing this, you know, 10 years from now. >> But if you look at the TAM: the secret to success in AI investing is, will we see a transition from human labor budgets to software budgets? If we have that, then, holy grail, me and you will make a lot of money with our funds, and it's fantastic news, because the TAMs have massively increased, the spend has massively increased. But if we're not actually going to lose any people, then actually we
don't see that transition from human labor budgets to software budgets. Do you think we won't see that transition? >> So to me the question is: is AI mostly for cost savings, or is it for growth? And I know that it's difficult to change workflows, so a lot of companies tend to think cost savings. But here's the problem; there's actually one pattern I see. Let's say I have a workflow that has five steps, and let's say each step takes 20% of my effort. Maybe I'm underwriting loan approvals: do I approve this loan or not? So let's say, for simplicity, five steps, each taking 20% of my effort. If you can automate one of those steps, that's a 20% cost savings, which is really nice. It could be great if you're a low-margin business, but it doesn't feel like a game changer. So what I find is that the more valuable uses of AI often require rethinking that workflow. And the pattern I see is, instead of taking a 20% cost savings, which you could do, and that's fine, nothing wrong with that, the two patterns for then getting growth are either do more or do it faster. So in the case of underwriting, making loans: instead of saving 20% of my human labor, if I can rework the workflow to turn around my decision-making time, so that instead of someone needing to wait, you know, two weeks before a loan officer looks at it, we can just give you an initial answer in 10 minutes, that changes the product and lets you drive growth. So that's the faster pattern. And then there's also the "more" pattern. Another example: there are a lot of businesses that could do high-touch, say, customer service only for expensive high-end clients. But if you can now serve a much larger group of people; or take financial advice: instead of giving high-touch financial advice to a small group of people, if you can now deliver that quality of service to a lot more people, that again changes the product and lets you drive growth. So instead of cost savings, if AI lets you do something way faster, or lets you take a task and do it a thousand times more, serving a lot more people instead of a small number because it's now economic to do so: these are the two patterns I've seen that drive value increases, and I think that will be important for unlocking lots of this GDP growth. >> You said "economic to do so." Do you think it's crucial that we see vertical ownership? We see Nvidia owning models as well as the chip layer; we're seeing Facebook build out
data centers more than anyone; we're seeing everyone build out data centers. Is it important that companies own every layer of the stack, or will we see individual participants own horizontal layers of the stack? >> I think it evolves over time. Let me make an analogy. In the early days of, say, the computing industry, it was the vertical players that won, because if you want to connect a keyboard to your computer motherboard with its CPU, is it okay if your keyboard runs at, you know, plus or minus 5 volts and your CPU at some other voltage? We didn't know where the API boundaries were. Or if your CPU has your memory laid out a certain way, it and your math accelerator needed to interoperate with each other. Before we wound up with a clear conception of where to draw the lines and where the API boundaries should sit, the integrated players, IBM back in the day, could solve all the problems and build valuable, working products. But as the industry matured, we started to have standards. For example, now we have the USB standard; before, there were other standards. So now you make a computer, someone else makes a keyboard, we plug them together, and it all works. So when an industry is immature, where to draw those boundaries, so that different participants can do their part and have it still interoperate, is less clear. But as the industry matures and there are more standards (for example, if I want to publish a compressed LLM model on the internet, what's the file format for that?), then that makes it easier for individual players to do something and still have it fit into the broader ecosystem. >> So do you think, then, that Zuck and Sam are right to be spending as much as they are on data centers? Or should they be patient and wait for the maturation of the industry, where they can then be horizontal? >> I think clearly OpenAI's investments have paid off to date. It is possible to overinvest at some point, but I don't know if that is the
point. And then I think the financial instruments being used by many players to shift risk around have been really interesting. I find that overly complex use of financial instruments to shift risk sometimes increases the risk of there being a bubble at some point, so that's something to watch out for. >> Do you worry about the circular deals? >> It's something to keep an eye on. I'm not alarmed by them, but, you know, things could be more frothy or less frothy. Things could be more of a bubble or less of a bubble, and these are signs of things feeling a little bit more bubble-ish. >> When does a sign turn into a big concern for you? >> You mentioned the Sequoia article on the $600 billion problem of AI. I am concerned about that, but it's interesting: my concern is different for different layers of the stack. What I'm seeing is that, for the application layer, there is very clear ROI. I think it's fantastic: someone else trained these models, and you can build applications for, you know, $100,000 or a million dollars and generate ROI. And then I think it is calibrating the right level of infrastructure investment that is tricky. But having said that, it is at the same time very clear that we do need more electricity, more data centers, and more semiconductors. That too is very clear. So we should be investing a lot, and I'm glad we are, but what exactly is the right amount to invest? I think that's the tricky question. It should be a lot, though.
>> Do you get annoyed by the bubble discussion? >> I don't get annoyed by the bubble discussion. I do get annoyed by the hype, you know, when regulators are calling me up and saying, hey, we heard AI could lead to human extinction. Thankfully there's much less of that now than a couple of years ago. Instead, the conversation should be: how can we upskill the workforce, where can we invest? Not: how do we slow this thing down? I think the hype has really distorted public perception of AI. And one downside to the hype is that without public support for AI, things slow down. For example, one of my friends works a lot with high school students, and he told me that he was talking to a high school girl about maybe pursuing a career in AI, and she said, "You know what, I heard AI could have something to do with human extinction. I don't want to have anything to do with that." And so this hype turned a high school girl away from working on AI at a time when it would be so promising for her to leap into AI. I think this really causes people to make weird decisions, both at the individual student level and at the community level, where, when a community shuts down building out a data center that could be good for the community and good for the world, I think that's also unfortunate. >> I'd love to move to a quick-fire round where I say a short statement, but kind of
staying on that thread, because the first question is: what's your biggest advice to educational institutions to make sure they equip students for a generation of AI? >> Embrace it. Update curricula. Teach them as much AI as possible. Students are going to live in a world where they will be using AI and having it help them; we've got to teach students to do that. I think it'll be different for different fields, but one thing that is clear is: get all your students to learn to code. >> What's one thing you've changed your mind about in AI in the last 12 to 18 months? >> I think my favorite tools keep changing. If you had asked me every three months over the last year what my favorite coding tool is, my answer would have kept changing. >> Do you think Anthropic will beat OpenAI in the coding wars? >> Really hard to say. OpenAI has a very strong consumer brand, and that's very defensible. In contrast, developers are more likely to switch coding tools on a dime. So I love Claude Code, I think it's fantastic, but I find myself using OpenAI Codex much more over the last month. I think OpenAI Codex has actually gained real momentum. And then I'm also keeping an eye on Gemini CLI, which I think is also getting better, maybe at a faster rate than people have given it credit for. So in the coding dev-tools and API-tools market, the moat is weaker than having a strong consumer brand. I think that's something that companies have to sort out. >> Tell me, what was your biggest takeaway from Baidu? It's such a different company to anything that
we're used to in the West. What was your biggest takeaway? >> I really appreciated the speed and intensity of Baidu, and also of the China ecosystem. I think it's really unfortunate that in some parts of the United States, advising someone to work hard is viewed as politically incorrect or something. >> In Europe, I'm chastised for it. >> Oh, okay. All right, great. Hopefully the European viewers won't hate me, or hate us both, for that. Frankly, I wish people could work four hours a week and be wildly successful, but the practical reality is that when people work hard, they get more done. Now, I want to acknowledge that not everyone at every point in their life is in a position to work hard. The week after my kids were born, I didn't work that hard; I took time off to spend time with the kids, for more than a week. And I think we need to respect people in all walks of life, including people who, for whatever reason, are not in a position to work hard at that moment. But if someone wants to work hard, to, as Steve Jobs put it, make a dent in the universe, let's empower them and celebrate that. If someone, in whatever situation, can't work hard, let's also respect that, and maybe celebrate that too. But I think this is a moment in time when there's so much stuff we could build. People who work hard to learn a lot and build things will accomplish a lot. >> Did you do 996? >> The term 996 wasn't an explicit term that I used. I find that these days I work,
you know, I just really love what I do. It really doesn't feel like work, but on a lot of my weekends I'm sitting in a coffee shop coding away, because it's the most fun thing I could do on a Saturday. So I actually don't bother to keep track of my hours. It's probably a lot. >> What's the hardest element of the transition from operator to investor? >> Oh, one thing about AI Fund: yes, we call ourselves a fund, but frankly, in the way we run the fund day-to-day, we act much more like operators than investors. AI Fund is a venture studio, and I believe our skill set is actually in building, not just in, you know, capital allocation or whatever. So we work really hard to screen ideas. We talk to customers; I'm sometimes on customer calls myself. And then we bring in founders to work alongside us. We're reviewing the product, giving feedback on the product, arguing about pricing. So my day-to-day life is much more operator, and yes, eventually we have to do the financial diligence, and I write a check and do follow-ons. We do all that, but a lot more. >> I'm really sorry, Andrew, but then are you a fund or are you an incubator? >> So we call ourselves a venture studio, or a venture builder. Incubators usually bring in founders that already have an idea. We go earlier than that. We often work with our investors and partners to come up with an idea, and only after we have an idea do we go and try to find the best founder to co-build, to co-found
the company with us. So we don't call ourselves an incubator. >> How much ownership do you have, then, when you make those original investments and seed the company? >> It depends. We end up with some common stock for the sweat equity of building the company, and then our first check in is usually at, like, a million dollars at a $4 million cap, so around 20% ownership, on a SAFE. >> And so you're basically getting 20 to 25% ownership on entry, with a couple of points of common. >> Yeah, plus some common for the sweat equity. >> Totally get you. What do you think is the biggest... >> But to me, the reason we do this is that, while there are VCs that do the competitive deal-flow thing, and they make a lot of money that way, I think my team's biggest contribution is not, you know, fighting over hot deals. It is finding ideas and creating companies that would not exist but for the fact that we and a founder got together to co-found them. So I think we just create
more value in the world by creating new companies, rather than only discovering hot companies to try to, you know, put money into. >> What concerns you most today, Andrew? I love your optimism and your open-mindedness. What concerns you on the flip side? >> The difficulty of bringing everyone along with us. In previous waves of economic disruption, like when our nations went from mainly agriculture to non-agriculture, someone who was a farmer could keep farming until they retired, but their kids had to learn a different trade, maybe move to a city, or whatever. The change is so fast this time around that we need people who are alive today to learn new skills, as opposed to needing their kids to learn new skills. And that's actually very challenging. Historically, I don't think we've ever been good at that. >> You do a lot of interviews, Andrew. You speak to many journalists. I'm not a journalist; I never actually had a job. Do you find the quality of the interviewers who ask you questions good? >> I think media has an important role to play in curating and disseminating knowledge. I think the quality of questions that reporters
are asking has been very clearly trending up over time, but there is still the hype element that keeps on distorting the information ecosystem. Unfortunately, there are financial incentives, and regulatory-capture or legislative-benefit types of incentives, behind certain types of hype. And that's actually one pattern that I've seen. I won't name any companies, but I find that the companies with something to lose are the ones whose statements over time have become more moderated. So, you know, I find that once you're an established company, you just say more sensible things. But there are some companies I think are at greater existential risk, and I find some of those companies, which I don't want to name, to be the worst sources of hype, because they've got less to lose. They'll just say a bunch of random stuff; in many respects, it's a lashing out in desperation. >> I think when you look at a Demis, say, obviously a brilliant leader, or you look at a Sam even, or a Dario, all of them, I think, have moderated their positions significantly with the maturing of their companies. >> Yeah. No comment on individuals, but I think that when you have something to lose, you say more sensible things, but when your company faces greater existential risk, sometimes people say weird things for fundraising. >> I like to finish on a tone of optimism. What single thing are you most excited for when you look forward to the next decade? For me, for example, my mother's got MS. I think we'll have incredible medical discoveries in some diseases where we haven't really made much advancement
for years. That excites me. What excites you? >> Yeah, I'm sorry to hear that about your mother. What excites me: I want to empower everyone to build with AI. I think the distance between having an idea and building it is now much shorter, and we need not just software engineers to be creating. So in the future, I hope that a lot of people, instead of saying, "Is there an app for that?", will say, "I built an app for that," and instead of just being software users, they'll be software creators. And when we get there, people all around the world will be much more empowered, will get more done, and will have more fun. >> Andrew, this has been such a joy to do. Thank you so much for putting up with my prying and my pressing. You've been amazing, and I really appreciate the time. >> No, I really enjoy your show, so it's a privilege to be here, Harry.