One of the most important skills for the future is the ability to tell a computer exactly what you want so it can do it for you. And [clears throat] for the foreseeable future, people who know the language of computers, who are able to understand coding, will be able to do that much more effectively than people who don't. So I know that earlier this year there were some leaders advising others not to learn to code on the grounds that AI will automate it.
I think we'll look back on that as some of the worst career advice ever given. >> Hi there listeners. Today we are sharing another conversation from this year's Masters of Scale Summit.
[music] Andrew Ng is a true AI pioneer. He's co-founder of Coursera and DeepLearning.AI and managing partner at AI Fund, [music] a studio that incubates new AI companies. He was joined on stage by DJ Patil, former chief data scientist under the Obama administration and general partner at Great Point Ventures.
It's a dynamic discussion about the current state of AI, how future generations should approach the technology, and how America can stay competitive in the global AI race. I can't wait to share it with you. Let's jump in.
[music] I've known Andrew for a long time. He's been talking about and working on AI long before it was cool. I remember actually sitting down with Andrew when we were coming up with these ideas around data science, and he was talking about AI like, "Hey, we're still in the middle of winter for AI." But he's been on this incredible journey. To just give you some of the highlights of Andrew: he was one of the first people to advocate using GPUs for deep learning. He wrote... >> He should have bought Nvidia stock.
>> I was going to say. How many jets does Jensen owe you? >> I don't know. I mean, it's like, what size, right?
>> Happy for the guys. >> Exactly. Uh did he ever give you anything?
>> I think he gave me a few GPUs. That was nice. >> A few GPUs.
You got some [laughter] GPUs out of it. All right. You wrote the first major online course on machine learning and AI, which then led to Coursera.
At the time it was radical thinking to actually teach an online course, and it has helped over 10 million students. Is that correct? 10 million.
>> Yeah. Yeah. Thank you.
Yeah. >> Incredible accomplishment. Along with Jeff Dean, Greg Curado and Rajat Manga, you started uh the Google uh deep mind not the deep >> Google brain.
>> Google brain. Thank you. The Google brain project.
>> Much, much more. But some of the things that you're doing right now: you have a fund and a studio investing in and building with AI, and you also have some of the most seminal, most-cited papers in AI today. >> I have to admit I probably do have highly cited papers. I don't track my citation count as often as I used to.
>> Exactly. You leave that for everybody else, or the AI systems. But here's something that you may not know.
How many people have heard about agentic systems these days? Yeah, maybe I should ask it the other way around just to highlight people. Guess where agentic came from?
This is the guy. It's a little known fact that Andrew was actually the guy who really came up with agentic. Actually, let's start with that.
What's the story behind agentic? >> So almost two years ago I saw this rising trend in AI that a lot of people were excited about, but within the tech community there was all this debate where some people would write software and say it's an agent, and others would say no, that's not an agent. It's an agent, it's not an agent.
I thought this was a waste of time. Instead of having a binary agent-versus-not-agent debate, why don't we just call it all agentic and then stop arguing and get on with the work? And so I actually kind of ran a campaign that I didn't publicize, but I did it anyway, to try to get more people to just adopt the word agentic.
What I didn't realize was that a few months later a bunch of marketers would get a hold of this word and slap it as a sticker on everything in sight, and that helped the movement take off. But even though the hype has gone up like that, I think the real value is growing rapidly too. So that's been exciting.
>> Well, let's stay on that. I want to do this in four stages. Let's first talk about today and the state of AI through the lens of one of the OGs.
What is your honest take on what AI can and can't do, especially through this lens of agentic? Because I think we're all struggling with all the marketing and buzz out there about what's real and what's not. >> A lot of the work that lies ahead is to take these amazing agentic AI capabilities and map them to real business workflows.
I think we've been doing that. >> What is agentic? Let's ground us there. >> So a lot of us use AI, large language models, by prompting it and asking it to write an output.
That's a bit like going to a human, or in this case an AI, and saying please write an essay by just typing it out from the first word to the last word all in one go, without stopping to think, without ever using backspace. Humans don't do our best writing like that, and neither does AI. With an agentic workflow, the idea is we can ask an AI to take a more iterative approach and say: first write an outline, then do some web research, then write a first draft and critique it.
And so the iterative workflow takes much longer, but for a lot of tasks, from medical advice, legal advice, and tariff compliance to writing code, these agentic workflows work much better. But there's still a lot of work ahead of us. I know that some people say, "Oh, don't worry about it.
Wait for AGI. That'll solve all the problems." I'm not a fan of this
"let's wait for AGI." You know, that feels hypey to me. A lot of the work that's very valuable to do today is to take the technology, and what may be possible in the next six to 12 months, and just go do valuable stuff with it.
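The iterative workflow Andrew describes (outline, then research, then draft, then critique and revise) can be sketched in a few lines of Python. This is a hypothetical illustration only: `call_model` is a stub standing in for a real LLM API call, and the prompts and responses are made up for the example.

```python
# A minimal sketch of an agentic writing workflow: instead of one prompt
# producing the whole essay, the model is called in stages and its own
# critique feeds back into a revision. `call_model` is a placeholder here;
# a real system would send each prompt to an LLM service.

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned response per stage."""
    if prompt.startswith("Outline"):
        return "1. Intro 2. Body 3. Conclusion"
    if prompt.startswith("Draft"):
        return "Draft essay following the outline."
    if prompt.startswith("Critique"):
        return "The intro is weak; strengthen the opening."
    return "Revised essay with a stronger opening."

def agentic_write(topic: str) -> str:
    # Step 1: outline first, rather than writing start-to-finish in one go.
    outline = call_model(f"Outline an essay on: {topic}")
    # Step 2: draft from the outline.
    draft = call_model(f"Draft an essay from this outline: {outline}")
    # Step 3: have the model critique its own draft.
    critique = call_model(f"Critique this draft: {draft}")
    # Step 4: revise the draft using the critique.
    return call_model(
        f"Revise the draft to address the critique.\n"
        f"Draft: {draft}\nCritique: {critique}"
    )

print(agentic_write("AI and jobs"))
```

Each stage is a separate model call whose output becomes input to the next, which is what makes the loop "iterative" rather than a single-pass generation.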
Sometimes the analogy I think of is, I'm using these systems because I'm trying to deploy AI to really help senior citizens in their healthcare journey, different things. And sometimes I think: am I on thick ice? Am I on thin ice with what the systems can and can't do?
I suspect a lot of people out there are wondering, does it work for this problem or is it fragile? Sometimes we just get to an 80% solution. Other times it knocks it out of the park.
Sometimes we're incredibly disappointed. How do you, as Andrew, who's building companies and advising so many people, think about this? >> Yeah, it is tough.
I think the closer a task is to only text processing, and if you have the plumbing to get all the information people need to do the task into text, the easier it is for AI to do it. If you need to feed in images or voice conversations, it's not impossible, but it gets harder. And then one question I often ask is: humans know a lot of stuff.
We just have a lot of context, and so do we have the data plumbing to get the AI system similar context to what a human would need to do that task? And then for a lot of multi-step processes, if you can write a standard operating procedure, an SOP, that may be a sign it's worth seeing if you can codify the SOP in a multi-step agentic workflow. So it's hard to determine what can and cannot be done, but I think these are maybe some suggestions for gauging what's more or less likely to succeed.
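As a hedged sketch of what codifying an SOP into a multi-step agentic workflow might look like: each SOP step becomes one model call, and each call sees the outputs of the steps before it. The step list, the ticket example, and `run_step` are all illustrative assumptions, with `run_step` a placeholder where a real system would call a language model.

```python
# Illustrative only: a standard operating procedure written as an ordered
# list of steps, executed one model call at a time. `run_step` is a stub
# standing in for a real LLM call.

SOP = [
    "Extract the customer and order details from the ticket",
    "Classify the issue against the returns policy",
    "Draft a response for human review",
]

def run_step(instruction: str, context: str) -> str:
    # Placeholder: a real implementation would send instruction + context
    # to a language model and return its answer.
    return f"[done] {instruction}"

def run_sop(ticket: str) -> list:
    context = ticket
    results = []
    for step in SOP:
        out = run_step(step, context)
        results.append(out)
        # Later steps see earlier outputs, like a handoff in a human SOP.
        context += "\n" + out
    return results

for line in run_sop("Order #123 arrived damaged"):
    print(line)
```

The point of the structure is the one Andrew makes: if a process can be written down as ordered steps at all, each step is a small, checkable unit that an AI system can attempt, rather than one opaque end-to-end task.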
Well, let's switch gears and go to education, because you've really changed and transformed academia through Coursera. Your current online courses, I can't even keep track of how many views and how many people are taking your courses. I'm sure every parent is asking you what their kid should do to be prepared for AI.
Should kids still learn to code? Is CS a thing? Is data science a thing?
Or is that a bad idea? >> Yeah. So one of the most important skills for the future is the ability to tell a computer exactly what you want so it can do it for you.
And for the foreseeable future, people who know the language of computers, who are able to understand coding, will be able to do that much more effectively than people who don't. So I know that earlier this year there were some leaders advising others not to learn to code on the grounds that AI will automate it. I think we'll look back on that as some of the worst career advice ever given.
I'm already seeing on my teams, on a lot of Silicon Valley teams, not just the software engineers, but the marketers, HR professionals, the analysts, the finance professionals: the ones that know how to code are starting to run circles around the ones that don't. So if your kid intends to be a software engineer, have them learn to code with AI. And even if they don't, it is becoming clear that in the future we need a lot more not just users of software, but creators of software.
And so rather than your kid growing up and asking "is there an app for that?", I want them to say "I built an app for that." And with AI-assisted coding, it's much easier than it used to be. So don't code by hand; get AI to do it for you, and people that do that will be more powerful and more effective than people that don't. That's one change the system has to go through as well.
There's this new skill, just like today: I can't imagine going through college without learning how to do web search, right? That's kind of weird. It limits your job prospects.
In the future, I think if you go through college and come out not knowing how to create software, we'll go, "Oh, that's kind of weird. It will limit the prospects of what someone can do." >> When we think about when a kid should get access to this technology, what does that look like?
And specifically through the lens of some of the lessons I think we've really started to struggle with around social media. Former Surgeon General Vivek Murthy really highlighted the challenges that have been happening around that. When is too early, in your view? When is the appropriate time for someone to really start having access to AI, to make sure they're truly AI native and get the maximum benefits of this technology? >> I think it's difficult. I feel like, you know, when do kids get access to books? We think really, really young, but there are also some books that are clearly inappropriate for a 2-year-old. And I think one of the challenges with technology is that there are apps that are just fine for a very young child to use, but there's also a lot of stuff that we would not let a young child use. So my kids are four and six. They do use tablets occasionally, but I'm there with them, right, when they're using it.
And I'm not using it as a babysitter, but, you know, having them do educational things or weird things, and we talk about it. So I think the medium has changed from other things to tech, but I think the challenge is: what are the business incentives for companies to create or not create certain experiences for kids? And as parents, how can we have guardrails or curate things for them? Just like I don't let my kids read certain books that are highly inappropriate for their age, I don't let them do certain things that are highly inappropriate for their age. But this is a challenge, given the incentives of certain types of companies to do things that we as parents may not want them to. >> Do you ever talk to some of those companies or groups and say, "Hey, knock it off.
That's not going to be helpful."? Or what's that conversation like? Because a lot of these are people who've taken your classes and are actually doing some of the behaviors that, as parents, I think we find really problematic.
>> You know, 99% of engineers and business people in Silicon Valley want to do the right thing, right? The people doing these things, frankly, they're our friends, maybe some people in this room right now. I think everyone kind of wants to do the right thing, and I wish we could find a way, when there are billions of dollars at stake, to still always do the right thing. It is a real problem, and we do see a small number of people that will sometimes do not quite the right thing when the financial incentives, or some incentives, are big enough. I wish I knew how to solve the problem of human incentives, I guess. >> Great. Well, let's switch to something easier here. US policy. You were early in advocating, as we mentioned, the use of GPUs, but you were also one of the first people I saw really working with international technology companies.
And so, given the place where you sit, where you get to see across the landscape, we've seen this whipsawing on GPUs from federal policy. We've seen executive orders talking about what they would describe, in their words, as woke systems. We've also seen executive orders and policies trying to accelerate the adoption of AI.
And so if you had five minutes with the president, what advice would you give them to both (a) responsibly unleash the power of AI to benefit all Americans and (b) ensure national competitiveness? What would that be? >> Yeah, I'm really worried about US national competitiveness in AI.
There are some things that the current administration has done well. I think the previous administration had some AI-safety types of thinking that was really safety theater, driven by lobbyists fear-mongering to try to create regulatory capture, anti-open-source regulations, right? If you don't want to compete with open source, make up a bunch of stuff about the dangers of AI to try to get stifling licensing or whatever passed. The current administration seems to have very low patience with that. That's good. Things that worry me: both of us are immigrants. >> A lot of our students are immigrants.
>> I really worry about American competitiveness if we make it harder for high-skilled immigration. But frankly, not just high-skilled immigration. When I came to the United States as an undergrad student, I think I was like 17 years old.
I was frankly pretty clueless. I don't think I was high-skilled at all when the US let me in. So letting students come to the US to then grow up and hopefully become higher-skilled matters.
I really worry about that. I think the defunding of science, the decreased investment in science, AI, and other things universities do. There are issues, right? Let's be candid, there are things we could fix in universities, but diminishing the ability of this country to execute on science, I really worry about that. And then in terms of national policy, I also worry about our reliance on TSMC, Taiwan's chip manufacturer. It's been interesting to see China recently ban certain imports of Nvidia chips, which is a strong signal that China is moving toward independence from TSMC and Taiwan, at a moment when the US remains heavily reliant on Taiwan manufacturing.
One of the implications of this is that if anything were to happen in Taiwan, either a natural disaster or a man-made event, a disruption to the Taiwan semiconductor ecosystem could end up hurting the US much more than it hurts China, if China becomes more independent of Taiwan manufacturing than the US is. So I think AI semiconductors are one bottleneck. And the other big bottleneck, which you read about in the news, is totally true.
It is energy. But when you build a data center, that's a machine to turn electricity into intelligence, to turn electricity into LLM tokens. And so the constraints: I have so many friends hung up in permitting.
Can we build a power plant here? And then you think you have the permits, but then there are local objections or whatever, which are maybe valid. But I think energy capacity is the other bottleneck that I really worry about. Mhm.
>> When you think about China specifically, let's just take China and Europe, two radically different approaches in how they think about AI regulation. And then you watch the US trying to navigate this for competitiveness. One of the things I'm curious about, from your lens, is this idea of open-source models, and whether people build off of the main trunk of AI, which is US-based primarily right now, though increasingly we see new models out of China and a little bit out of Europe.
What's the right strategy here? What's the strategy that you believe is best and optimal for the world for AI? Is it kind of like a central trunk of AI with branches, or is it many different trees in the forest, a federation of different models, approaches, techniques?
>> I think we need multiple branches, because otherwise... to make an analogy with mobile phones: one of the reasons the mobile ecosystem is kind of uninteresting is that there are two gatekeepers, Android and iOS, and unless they let you do certain things, you're just not allowed to experiment. So I hope AI will not end up with a small number of gatekeepers that can limit innovation. What has happened is that over the past year or two, China has really pulled ahead of the US in releasing open-weight models.
These are models anyone in the world can download and use for free. And I think I saw a stat showing that cumulative adoption of Chinese open-weight models is about to surpass, or may already have surpassed, cumulative adoption of US open-weight models. The US closed models are still better, but open-weight models are a key part of the AI supply chain, and people are using them.
I think there's a problem that we aren't investing enough as a nation in that. And then you asked about Europe. Honestly, I love Europe, but I wish Europe would wake up and get going faster. For a while over the last few years, visiting European regulators, I heard things like "we want to be leaders in regulating AI." [laughter] You know, I don't think that's how you gain competitive advantage. >> More brakes, more brakes, less gas, win the race. I want to turn to the future in these remaining couple of minutes and really talk about the future of AI, but I want to talk about it through the cutting-edge next generation of students that you see, the entrepreneurs. What are the problems that you see them gravitating to?
What are the things, the hopes, the dreams? And specifically, what does that tell you about how the next 24 months look? >> We're in Silicon Valley, where most of us love AI.
I love AI. I love what I do. I think it makes the world better.
I think many of us may underestimate the distrust that a lot of people across the nation have for AI, and I feel urgency to get our act together, to make sure that we can tell a compelling narrative to explain why AI is actually really good for the world. It turns out we all get excited about productivity improvements, but when a contact center worker is scared of losing their job, or a fast food worker hears a politician say, "Yep, guess what? These AI people, they're going to make your job go away,"
that creates a lot of fear and distrust of AI that we don't really see here in Silicon Valley. So I think to win people over, we need to make sure technology genuinely benefits everyone very broadly. And I think there is a path to that: AI can make individuals much more effective and much more productive.
But to get the tools available to everyone, teach everyone to use them, do the upskilling, improve the tools... I feel like, with the concept of a 10x engineer, I think with AI we can have 10x marketers, 10x analysts, 10x finance professionals. But to actually make that happen, it feels like there's a lot of work ahead of us, and I worry that we have not yet won the trust of a lot of people in this country. >> What's your favorite way that you use AI today? >> Gosh.
Maybe I'll share one that is not widely known. I use... >> Everyone's like, go on. >> No.
Yeah. [laughter] I use AI as a brainstorming companion much more than even my friends know. And the trick is, it turns out... >> Do you use one model?
Do you use multiple models? >> Multiple models. >> Asking for a friend.
>> I see. Yeah. Yeah.
>> For coding, I love Claude Code, and I'm increasingly using OpenAI Codex as well. For brainstorming I use multiple models. It turns out the trick is: AI is very smart, but getting context in is difficult.
And so when brainstorming, I find that a lot of it is not "let me say some stuff, then give me ideas." It's making sure you have an extended conversation. A conversation is "give me three ideas" or "give me feedback." When I'm driving, I use voice, and I talk to AI quite a lot. Then I'll say, "summarize it for me," and I'll send it to my team and just get work done while I'm driving. >> In the final 10 seconds, what's a problem that you wish people would focus on more using AI?
>> Actually go and build stuff. For every one of you, this is a wonderful time to build. So if there's one thing you take away from what I believe, it's just go and build stuff.
There's so much cool stuff you can now build that just was not possible before. So build, build, build. >> I think that is a perfect way to end: build, build, build.
Andrew Ng, ladies and gentlemen. Thank [applause] you for your work. Andrew, thank you for your research. Thank you for being here.
>> Thank you. >> Andrew and DJ's conversation shows how AI is an ecosystem. For the US to win the AI race, it will take a multifaceted approach.
It's chips and AI infrastructure, yes, but it's also talent. We must continue to attract top global talent and offer opportunities to scholars and innovators from all around the world to build a successful life and career here. [music] You can find the full video of this and more from the summit stage at the Masters of Scale YouTube [music] channel.