Hello everyone, my name is Pat Yongpradit. I'm Code.org's Chief Academic Officer, and I want to welcome you to our webinar today exploring the ethics of AI. Without further ado, I'd like our panelists to introduce themselves. Panelists, please share your name, where you work, what you do, as well as your favorite example of artificial intelligence. Let's go ahead and start with Amanda.

Hi, I'm Amanda Askell. I'm a research scientist on the policy team at OpenAI. My background is in ethics, but I currently work on the evaluation of AI systems. So
a cool example of AI: as someone who spends a lot of time writing, I'm quite excited by language models that can generate and summarize text.

Oh yeah, I'm sure the students watching right now wish they had a language model at their disposal, and actually they just might; we'll talk about that later. Let's go to Deb. Deb, could you please introduce yourself?

Hi, I'm Deb Raji. I'm currently a Mozilla Fellow, and I'm also a fellow at the Algorithmic Justice League, and I do a lot of
work auditing deployed algorithms, usually government deployments of different algorithms or various machine learning products. My favorite application of AI: I'm really excited by a lot of machine learning for healthcare applications. I also have a least favorite application, which is facial recognition, which has been the subject of a lot of our audit work so far.

Yes, and we'll definitely be talking about that later. Thanks, Deb. Mehran, could you introduce yourself?

Sure, my name is Mehran Sahami. I'm a professor at Stanford University, and a lot of
my time is spent thinking about education around AI and other topics in computer science. And I'd say, having two kids that are getting near driving age, one of my favorite applications of AI is self-driving cars; they may be part of the first generation that never ends up having to drive themselves.

Wow. And last but not least, Natasha.

Hi everyone, I'm Natasha Crampton, and I'm Microsoft's Chief Responsible AI Officer. At Microsoft I put our AI principles to work across the company by making sure that we build and make available our technologies consistent
with those principles. So on a day-to-day basis I do things like helping to write policies, helping teams work through challenges that don't have obvious answers, and I also do some work on law reform, helping to contribute to the conversation that needs to be had about what new norms, laws, and standards we need in this space. My favorite application of AI is through an organization called Wild Me that we've partnered with. As you probably know, the possible extinction of animals is a real thing, and in fact, if we don't take action, by 2100 we could be facing extinction of about 38% of the world's species. What this application does is combine AI technology that helps identify wildlife species in images with the power of citizen scientists like you or me, to help track wildlife and make sure that we're identifying activities that would be inconsistent with their long-term survival. So I think that's a pretty cool application of AI that makes a real difference in the
real world.

Thank you, Natasha. Natasha, Mehran, Deb, and Amanda, thank you so much for joining us today. I actually want to start off with some questions for the audience, and I want to see how well they can identify these famous movie AIs. So let's start off with the first poll. Audience, get your fingers ready, because you're going to be answering some questions. Who is this famous AI? I'll give you five seconds to answer. Oh, too easy, right? Yep, too easy. You are correct: it is C-3PO from Star Wars. Let's go to the second question: who is this famous AI? And the majority has it again: The Matrix. This third question is a little bit harder, so let's try one more time: identify this famous AI. I knew people would have a harder time with this one, and actually, if I looked at this, I would have thought, oh, it's the Nest learning thermostat, but actually it is HAL 9000 from 2001: A Space Odyssey. Good job, audience. Now I want to start off with a very important question for our audience.
We probably have a mix of students, teachers, and adults, but pretend that I'm a 13-year-old and I asked you the question: what is AI? How would you describe AI? How would you define AI for a 13-year-old? Anyone can take this.

Well, maybe I'll jump in. It's good that we saw a few examples of AI in the movies, because I think one way to think about AI is trying to get computers in the real world to behave more like computers in the movies, or, if we want to be more specific, it's thinking about which activities require human thought to do. What we want to do with AI is have computers be able to do more of those kinds of things. So when we think about driving, or making decisions, or playing games, those are all things that require some level of human thought, and so if we can get a computer to do some of those things, we might consider that AI.

Okay, thanks, Mehran.

I'll jump in and say that
one of the interesting narratives that we see in science fiction is this idea of humans versus machines, and that there are going to be these superhuman robots that take over our lives. I wanted to jump in and say that that is not what today's AI is capable of. Really, AI is a collection of different techniques and approaches to empowering machines to do the sorts of things that humans do, but one of the prominent approaches today that is really gaining steam and being used in the real world is something called machine learning. The way I like to think about machine learning is to contrast it with more traditional approaches to software development. Traditional software development is sort of like writing a recipe: you need to know what the steps are, you need to know the ingredients, you write down the steps, and it leads to a certain outcome. The thing about machine learning is that it's more like learning from experience. In the same way that you might tell a toddler again and again not to touch that hot stove, they will probably do it anyway, and they will learn from experience that the stove is hot. Then they might be able to extrapolate from that experience to figure out that touching a toaster is probably going to be hot as well and might burn them. And so this machine learning process, where we're using lots of data to teach software how to learn from that data and find patterns, is a really exciting development, and that's really a
lot of what today's AI is about.

I want to highlight what you said: there is a difference between real AI and the AI we see in movies, and what we see deployed these days is mostly machine learning. So kids, keep that in mind. Amanda, Deb, we see a lot of AI in the movies, as we saw from the polls. What aspects are real and what aspects are fake?

Yeah, I can maybe start.
There's definitely a branch of researchers whose primary work is more speculative, so there are people thinking about the AI systems in the movies and trying to understand and replicate that level of human thinking with machines. But a lot of the work that I do is looking at products that we have today. Like Natasha mentioned, a lot of the products that are deployed and affecting people's lives today are machine learning products, and they have very specific characteristics that are very different from the sci-fi movie versions of AI. The way I like to describe it: if anyone's ever seen a Roomba, that's the most widely disseminated robot in the US right now, and it definitely does not look like the robots in the movies; it's much simpler. But if there were a systematic issue with the Roomba, some kind of mistake made in creating it, that would affect a lot of households; it would affect millions of people. So even if it's simpler than the image we see in the movies, we still have to pay attention to the ways we build these things. For me, I have a 12-year-old sister, and she doesn't understand what I do, so I try to explain it to her, and the way that I describe the difference between the models that I'm doing
my audit work on and what she's seeing in movies is that, for one thing, a lot of machine learning models, even though it's described as learning, often don't learn continuously the way a human will: a human will learn the alphabet, then maybe learn to write words, and just continuously take in feedback. A lot of these machine learning models, as we call them, like Natasha mentioned, will be defined by information; they're initially set up using information that's been provided, but ultimately, once they're defined, they don't keep adapting and evolving the way that we as humans keep adapting and evolving continuously. That's something that I think is often a little bit difficult for people to understand, just because of the branding of "learning"; we think, oh, learning is a continuous thing. Another way to think about it is: if you train a dog to do a very particular trick, that dog can do that trick, and if you want the dog to do a different trick, you have to train it to do a different trick. Machine learning models work that way right now, rather than the way humans work, where we're constantly learning new things and being very creative and innovative. So we're not quite there yet, and that's something that's really important for especially younger people to understand.

Thank you, Deb. Amanda, what do you say?

Yeah, I think the problem is that AI is used in this really
general way to refer to a lot of different things. So maybe I'd make the division something like: real AI, realistic AI, and then something more like implausible AI. Real AI is the stuff that we actually have just now; these are the applications that people have focused on, so it's more narrow, it's more machine learning based, it's doing things like translation and search, and it's the stuff that you already interact with, even if you don't know that you're interacting with it. Then there are realistic applications, which we don't yet have but that don't seem out of the realm of things that AI researchers will work on and possibly solve. We've already heard about self-driving cars; that's something we don't have yet, but we can imagine having it. It also includes things like better machine translation, maybe even things like computer assistants and teachers, or things that can summarize papers for you. Those are things we don't necessarily have in the world, but they seem pretty plausible. And I think the same is true of more general systems: systems that can do more than one thing, so they don't have to be tailored to a very narrow application, but can teach you about mathematics if you ask questions about mathematics, and can also summarize a book if you ask it to. And then I think there are the ones that are more implausible. A lot of movie depictions of AI are both humanlike and very often robots, and so it can be really easy to think that a very powerful AI, a very general AI, must look a lot like a human and have a person's kind of motives; to make it interesting, we make it really malicious, for example. But it's not necessarily the case that we're going to see something very humanlike; even if you see something that can learn in these really general ways, that doesn't mean it's going to be a human-like system.

Thanks, Amanda. I have a question on behalf of, again, the kids on our webinar today. Students out there, I bet you're wondering: why should I care about AI, other than it being cool in the movies and being some kind of technical trend right
now? Why do I really need to know about it? How might it affect my life? Let's start off with Mehran. You said you have two kids who are about to learn how to drive, or are learning how to drive. So other than the fact that they might not have to drive, which is something they might look forward to, or not, why should kids in general care about AI?

Well, I
think if you spend any time online these days, which is basically all of us, you're interacting with AI in a bunch of ways, many of which may not make it clear that what's actually powering something under the hood is something we would consider AI. Some simple examples: if you watch videos on YouTube or Netflix, how do they make recommendations about other things you might like? The technology that powers that is something we would consider a form of AI. If you play video games, oftentimes the characters you're interacting with are trying to take some novel action depending on what you're doing in the game; we would think of that as a form of AI. If you send email, your email is being filtered for things like spam using AI. If you're on a social network, things like friend recommendations are powered by AI. So AI is out there in a bunch of different ways; whether or not you're actively thinking about it, it's interacting with you in a lot of different ways.
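The recommendation example can be made concrete with a toy sketch. This is my own illustration of the "people who watched X also watched Y" idea, not the algorithm any real service uses; the titles and viewing histories here are made up:

```python
# Toy co-occurrence recommender: recommend the title that is most often
# watched together with what a user has already watched.

from collections import Counter
from itertools import combinations

# Made-up viewing histories, one set of titles per user.
watch_histories = [
    {"space opera", "robot drama", "alien documentary"},
    {"space opera", "robot drama"},
    {"space opera", "robot drama"},
    {"robot drama", "cooking show"},
    {"space opera", "alien documentary"},
]

# Count how often each ordered pair of titles appears in the same history.
co_occurrence = Counter()
for history in watch_histories:
    for a, b in combinations(sorted(history), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(watched: set[str]) -> str:
    """Score every unwatched title by how often it co-occurs with watched ones."""
    scores = Counter()
    for title in watched:
        for (a, b), count in co_occurrence.items():
            if a == title and b not in watched:
                scores[b] += count
    return scores.most_common(1)[0][0]

print(recommend({"space opera"}))  # robot drama
```

Real recommenders use far richer signals and learned models, but the core idea, finding patterns in lots of user-behavior data rather than hand-writing rules, is the same.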
And so the more you know about it, the more you can make informed decisions in those interactions.

Thanks, Mehran. Would anyone else like to add to why a student these days should care about AI, or care about learning about AI?

I'd add that Satya Nadella, our CEO, talks about AI as one of the most transformative technologies of our time, akin to other advances in technology like the printing press or electricity or the internet, and with all of those advances, new issues have resulted as well. So I think my first reason would be: this is game-changing, and you want to be part of a game-changing shift in technology and understand its impact on society so you can be involved. As Mehran was just saying, you're going to be consuming AI-powered services, you're going to be impacted by AI-powered services, and so I think it's fantastic to really want to understand those services. AI is not just magically created; it doesn't just turn up. It's made by human beings, and there are lots and lots of decisions made along the way. So if you come to understand how the technology works and its impacts on people and society, you can have a voice in how technology serves society going forward, and I think that's a really exciting reason to be involved.

Thanks, Natasha.

Yeah, I definitely agree with everything that's been said. I guess I'll add that there are algorithms in
programs that we choose to engage with, such as recommendation systems when we watch Netflix or look through a social media feed. But there are also a lot of algorithms being used by government agencies, being used by your schools maybe, that you might want to learn more about: algorithms that you might not necessarily be aware of unless you educate yourself about how they show up to affect your life and the lives of others in your community. I think that's incredibly important if you're hoping to be someone who understands more of how decisions are made about you and about important things in your life. So for anyone who's curious to better understand the systems that affect them, it's super important and super relevant to care about AI, and to care about these algorithms and understand how they work.

Gotcha, thank you.

Yeah, I think another issue is that it's easy to think that AI is
only going to be related to things that are very math- and science-y. I think that's a kind of bias we might have, but actually, if you look at a lot of AI applications now, AI is being applied to art, to music, to text, to things you wouldn't consider very traditional science and mathematics domains. So it's really easy to think, well, it won't affect me unless I want to go into the sciences, but it might affect you if you actually want to go into law, or art, or writing. Historically, there were times when people said, why should I learn computers? I'm not going into science, so I don't need to learn how to use a computer. Now we would look back on that and say it was really naive; computers were actually going to affect a huge number of industries. And I think we should feel similarly about AI just now: it's not just something that's going to affect a narrow set of occupations; rather, it might end up affecting a whole host of things that you might want to do.

Yeah, thanks, Amanda. Natasha, you mentioned an organization; was it Wild Team? What was the name of it?

It's called Wild Me.

Wild Me. And they were using AI to address animal extinction. For all the panelists: how can AI be used for social justice or social good? Just as a little plug, Code.org's AI
for Oceans tutorial follows our theme for the Hour of Code this year, and I think it was the Hour of Code theme last year as well: CS for Good. And what we're talking about right now is AI for good, AI for social justice. What are some examples of how AI can be used, or is already being used, for social justice and social good?

An example I really like is a project from Google Ghana. There's an issue that a lot of the farmers in that country face where they try to understand how crop diseases show up in their plants, and it's very difficult, just by sight, to tell what kind of disease might be affecting different types of crops. So they did a project where they collected a dataset of affected cassava plants and plants that were not affected, and it was a really interesting project because it was so connected to a real issue in that community. Computer vision, which in this case is the ability of a machine learning model to distinguish between different images, was super helpful in figuring out which plants were sick and thus needed more care, and which plants were okay. It was a great example of collecting datasets that are useful to the community, to address a problem that community was really facing. So that's the example I often think of when I think of AI for social good.

Thank
you.

Other examples out there? We've got addressing animal extinction, and crops.

Maybe another one to throw in is looking at how to help different countries develop economically. There's some work the United Nations does, for example, to try to figure out where economic development is happening: where do you need to send food shipments, where do you need to send supplies, what kind of aid to send? That's a hard thing if you're trying to monitor it on a global level, but it turns out, if you get some satellite imagery, from that imagery you can do the kind of vision-analysis work that Deb was talking about to understand, for example, where there's electricity usage, because you can actually see lights; where there are different kinds of agricultural development, because you can identify which parts of the satellite image are crops and how extensive they are; and where there's water and how it's being used. That gives you indications as to where different areas are developing and how quickly, because you can look at these satellite images over time, and then help figure out where aid needs to be sent in the future.

Awesome.

I think there are also examples that might feel slightly indirect but seem pretty important. Recently DeepMind released AlphaFold, which is helping us see the shape of proteins, and things like that have down-the-line applications: science and medicine applications of ML, which could mean things like assisting with new drug discovery. That's a way of doing a lot of social good, but you're doing
so early in the process. I think another one we might not think of is machine translation. These are things we've had around for a while, but as they get better, they let more people access things like documents on the Internet, or be able to talk with one another, and that seems like a really important thing for social good to me.

So, students and teachers out there, obviously there are examples of AI and machine learning being used for social good, but as you know from the news, sometimes there are issues in the way AI is used, and some unintended consequences. Panelists, what are some of the potential misuses of AI? Or, another way to say it: everyone's trying to use AI for a positive outcome, but sometimes there are unintended negative outcomes. What are those outcomes, what are we seeing already, and what scary things are even possible if they're not happening already?

Yeah, so like Natasha mentioned, one
of the big differentiators between machine learning, or AI, and traditional software is that with traditional software the software engineer had a lot of control to define the rules to make or automate a decision. As a result, there was a certain amount of visibility and control around what the steps are, the steps of the recipe to get to the cake, effectively; whereas with machine learning, a lot of those steps are defined by data, and that raises a bunch of different issues. For one, a lot of the models we use today are quite large and resource-intensive, so they require a lot of data, and sometimes it's data that we don't understand, or that we didn't necessarily give permission to be used as part of a machine learning model, so there are a lot of concerns there. There's also the fact that, just because the datasets are so large, it becomes very difficult to understand what part of the data is defining the program and the decisions the program is going to make. As a result, there are a lot of challenges in trying to understand the steps that come out of the model, the recipe that ends up being developed using that data. And as a result of that, and this is a lot of the work that I do, we have a lot of difficulty even properly evaluating or understanding what it means for the system to work or not work, just because some of the consequences of deploying that model could be something we can only observe days later, or years later.

Wow.

To add to what Deb said: in addition to thinking about the data challenges, the data-bias challenges, the challenges that can come when algorithms are not trained in a really thoughtful way, there are some challenges that come when people are not thoughtful about the use
cases to which they put AI technologies. Some of these technologies, if we take facial recognition as an example, can be put to wildly different uses. You might have unlocked your device this morning using facial recognition, and that's a pretty constrained use case; the consequences of something going wrong, if your device can't recognize you, are not all that great. You might be put to a bit of inconvenience because you have to enter your PIN, or you might have to call building security because you can't get into the building. But that's a fundamentally different thing from using facial recognition for mass surveillance of people at a protest, or to persecute a marginalized group. So there are a lot of choices you have to make when you're building an AI system, and a lot of choices you have to make when you're deploying an AI system, and if you don't think through both of those things very carefully, think very broadly about the stakeholders you're impacting, and think about the readiness of the technology, things can go wrong at both of those ends. You really have to bake in safeguards from the very beginning, and that then allows you to realize the potential of the technology.

Got it. Anyone else want to add something? Mehran?

I was going to say, I think in a lot of ways AI amplifies certain aspects of society. Some of those aspects can be good, when we try to think about addressing problems of social good, but there are also aspects of society that we don't want to have amplified
: if you have bias in the world, for example, that's not something we want to amplify. It's also the fact that AI makes some things easier, and that can be good or bad. It's great when it helps you around the house, because now you might have a personal vacuum like a Roomba that helps clean up; but at the same time, if you think about autonomous technology, you can also mount a weapon on it and engage in warfare. At one level we might say that's good, because then we don't have people dying on the battlefield; at another level, if it makes it easier to engage in warfare, that might not be something we want. So, as Natasha is saying, we need to be very judicious not only about what AI we can build, but also about what things we actually want to amplify in our society, to think about what the right problems for AI to address are.

Amanda, this question is for you, starting with you but then for everyone else as
well, let's talk about the ethics. I know a lot of your research is into the ethics of using AI. What are the principles or ethics that should govern how people develop or use AI?

Yeah, I've thought about this a fair bit. I think the first example I'd give of a principle that feels important, but is maybe an unusual one that people don't talk about here, is patience. In part, one of the issues is that we're seeing a lot of new technology where we're trying to predict the consequences, and I think there's a temptation to try to develop and deploy things really quickly. In reality, what you want to do is take the time required to slowly roll things out, see if they have any consequences you didn't expect, and make sure they're rolled out in a way that's fairly restrictive at first, so that, as others have said, the consequences of something going wrong aren't too high. Then, if that's okay, you do it again. Rather than having some kind of sweeping change, you have a slow rollout where you can control and understand the consequences of your system. And that requires an environment where we're all willing to be a little bit more patient about how things are rolled out and how they're developed, and where developers aren't just incentivized to put something out to market straight away in a really large and uncontrolled way.
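The slow, staged rollout Amanda describes resembles what software teams call a canary or percentage rollout. Here's a minimal sketch (my own illustration, not from the talk; the stage sizes and user names are made up):

```python
# Toy staged rollout: a feature is exposed to a growing fraction of users,
# and each stage only proceeds if no problems were observed at the last one.

import hashlib

def in_rollout(user_id: str, fraction: float) -> bool:
    """Deterministically assign each user a bucket in [0, 1)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000
    return bucket < fraction

def next_fraction(current: float, problems_observed: bool) -> float:
    """Widen the rollout only when the previous stage looked safe."""
    if problems_observed:
        return 0.0  # roll back and investigate
    return min(1.0, current * 10)  # e.g. 0.1% -> 1% -> 10% -> 100%

fraction = 0.001
for stage in range(4):
    users_in = [u for u in ("alice", "bob", "carol") if in_rollout(u, fraction)]
    # ...monitor this cohort for unexpected consequences here...
    fraction = next_fraction(fraction, problems_observed=False)

print(fraction)  # 1.0 after four safe stages
```

The point is structural: each widening of exposure is gated on having watched the previous, smaller stage for the unexpected consequences Amanda mentions.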
So I think that maybe an underrecognized principle here is just a principle of patience.

All right. So again, I know there are a lot of students on this webinar: write down patience as one of those ethical principles for developing or using AI. Patience. All right, Deb, Mehran, Natasha, let's add to that list.

Yeah, I was going to say, I really love that; I think that's excellent. I also find that, like Natasha mentioned, there does need to be a little bit of caution, in terms
of reflecting before you even build something; it can't always be a reactive situation. You have to take the initiative to think about what could go wrong before you build the thing, and I think that's very connected to patience, that idea of caution and being careful. I also just really like that idea of patience because I find that, like you were just mentioning, a lot of the challenges we have with machine learning deployments arise when people rush: they don't do a good job collecting data in a way that's respectful, or they don't take the time to properly label the data, or properly scope out the context it can be deployed within, or communicate and document everything. So I actually agree with Amanda, and I'm just going to use another synonym for patience, which is caution.

Thanks, Deb.

I'll jump in with two: fairness and transparency. We really want the AI systems that
we build to be fair, and to be fair along a couple of dimensions. We don't want a situation where a system works well for one group of people and works really badly for another group of people. We also don't want a system that hands out resources or opportunities to certain people more than others; for that type of system, you can think about a hiring system or a loan-application system: we don't want certain people getting all the resources and opportunities while others consistently miss out. And we also don't want to do the thing Mehran was talking about earlier: we want to try to minimize the possibility that we would be perpetuating stereotypes. That is a really important objective, and it's also a really hard practical problem that many people are working on today. So those are some ideas about fairness. On transparency: because these systems are really affecting our lives in ways big and small, it's really important that we, as stakeholders for those systems, understand why systems behave the way they do. If I am being impacted by an AI system that's helping with a hiring process and I miss out on making the candidate list, I want to know why. So there's a sort of understanding that we have to communicate through AI systems, so that they're not just black boxes that nobody knows how they operate. I'd say that transparency is a really important principle too.

Thank you.

Just to add to that, I would say that sometimes in technology we think about what we can build, and an equally
important, or perhaps more important, question to ask is who we are building for. Part of that, even though I'm excited about self-driving cars, is understanding who's actually impacted by the technology. There are a lot of people who will be positively impacted by self-driving cars, because they may be safer, and people who may not otherwise have mobility options now have easier ways of getting around. But at the same time, there are a lot of people whose livelihood is made through transportation, people like taxi drivers or truck drivers, and if they're potentially displaced in
their jobs because of AI, we need to think through what the full set of impacts is. That's a place where having some deliberation and some early forethought about how we help the people who might be adversely affected by AI helps us reach better outcomes for everyone in the end. Speaking of the future and jobs: a lot of our audience will be starting their careers in probably 5 to 10 years. What kinds of jobs do you all think will and won't be available
anymore? Let's help these young people out: what kinds of jobs will be available, and what kinds won't? What do you think? That's a tough question; I don't want to be on record saying anything. I was going to say, something I wanted to point out was the point Amanda made earlier about who can be in the AI space. Earlier there was an assumption that you had to be a computer scientist or a
mathematician to have something interesting and important to say about AI and how it's developed, built, managed, and deployed. I think now we're realizing that we need the help of all these different types of people with different interests and different skills. So I think there's definitely going to be this interesting new opportunity for anthropologists and social scientists and people with all kinds of different skills and expertise to be able to participate in
this field. So even if you're someone who thinks AI is really cool, who thinks machines are really cool, but you care about how they impact society and want to study that, or you think that's the more interesting question, I think in the future there will definitely be more opportunity for that kind of job or role. Yeah, ethicists, philosophers, they're all needed. I completely agree with that. It's actually one of the parts of my job that I love most: getting
to work with people from all different walks of life, from all sorts of different disciplines. They make me think about things I'd thought about before in different dimensions, and I think that's really exciting. You certainly might end up with a job like mine, which didn't exist even a couple of years ago. So I think there's going to be a whole new sort of discipline that evolves around the practice of responsible AI: what are the sorts of things you need to think
about when you're baking responsible AI considerations into the way you're building systems? How do you move from a principle to practice? So I think there's going to be a whole new discipline of people working on the hands-on ways in which we can help make positive choices when we're building these systems, and there's going to be a whole discipline of people who look at these systems from the outside in and provide a perspective on whether they're doing the things we want them to do, or that we expect
them to do, kind of auditing systems and providing that outside-in perspective. Deb's done some of this really cutting-edge work already, but I can really imagine a whole new profession arising out of that type of work. Got it. Yeah, I think one of the reasons it's hard to give a very definitive answer here is that when we think of automation, a lot of people think of the automation of low-skilled work, and they think of robotics. But one of the interesting things about AI is that
when AI can do tasks, it's often high-skill tasks or knowledge work. So I think that should make us aware of the fact that this is something that's going to affect a whole host of professions, and it's actually probably better not to have this sense that here are the jobs that are completely safe and here are the jobs that are at risk. Obviously we should think about that, but I think it's
better for us to actually assume that this is going to affect everyone, and ask how we're going to deal with that in a way that makes sure no one is left out. Thanks, Amanda. Yeah, I think Amanda's point is super important. Part of the reason people learn about things like science and physics is not necessarily because we need a bunch more people studying black holes; it's because when you understand something about that, it allows you to make better decisions about a lot of other things that impact your
life. In terms of future jobs, there will be a lot more people who will not necessarily be building AI but will be thinking about how they can use the kinds of AI that exist to enhance the jobs they do. To that extent, learning a little bit about AI, no matter what you do, will be important, because it'll probably be harnessed in your job in some way in the future, probably in a lot of ways we can't even see right now. But understanding something about the technology
helps you make better choices about how to use it and deploy it. Yeah, thank you. Audience, raise your hand (I think there's a button where you can raise your hand) if you want to learn more about AI. Panelists, can you see the number of hands being raised? Awesome. Okay, so let's help them out: if I'm a kid who wants to learn about AI, where would I start? I really like to find a project that I think is interesting, so I recommend looking
at things like Kaggle competitions; they have data sets and problems, and I think it's good to acquire skills by finding a project you care about and then acquiring the skills you need to solve it. There are lots of online resources, like fast.ai, that will help you do that. So yeah, I recommend finding a problem you want to solve and then finding the tools that help you solve it. Thank you. Yeah, I was going to recommend online courses like fast.ai as well. Also, a lot of
Coursera offerings just make it very accessible to start hearing the terminology, learning what the words mean, and navigating your way through. I think this field is really interesting in the sense that a lot of the foundational knowledge is not completely inaccessible; it's something people have worked really hard to make tangible in certain ways. So there are a lot of resources, even for younger kids, that explain some of these concepts at a very basic level. I think that's a great place to just start. And once
you're getting familiar with the concepts and you have an intuition about some of these questions (what is machine learning, for example, what is data, how are those things connected), once you understand these basic concepts, then when you're ready to start learning the math or start coding, you'll feel much more confident. Thanks, Deb. I would endorse both of those bits of guidance. I would also say you might find some interesting challenges in places where you already are. So,
for example, I know that with our Minecraft education offering we have some challenges within Minecraft that help you understand AI concepts through trying to solve real problems like climate change. So always keep your eyes open. I would say another way to try to develop your thinking about how these sorts of ethical challenges could materialize is to pay attention to the media. When you see the increasing reporting there is today of situations where AI's potential has been realized, and
equally situations where things have gone wrong, you can ask yourself questions: Who might be impacted? How could that have panned out differently? Does this remind us of a challenge we might have encountered in the past, or is this a whole new challenge? I think by having that critical mind when you're hearing about events as they unfold from your news sources, you can start to build your muscle for thinking about the implications of these new technologies. Thank you, Natasha. Any resources you'd point our students
to, Mehran? Well, there is a tremendous amount of online resources, as many other folks have alluded to, on Coursera and edX, and you can search for available AI resources at a variety of levels. The one thing I would add is that the hardest part is often just starting. Sometimes when you start (this was true for me, and it's true for a lot of students I work with) it can feel a little overwhelming, because you feel like there are all these
terms you need to know, and sometimes someone throws in some math that's complicated, or whatever the case may be. The thing to keep in mind is that a lot of people actually have challenges or struggles when they begin, and the important thing is just to work through it and you'll get there. I see that in lots of students: if they feel that it's hard and they just keep at it and work through it, they do well. So there may be a lot you
don't yet know when you start jumping in, but if you feel like it's overwhelming, don't worry; many people feel that way, and it turns out that if you just work at it over time, all the pieces begin to fit and make sense. So just keep at it. Yeah, and that's exactly why Code.org has created a bunch of resources around AI. You can find them at code.org, or just type "code.org AI" into a search engine and you'll find the page. There are videos there, lesson plans, a lesson plan about ethics, and you'll
actually find, like I said, videos, including videos featuring our panelists. They're famous folks beyond just their day-to-day jobs. So with that, I want to thank our panelists for joining us today. This is obviously going to be recorded, so whoever's watching this in the future, hello. But if you're not watching this in the future, you can check out this afternoon's episode of CodeBytes: Dance Party. It's happening at 1:30 Pacific / 4:30 Eastern; you can go to code.org/codebytes for details. And again, I
talked about the Code.org AI resources. Panelists, audience, thank you for joining us today. We will see you later. Have a happy Computer Science Education Week. Bye!