When will we have generative AI that is as smart as people? I think there's at least a 10% chance of something that could be catastrophically dangerous within about three years, and I think a lot of people inside of OpenAI would talk about similar things.

Many top AI companies, including OpenAI, Google, and Anthropic, are treating building AGI as an entirely serious goal, a goal that many people inside those companies think they might reach in 10 or 20 years, and that some believe could be as close as one to three years away. More to the point, many of these same people believe that if they succeed in building computers that are as smart as humans, or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive, and at a maximum could lead to literal human extinction.

The companies in question often say that it's too early for any regulation because the science of how AI works and how to make it safe is too nascent. I'd like to restate that in different words: they're saying, "We don't have good science of how these systems work or how to tell when they'll be smarter than us. We don't have good science for how to make sure they won't cause massive harm. But don't worry, the main factors driving our decisions are profit incentives and unrelenting market pressure to move faster than our competitors, so we promise we're being extra, extra safe."

Whatever these companies say about it being too early for any regulation, the reality is that billions of dollars are being poured into building and deploying increasingly advanced AI systems, and these systems are affecting hundreds of millions of people's lives even in the absence of scientific consensus about how they work or what will be built next. So I would argue that a wait-and-see approach to policy is not an option.

I want to be clear: I don't know how long we have to prepare for smarter-than-human AI, and I don't know how hard it will be to control it and ensure that it's safe. As I'm sure the committee has heard a thousand times, AI doesn't just bring risks; it also has the potential to raise living standards, help solve global challenges, and empower people around the world. If the story were simply that this technology is bad and dangerous, then our job would be much simpler. The challenge we face is figuring out how to proactively make good policy despite immense uncertainty and expert disagreement about how quickly AI will progress and what dangers will arise along the way.

[US Senate hearing on AI insiders' perspectives, September 17th, 2024]

The good news is that there are light-touch, adaptive policy measures we can adopt today that can both be helpful if we do see powerful AI systems soon and also be helpful with many other AI policy issues that
I'm sure we'll be discussing today. I want to briefly highlight six policy building blocks that I describe in more detail in my written testimony. First, we should be implementing transparency requirements for developers of high-stakes AI systems. We should be making major research investments in how to measure and evaluate AI, as well as how to make it safe. We should be supporting the development of a rigorous third-party audit ecosystem, bolstering whistleblower protections for employees of AI companies, increasing technical expertise in government, and clarifying how liability for AI harms should be allocated. These measures are really basic first steps that would in no way impede further innovation in AI. This kind of policy is about laying some minimal, common-sense groundwork to help us get a handle on AI harms we're already seeing, and also set us up to identify and respond to new developments in AI over time. This is not a technology we can manage with any single piece of legislation, but we're long overdue to implement some of these basic building blocks as a starting point.

Thank you. Mr. Saunders?

Mr. Chairman, Ranking Member Hawley, and distinguished members, thank you for the
opportunity to address this committee. For three years I worked as a member of technical staff at OpenAI. Companies like OpenAI are working towards building artificial general intelligence, AGI, and they are raising billions of dollars towards this goal. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." This means AI systems that could act on their own over long periods of time and do most jobs that humans can do.

AI companies are making rapid progress towards building AGI. A few days before this hearing, OpenAI announced a new system, GPT-o1, that passed significant milestones, including one that was personally significant for me. When I was in high school, I spent years training for a prestigious international computer science competition. OpenAI's new system leaps from failing to qualify to winning a gold medal, doing better than me in an area relevant to my own job. There are still significant gaps to close, but I believe it is plausible that an AGI system could be built in as little as three years.

AGI would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause catastrophic harm via systems autonomously conducting cyberattacks or assisting in the creation of novel biological weapons. OpenAI's new AI system is the first system to show steps towards biological weapons risk, as it is capable of helping experts in planning to reproduce a known biological threat. Without rigorous testing, developers might miss this kind of dangerous capability. While OpenAI has pioneered aspects of this testing, they've also repeatedly prioritized speed of deployment over rigor. I believe there is a real risk they will miss important dangerous capabilities in future AI systems.

AGI will also be a valuable target for theft, including by foreign adversaries of the United States. While OpenAI publicly claims to take security seriously, their internal security was not prioritized. When I was at OpenAI, there were long periods of time when there were vulnerabilities that would have allowed me, or hundreds of other employees at the company, to bypass access controls and steal the company's most advanced AI systems, including GPT-4.

No one knows how to ensure that AGI systems will be safe and controlled. Current AI systems are trained by human supervisors giving them a reward when they appear to be doing the right thing. We will need new approaches for handling systems that can find novel ways to manipulate their supervisors or hide misbehavior until deployed. The superalignment team at OpenAI was tasked with developing these approaches, but ultimately we had to figure it out as we went along, a terrifying prospect when catastrophic harm is possible. Today that team no longer exists; its leaders and many key researchers resigned after struggling to get the resources they needed to be successful. OpenAI will say that they are improving. I and other employees who resigned doubt they
will be ready in time. This is true not just of OpenAI; the incentives to prioritize rapid deployment apply to the entire industry. This is why a policy response is needed. My fellow witnesses and I may have different specific concerns with the AI industry, but I believe we can find common ground in addressing them.

If you want insiders to communicate about problems within AI companies, you need to make such communication safe and easy. That means a clear point of contact and legal protections for whistleblowing employees. Regulation must also prioritize requirements for third-party testing, both before and after deployment, and results from these tests must be shared. Creating an independent oversight organization and mandated transparency requirements, as in Senator Blumenthal and Senator Hawley's proposed framework, would be important steps towards these goals.

I resigned from OpenAI because I lost faith that, by themselves, they will make responsible decisions about AGI. If any organization builds technology that imposes significant risks on everyone, the public must be involved in deciding how to avoid or minimize those risks. That was true before AI; it needs to be true today with AI. Thank you for your work on these
issues, and I look forward to your questions.

This committee has developed a promising bipartisan framework and proposals that I earnestly hope will become law. They are urgently needed to provide effective guardrails for AI systems. My name is David Evan Harris. From 2018 to 2023 I worked at Facebook and Meta on the civic integrity and responsible AI teams. In my role, I helped lead efforts to combat online election interference, protect public figures, and drive research to develop ethical AI systems and AI governance. Today, those two safety teams do not exist. In the past two years there have been striking changes across the industry: trust and safety teams have shrunk dramatically, secrecy is on the rise, and transparency is on the decline. Since leaving Meta, I have helped craft two bills about deepfakes and elections in California that await the governor's signature, working closely with policymakers in California, Arizona, and internationally. I am more convinced than ever that effective oversight of AI is possible.

Today, there are three things that I hope you take away from my testimony. First, voluntary self-regulation does not work. Second, many of the solutions for AI safety and fairness
already exist in the framework and bills proposed by the members of this committee. Third, as you said, not all the horses have left the barn; there is still time to act.

Back to my first point: voluntary self-regulation is a myth. Take just one example from my time at Facebook. In 2018, the company set out to make time on their platforms into "time well spent," reducing the number of viral videos and increasing more meaningful content from friends and family. The voluntary policy opened up a vacuum that TikTok was more than happy to step into. Today, Facebook and Instagram are fighting to claw back market share from TikTok with Reels, essentially those same viral videos that they sought to diminish. When one tech company tries to be responsible, another, less responsible company steps in to fill the void. While non-binding efforts such as the White House voluntary AI commitments and the AI Elections Accord are positive steps forward, the reality is that we've seen very little clear progress towards the promises made by tech companies in those commitments. When it comes to policies and laws governing AI, laws with "shalls" rather than "mays" are essential; without the "shalls," the legislation becomes voluntary, and many companies will delay or simply avoid taking meaningful actions to prioritize safety or harm reduction.

To my second point: we don't need silver bullets. The framework proposed by this committee's leadership already has so many of the answers. Two recommendations in particular in the framework are essential components for legislation: AI companies should be held liable for their products, and they should be required to embed hard-to-remove provenance data in AI-generated content. It is encouraging to see a bill on transparency in elections that would require labeling of some AI-generated material; more steps like these are needed.

This brings me to my final point: the horses have not left the barn. The misconception is that it is too late to do anything. It can be dizzying to watch the fast-paced releases of AI voice and image deepfakes and the growing role of biased AI systems making decisions about our lives, but there are still so many more uses of AI technology that have not yet seen the light of day. Next come the realistic video deepfakes, live audio deepfakes that can interact with millions of people at once, personalized election disinformation calls, and large-scale automated sextortion schemes targeting children. Those are just a few of the ones that we see on the horizon. We need to move quickly with binding and enforceable oversight of AI. It is possible. If you take action now on the promising framework and bills already before you, you can rein in the Clydesdales and the centaurs waiting just behind the barn door. Thank you.

Yes, perhaps because my equine metaphor resonated so much with you, I could offer
another story from the animal kingdom, but this time a cautionary one. There is a metaphor used in the tech industry, that I know of being used in at least two of the very biggest tech companies, called the metaphor of the bear. You senators are the bear in this metaphor, along with other regulatory bodies, the regulators, and the tech companies in this metaphor are people running away from the bear as fast as they can. In this story, the bear eventually catches up and eats the slowest person, eating the slowest tech company. Thus the moral of the story, if you are a tech company, is: just don't be the second slowest. That is the strategy. As long as you can point to another tech company that is doing a worse job than you are on trust and safety, the idea is that that is the optimal allocation of resources in your company. I think what's important here is that all of the tech companies, not just one, not just, for example, OpenAI (we're not singling out one or another), are engaged in
that kind of strategy.

Let me ask Ms. Toner. You wrote in your testimony that your, quote, "experience on the board of OpenAI taught me how fragile internal guardrails are when money is on the line, and why it is imperative that policymakers step in." I know there are limits on what you can discuss publicly, and I'm very respectful of them, but maybe you can tell us what you had in mind when you wrote that sentence.

Certainly, Senator. There has now been enough public reporting of some of the kinds of incidents that demonstrate this dynamic, including situations like a process known as the Deployment Safety Board, which was set up internally. It was a great idea, something to try to coordinate safety between OpenAI and Microsoft when using OpenAI products. It's been reported publicly that in the early days of that process, Microsoft was in the midst of planning a very big, very important launch, the launch of GPT-4, and Microsoft went ahead and launched GPT-4 to tens of thousands of users in India without getting approval from that Deployment Safety Board. Another example would be that, since I stepped off the board, there have been concerns raised from inside the company that in the lead-up to the launch of their 4o model, which was the voice assistant that had very exciting launch videos, the model was launched the day before a Google event that OpenAI knew might upstage them, and there have been concerns raised from inside the company about their inability to fully carry through the kinds of safety commitments that the company had made in advance of that. So there are additional examples, but I think those two illustrate the core point.

Thank you, Ms. Toner. I just want to stay with you and maybe pick up there. My understanding is that when you left the OpenAI board, one of the reasons you did so is that you felt you couldn't do your job properly, meaning you couldn't effectively oversee Mr. Altman and some of the safety decisions that he was making. You had said this year, and I'm just going to quote you,
that Mr. Altman "gave inaccurate information about the small number of formal safety processes that the company did have in place," that is, that he gave incorrect information to the board. To the extent you're able, can you just elaborate on this? I'm interested in what's actually being done for safety inside this company, in no small part because of what he told us when he sat where you're sitting.

Thank you, Senator. Yes, I'm happy to elaborate to the extent that I can without breaching any confidentiality obligations. I believe that when the company has safety processes, they announce them loudly and proudly, so I believe that you and your staff would be aware of the processes they have in place. At the time, one that I was thinking of, which was one of the first formal processes that I'm aware of, was this Deployment Safety Board that I just discussed, and this breach by Microsoft that took place in its early days. Since then, they have introduced a preparedness framework. I want to commend many of these companies for taking some good steps; I think the idea behind the preparedness framework is good, and to the extent they execute on it, that's great, but there have been concerns raised about how well they're able to comply with it. It's also been publicly reported that the really respected expert they brought in to run that team has since been reassigned from that role, and I worry about what that means for the influence that that team is able to exert on the rest of the company.

I think that is illustrative as well of a larger dynamic that I'm sure all of the witnesses here today have observed, which is that there are really great people inside all of these companies trying to do really great things. The challenge is that if everything is up to the companies themselves and to the leadership teams, who need to make trade-offs around getting products out, making profits, and attracting new investors, those teams may not get the resourcing, the time, the influence, and the ability to actually shape what happens that they need. So I think many of the dynamics that I witnessed echo very much what I'm hearing from my fellow witnesses.

Let me just put a finer point on it, because Mr. Altman, as I said, testified to us on this connection this past year. Here's part of what he said: "We," meaning OpenAI, "make significant efforts to ensure that safety is built into our systems at all levels." And then he
went on to say, and I'm still quoting him, "Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems." In your experience, is that accurate?

I believe it is possible to characterize the company's activities accurately that way, yes. The question is how much is enough, who is making those decisions, and what incentives are driving those decisions. In practice, if you make a commitment, you have to write that commitment down in some words, and then when you go to implement it there are going to be a lot of detailed decisions you have to make about what information is shared with whom, at what time, and who is brought into the right room to make a certain decision. Is your safety team, whatever kind of safety team it might be, brought in from the very beginning to help with the conception of the product, to really think from the start about what implications it might have? Or are they handed something a couple of weeks before a launch deadline and told, "Okay, make this as good as you can"? Here I'm not trying to refer to any specific incidents at OpenAI; I'm really referring to examples that I've heard reported publicly and heard from across the industry. There are good efforts, but I worry that if we rely on the companies to make all of those trade-offs, all of those detailed decisions about how those commitments are implemented, they're just unable to fully account for the interests of a broad public. And I think you hear this as well; I've heard this from people in multiple companies, sentiment along the lines of, "Please help us slow down. Please give us guardrails that we can point to, that are external, that help us not be subject only to these market pressures."

Just in general, is your impression that OpenAI is doing enough, in terms of its safety procedures and protocols, to adequately vet its own products and to
protect the public? I think it depends entirely on how rapidly their research progresses. If their most aggressive predictions of how quickly their systems will get more advanced are correct, then I have serious concerns. Their most aggressive predictions may well be wrong, in which case I'm somewhat less concerned.

In your written testimony, you make, I think, a very important and helpful point about AI development in China and why the competition with China, though real, should not be taken as an excuse for us to do nothing. Could you amplify that? Because we've heard a lot of folks sitting where you're sitting over the last year and a half raise the China point and usually say, "We mustn't lose the race to China; therefore it would be better if Congress did little to nothing." You think that's wrong. Just explain to us why.

I think that the competition with China is certainly a very important consideration, and we should be keeping a very close eye on what they're doing and how US technology compares to their technology, but I think it is used as an all-purpose excuse not to regulate and an all-purpose defense against any kind of regulation. I think that's mistaken on a few fronts. It's mistaken because of what's happening in China: they are regulating their sector pretty heavily; they are scrambling to keep up with the US; they are facing some serious macro headwinds in terms of economic problems and access to semiconductors after US export controls. So China has its own set of issues, and we shouldn't treat them as absolutely raring to go and about to pass us at any moment. I think it also totally belies the fact that regulation and innovation do not have to be in tension. This is a technology, AI, that consumers don't trust; there have been recent consumer sentiment surveys showing that if people see AI in a product description, they're less likely to use the product. So if you can implement regulation that is light-touch, that increases consumer trust, and that helps the government be positioned to understand what is going on with the technology, you can regulate in really sensible ways without even impacting
innovation at all, so it's irrelevant whether it's going to affect the race with China.

As I'm sure you're aware, the US AI Safety Institute recently announced a deal with OpenAI and Anthropic, who agreed to share their models with researchers before and after deployment for research, for testing, and for evaluation. My first question is for Ms. Toner and for Dr. Mitchell: what do you think about this agreement?

Thank you, Senator Padilla. I haven't seen the details of the agreement, but in principle it sounds excellent; I think it's a great step forward. I'm excited by the work that the AI Safety Institute is setting out to do, and I echo Mr. Harris's call for them to be as well resourced as they possibly can be. I think the success of an arrangement like this will depend on a lot of details about timing and access and what kinds of assets are allowed to be accessed in what kinds of ways. So I'm optimistic about it and very pleased to see the agreement, but we'll have to see
where it goes from here.

Someone mentioned open source. Is that a consideration for us? What should we think about open source in this context of protecting the public? Dr. Mitchell?

Thank you. Yes, this is something that I actually work on within my professional capacity. There's this term, "open source," that I think is used a lot without fully understanding what it means; it's also used, I think, in situations where it doesn't necessarily apply. The thing to recognize is that there's a gradient of openness: you can have things that are more open or less open depending on foreseeable risks. For example, one of the things I've been involved with at Hugging Face is implementing gating, such that you can't access a model or a data set unless you've provided personal information, or unless you've taken a training and can show the certificate, that kind of thing. So there's an entire spectrum from fully closed to fully open, where I think we can do a lot of really good work depending on the specific foreseeable problematic uses of different models and data
sets.

If I could? Sure. I've written extensively on this topic of open source, and I think there are a number of important points. The first one that I would like to caution you about in advance is that in the European Union, in the lead-up to the signing and finalization of the EU AI Act, a number of companies pushed very, very hard to get full exemptions for open-source AI from the provisions of the Act. I would expect, if I were in your position, that you will receive these types of requests here, if you have not already. Exempting open-source AI from AI regulation is not appropriate.

Some people now, even including in the federal government, are using a different term, "open weights," which is perhaps a more appropriate term; the executive order on AI uses the term "dual-use foundation models with widely available model weights." There are a lot of terminology issues here. I'll go with "open weights"; I think that's the clearest one right now. "Unsecured" is also a way to think about it. When you release open-weights, unsecured models, you never get them back. They will live on. You can do things to make them hide out in only the hidden corners of the internet; you can make them less available; but that is the case with all of these open-weights or unsecured models. Unfortunately, we've seen a number of companies take that strategy without doing significant safety testing on their models. That's why I'm very concerned about this, and I believe that there should be no exemptions to any
AI laws for open-weights or unsecured AI systems.

When we're talking about AVs that act in the world without humans, and they're using AI, that brings up the difference between intelligence and agency, between systems that think and systems that can act. So, Ms. Toner, let me come to you on this. When you look at this difference between intelligence and agency, do you see these as different concepts? Do they carry different threats? Should they be approached separately, differently? Does this play into AGI? Tell me your thoughts on that.

Thank you, Senator. It's a good question, and a timely question. We actually have a paper coming out introducing exactly this issue for policymakers in a couple of weeks. What I would say is that I think these ideas of agency, or agents that can take actions autonomously, are not at all separate. It's not the same thing as intelligence, but we are already seeing the ability of companies to take language models, chatbots like ChatGPT and others, add a little bit of additional software, add a little bit of additional development time, and convert them into systems that can go out and take autonomous action. Right now the state of these systems is pretty basic, but certainly, talking to researchers and engineers in the space, they are very optimistic and very excited about the prospects of this category of system, and it's something that is very actively under development at, as far as I'm aware, all of the top AI companies. The goal initially is perhaps something like a personal assistant that could help you book a flight or schedule a meeting, but ultimately, Mustafa Suleyman, formerly at Google, now at Microsoft, has talked about, for example: could you have an AI that you give, I forget the number, something like $100,000, and it comes back to you a little while later with a million dollars that it's made, because it's run some business or done something more sophisticated? At the limit, certainly, this is very related to ideas around AGI and advanced AI more generally. The founding idea, or founding excitement, of the field of AI, I think, for many people, has been the idea of systems that can take very complicated actions and pursue very complicated goals in the real world, and I see a lot of excitement in the field that we might be on that path.

A number of you have mentioned whistleblowers and the need for protecting them. Maybe anyone who would like could expand on that point. You are all insiders who have left companies or disassociated yourselves from them in one way or another, and
I'd be interested in your thoughts. Mr. Saunders?

Thank you, Senator. When I resigned from OpenAI, I found that they gave every departing employee a restrictive non-disparagement agreement, and you would lose all the equity you had in the company if you didn't sign this agreement, under which you had to effectively not criticize the company and not tell anybody that you'd signed the agreement. I think this really opened my eyes to the kinds of legal situations that employees face if they want to talk about problems at the company. And I think there are a number of important things that employees want in this situation. One is knowing who you can talk to; it's very unclear which parts of the government would have expertise in the specific kinds of issues you want to raise, and you want to know that you're going to talk to somebody who understands the issues that you have and has some ability to act on them. You also want legal protections, and this is where I think it's important to define protections that don't just apply when there's a suspected violation of law, but also when there's a suspected harm or risk imposed on society. That's why I think legislation needs to include establishing whistleblower points of contact and these protections.
I think a core part of the problem here is that the lack of regulation on tech means that many of these concerning practices are not illegal, so it's very unclear if existing whistleblower protections apply at all. If you're a potential whistleblower sitting inside one of these companies, you don't really want to go out on a limb and guess, "Well, is this enough of a financial issue that the SEC would cover me, or do I have to go talk to someone else?" If it's something that's novel, related to AI development or other technology development, where you have serious concerns but there's not a clear statute on the books saying the company is breaking the law, then your options are limited. So I think the need for whistleblower protections goes hand in hand with the lack of other rules.

I think the point that you've just made is really important: the failure to develop safety and control features in a product is perhaps not illegal, and therefore may not be covered by a strict reading of whistleblower laws, even if
it is a practice which is unethical and harmful. I think that's a very important point. We've seen examples already of our foreign adversaries using AI to meddle in our democracy. Last month, OpenAI revealed that an Iranian government-linked group had used ChatGPT to create content for social media and blogs, attempting to sow division and push Iran's agenda in the United States. There have been reports that China and Russia have also used AI tools deceptively to interfere in our democracy. I mentioned them earlier; I think the threat to our elections is real, and we're unprepared. Mr. Harris, you worked on a California law that seeks to safeguard our democracy. Are those the kinds of protections that you think would be effective at the federal level, and is there more that you would add to the California law?

Thank you so much for the question, Senator. I believe that in California it's difficult to make a lot of the types of laws that could be made at the federal level. There are a number of reasons for that. One is simply that the state agency infrastructure is dramatically smaller than the federal
agency infrastructure, and in California, in a situation of budget deficit, it's very hard right now to pass any legislation that carries any significant cost. That, I believe, is one of the biggest barriers to passing the legislation that we need. And I think that you have in front of you, in your framework, the type of legislation that California would not be able to achieve: things like licensing and registration and liability. Those would be very costly to enforce at the scale of a state, and to be honest, of all the states in the country, California might be the one they would cost more than many others, simply because of the location of the technology industry there. And there might be political conditions that make it harder in California to pass that type of legislation.

Again, I'll entertain any points about the California law that any of you would like to make, and if not, let me follow up. Mr. Harris, in your experience there, what was your takeaway from the tech companies? Were they supportive, helpful?
How would you characterize their reaction?

Thank you so much for the question. I believe that you need to look at two different phenomena. One is the outward presentation of the tech companies about legislation, and the other is what's happening behind closed doors. I made a reference in my opening statement to the idea of "shalls" and "mays." I have been surprised, in my work in the California legislature, by the way in which tech industry lobbyists, sometimes hiding behind industry groups, sometimes from individual named companies, are able to arrive at legislators' doors with requests to remove "shalls" and replace them with "mays": to take legislative language that was very well-intentioned and, at the 11th hour, turn it into something that is meaningless. It concerns me greatly, what I've seen in California. There are political realities: if draft legislation that comes from a civil society group like the one I have been working for, the California Initiative for Technology and Democracy, is too bold, the organizational sponsors of that legislation will be told this
isn't going to work and you're going to have to weaken it. And sometimes that comes in many rounds of weakening, and it can be very painful to watch.

Mr. Saunders, to the point you were just making, one of the things we've said repeatedly is that we have to have an online privacy bill that is federally preemptive before we start down the AI path, because people want to be able to firewall their information and keep it out of the public domain. And last week, Ms. Toner, and I want to come to you on this, Meta announced it was going to use public content from Facebook and Instagram in the UK to train their generative AI models. Meta said their goal was to reflect, let's see, British culture, history, and idiom, and that UK companies and institutions would be able to utilize the latest technology. I think we all know what that means. I'd love to hear what concerns you have over that announcement, and what limits should we place on companies like Meta that are using this data that is
shared on their platforms to train their generative AI models.

Thank you, Senator Blackburn. My understanding of that announcement, which I should admit I have only seen briefly and not dug into in depth, is that this was actually a practice Meta was already very much going ahead with in the United States, due to the lack of privacy protections here, and that the announcement last week was, I think, a sign they need that privacy bill. Indeed, I believe, and other witnesses may know better, that they had held off on initiating that process in the UK due to privacy protections that do exist in the UK. So to me this is actually an example of perhaps a success on the UK's part, if Meta felt the need to be a little more thoughtful, a little more deliberate, a little more selective about the ways in which they were using British users' data, because of the legal protections that existed there, which, as you rightly point out, do not exist in the United States. I don't know if others want to add to that.

I'm happy to
add on that. There was actually something on this same topic that got a lot of attention at the beginning of the summer. A lot of users of Facebook and Instagram found that they could opt out of the process of having their public data used for training of AI systems, and then they posted instructions. I saw this on both TikTok and on LinkedIn: users had posted instructions for how to go to the part of your Facebook or Instagram settings where you can opt out of having your data used. I tried to do it in Facebook, I tried to do it in Instagram, and I couldn't find the button. Then I posted in the comments there and said I can't find the button, and it turned out that everyone with an IP address in the United States couldn't find the button, because this was a feature that I believe was only offered to people in the EU, and perhaps in the UK; I heard different stories about which parts of the world. But this idea that we Americans are second-class citizens, that we don't even have the right that Europeans or people in the UK have to object to photos of ourselves, photos of our families, of our children, being used to train AI systems, and AI systems, at that, that we don't even have confidence in: we don't know if they will accidentally release personal information about us in the future, or make images that look just like us. So I applaud you for raising this issue, and I'm excited to see bills like APRA make progress, so that we
have the foundations of a legal system that can address that issue.

All of the witnesses that we have before us today have left AI companies based on concerns about their commitment to safety. You're not alone; OpenAI in particular has experienced a number of high-profile departures, including the head of its superalignment team, Jan Leike, who left to join rival Anthropic. Upon departing, he wrote on X, quote, "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." He also wrote that he, quote, believed "much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, superalignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there." Let me ask all of you, based on your firsthand experiences: would you agree essentially with those points? Let me begin with you, Mr. Saunders, and go to the others if you have responses.

Yes.
Thank you, Senator. Dr. Jan Leike was my manager for a lot of my time at OpenAI, and I really respected his opinions and judgment. I think what he was talking about was a number of issues where OpenAI is not ready to deal with models that carry some significant catastrophic risk, such as high-risk models under the Preparedness Framework: things that could actually start to assist novices in creating biological weapons, or systems that could start conducting unprecedented kinds of cyberattacks. For those kinds of systems, first we're going to need to nail security, so that we make sure those systems aren't stolen, before we even figure out what they can do, and used by people to cause harm. Then we're going to need to figure out how you actually deploy a system that, under some circumstances, could help someone construct a biological weapon, but that lots of people want to use for a bunch of other things that are good. Every AI system today is vulnerable to something called jailbreaking, where people can come up with some way to convince the system to provide advice and assistance on anything they want, no matter what the companies have tried to do so far. So we're going to need solutions to hard problems like these. And then we're going to need some way to deal with models that, again, might be smarter than the people supervising them, and might start to autonomously cause certain kinds of risks. So I think he was speaking to a number of areas where the company was not being rigorous with the systems that we currently have, which can maybe amplify some kinds of problems; but once we reach the point where catastrophic risk is possible, we're really going to need to have our act together.