[Applause] Humanity's fear of technological progress is longstanding. If you were living during the Industrial Revolution, it is very likely that the automation anxiety you would have felt mimics the contemporary headlines we see today. However, past inventions once considered existential threats did not materialize as such. While AI does introduce complex risks, labeling it an existential threat is premature and lacks a solid foundation. Experts disagree about how AI will develop, what the timeline is, and what level of intelligence it can achieve; history, meanwhile, underscores our very limited ability to foresee a technology's exact applications. Our epistemic horizon is incredibly limited: we cannot predict the consequences of our actions in the medium term, let alone over the coming decades or centuries. As Kagan noted, there will always be a very small chance that some unforeseen disastrous or fantastically wonderful thing results from our actions. The proposition mustn't only show that, and I quote, "a threat exists"; that is not the motion. The proposition must show why it is necessarily the case that the disastrous effects are certain, branding AI not only a globally catastrophic risk but an existential threat. This extreme stance is
excessively pessimistic and epistemically indulgent, imposing a substantial burden of proof. The opposition acknowledges the potential for AI's negative effects but insists that a claim as absolute as "an existential threat" requires far more empirical evidence and epistemic certainty than the proposition has provided. To begin, in typical PP fashion, let's deconstruct the key terms in tonight's motion: AI is an existential threat. "Existential" breaks down into physical and metaphysical aspects. First, I will argue that AI doesn't threaten the physical existence of humanity, because we possess both the incentive and the capacity to implement preventive measures through design and regulation. Secondly, I will show that AI doesn't pose an existential threat to the experienced and shared understanding of what defines humanity. Also note that the motion is in the present tense, while the arguments presented by the proposition are founded on future developments of AI: powerful AGI, an intelligence explosion, or superintelligence all revolve around whether AI will become an existential threat, not whether it currently is one. These arguments rely very heavily on hypothetical and abstract scenarios rather than being grounded in empirical forecasts. Nevertheless, for AI to be a current existential threat, the proposition must show, number
one, that AI developments will indeed track these alarming scenarios, and that it is necessarily the case that they will; and, number two, that we are unable to effectively prevent such risks. I will attack this second proposition. Firstly, let's address AI design. AI systems don't have motivations or intent; they operate based on the pre-programmed rules and objectives that we give them and the data inputs that they learn from. These are elements that human developers can control and constrain. For example, misinformation, such a terrible issue: if we aim to ensure accurate and reliable responses from an AI system, we can limit its training data to highly reliable sources, such as peer-reviewed books and academic journals. Developers can design boundaries and safety features to prevent AI from being programmed to cause harm, or from pursuing destructive methods for achieving a beneficial goal. Aligning AI with fundamental human objectives such as equity, fairness, anti-discrimination, and sustainability is crucial to ensure that we're getting what we actually want, not just what we ask for, and active alignment research is already underway at the major companies. Sulton pointed out that the probability of aligning AI 100% and removing all risk is very small, but it's not the
case that we expect zero risk from the other technologies that we use; just one example: nuclear energy. Following Nozick's principle of side constraints, we can impose limits within which AI can perform tasks. A self-driving vehicle programmed to follow traffic rules and only drive on public roads is limited in how it gets you to work quickly. Humans not only control the design of AI; we also provide AI with the means it can use. A self-driving car only functions if we choose to fuel it. Therefore, humans exert significant control over AI by carefully considering the means that we offer it and the system that we choose to introduce it into. Taking this back to first principles: AI lacks consciousness and understanding, and it is therefore a tool that relies on some degree of human direction. If there is an existential risk with AI, it is not the systems themselves but the humans behind them who pose the threat. By regulating and mitigating destructive human actions, AI is not an existential threat. Moving on: there is both an incentive and a capacity for AI regulation. Politicians, lawmakers, and companies worldwide have the incentive to invest in and collaborate on
regulatory systems. Why is that the case? Because of the reach and extent of AI's potential to fundamentally reshape our institutions, our public systems, our industries, and our daily lives. Growing public awareness and growing expert discourse on the topic increase the political salience of AI among electorally significant groups, incentivizing a regulatory response. Substantial attention to AI-related concerns should in fact alleviate our apprehensions. Secondly, there is also a capacity for regulation. It is exclusively up to people to establish the rules for how we wish to use AI and how to let it interact with humanity. Robust government oversight, transparency requirements, liability for AI developers, and interdisciplinary collaboration are required. Similar to international agreements on nuclear non-proliferation and bans on chemical weapons, we can establish conventions to prohibit and heavily regulate autonomous weapon systems. Currently, very few players possess the cutting-edge computing resources, the necessary chips and hardware, and the financial capital to develop and train powerful AI. A significant advantage to regulating only a few players is that it is more feasible: a smaller number of actors facilitates their identification, their monitoring, and the alignment of their interests. The model charted by these first movers in military and
civilian AI regulation determines the incentives that subsequent countries and companies will face in developing AI. We have the means to enforce preventive measures and avoid AI becoming an existential threat. In the final section of my speech, I explore why AI is not an existential threat to the philosophical underpinnings of humanity. I want you to take a second to think about what makes you human. AI doesn't threaten our consciousness, our intentions and will, our moral propensity, or our creativity. AI cannot compete with these distinctively human features, no matter how intelligent it becomes. The experienced and shared understanding of what we consider to be humanity is not under existential threat. Let's delve deeper into creativity, as AI art was mentioned right before. What is creativity? Is it generating something novel by combining existing patterns of information? While AI does satisfy this definition, AI-generated art lacks expression and artistic value; creativity encompasses more than this rudimentary definition. People or processes are creative to the extent that, and because, they produce creative products, and products are creative on account of three conditions: they are novel, they are valuable, and they are created with agency. Generative AI products
are not creative, in the same way that a snowflake is not: it is new, it is unique, it might even have aesthetic value, but it lacks agency in its creation. Art goes beyond aesthetic value; it involves artistic intent, authenticity, the artistic process, and the expression of human emotions and experiences. AI doesn't analyze art and then create its own; it merely identifies patterns and replicates existing styles without actually conceptualizing or contextualizing its creations as art. This means that AI art might sell for a lot of money, but it lacks artistic responsibility; it cannot form genuine artistic expression. The creative process is fundamental to art. It's not just about whether we're using semi-automated processes or even technology as a medium; it's about the absence of an artistic reaction, or even an intent, in the process, and that is what makes art valuable. We value art that responds to something because it connects us to the lived experiences of other people. So AI doesn't threaten creativity, a distinctively human trait, nor the true value of human creative output. I am at my conclusion, so
to conclude: AI does not pose an existential threat to humanity, be it in the literal or the metaphysical sense. Human direction in AI development is paramount, and we possess both the incentive and the capacity to implement preventive measures for AI risk through design and regulation. Furthermore, distinctively human characteristics will continue to define our humanity alongside AI's advancement. All powerful tools carry risks; the goal is not to reach zero risk, but it is unjustified to extrapolate these risks to an extreme. There is a difference between a globally catastrophic risk and an existential threat, and the proposition has failed to demonstrate the necessity of even the former, let alone the latter. Thank you.