The robots are going to take over. That's the fear, isn't it? With the evolution of artificial intelligence moving at an almost incomprehensibly fast pace, it's easy to understand why we get preoccupied with this idea. Everywhere we turn there are headlines about AI stealing human jobs. Goldman Sachs even published a report last year saying that AI could replace the equivalent of 300 million full-time jobs. Generative AI is more accessible than ever, and workers are anxious: a PricewaterhouseCoopers survey from May 2022 found that almost one-third of respondents were worried about their roles being replaced by technology in the next three years. Creatives worldwide fear that art as we know it faces an existential threat with the proliferation of AI, and for the first time we're seriously asking whether human authenticity is a necessary part of the world anymore.

Of course, the worst fear is that artificial intelligence will reach a point of self-improvement so advanced that it becomes uncontrollable. If AI can teach itself and achieve intelligence superior to us mere mortals, what will become of our future? These doomsday scenarios are an important part of the conversation. The truth is, nobody knows what will happen in 10 or 20 years, let alone 10 or 20 minutes. We can try to predict the path AI will take, but two short years ago we were all playing around with the first public release of ChatGPT, completely enthralled by its mere existence, and now it's just a regular part of many people's lives. Besides, we don't need to preoccupy ourselves with being controlled by robots; there's plenty happening right now that should raise some red flags. Generally speaking, we think advanced technology is synonymous with sustainability, but that's not often the
case. There are always trade-offs; the hope is that the technology is beneficial enough to society and the environment that the trade-offs are worth it. It might feel like AI exists out there in the cloud, pinging our computers and phones when we need it, and that's not wrong. But as we all know, the cloud isn't just floating up in the sky. AI's cloud is built of metal and silicon, it's powered by energy, and every AI query that comes through is a cost to the planet.

A team of roughly 1,000 researchers joined together to try and address this growing concern. They created an AI model called BLOOM (the BigScience Large Open-science Open-access Multilingual language model) that emphasizes ethics, transparency and consent. They found that training this comparatively environmentally friendly model used as much energy as 30 homes consume in a year and emitted 20 tons of carbon dioxide. Compared to a behemoth like ChatGPT, BLOOM is small potatoes, so AI researchers assume that bigger models in the GPT family use at least 20 times more energy. The exact number remains a mystery, though, because tech companies aren't required to disclose information on energy consumption. And that's not to mention that the current trend in AI follows the rule of "bigger is better": large language models like ChatGPT and Google's Gemini grew 2,000 times in size over the last five years. With that growth come inevitable, and often undiscussed, environmental impacts. One of these is the amount of energy computers need to process the huge volume of information required to run these AI systems. Most of that energy comes from non-renewable sources, which is only worsening our climate crisis.
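To get a feel for the scale, here is a back-of-envelope sketch in Python. Every figure in it (accelerator count, per-GPU power draw, training duration, grid carbon intensity, household consumption) is an illustrative assumption, not a disclosed number from any AI company:

```python
# Back-of-envelope estimate of the energy and carbon cost of a training run.
# All numbers below are illustrative assumptions, NOT vendor-disclosed figures.

def training_footprint(num_gpus, gpu_watts, hours, grid_kg_co2_per_kwh):
    """Return (energy_kwh, co2_tonnes) for a hypothetical training run."""
    energy_kwh = num_gpus * gpu_watts * hours / 1000.0   # watt-hours -> kWh
    co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000.0  # kg -> tonnes
    return energy_kwh, co2_tonnes

# Hypothetical run: 1,000 accelerators at 400 W each for 30 days,
# on a grid emitting 0.4 kg of CO2 per kWh.
energy, co2 = training_footprint(1000, 400, 30 * 24, 0.4)

# Compare with household usage (~10,000 kWh per home per year, a rough
# illustrative figure).
homes_per_year = energy / 10_000

print(f"{energy:,.0f} kWh, {co2:,.1f} t CO2, ~{homes_per_year:.0f} homes/year")
```

The point of the sketch is that even modest assumptions multiply out to household-scale energy budgets; real accounting tools measure consumption directly rather than estimating it this way.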
If you want to do something about the climate crisis, then check out the sponsor of today's episode, SolarSlice. SolarSlice is a startup that lets you fund the construction of large-scale solar farms, accelerating the transition to clean energy. All you need to do is sponsor a slice of their large-scale solar farm (a "solar slice"), which adds 50 W of solar to the grid and reduces harmful emissions. To measure just how much impact you're making, their app lets you track real-time data on your slice's energy production and carbon savings. As your slices generate clean energy, you earn eco points, which you can then use to buy more slices, plant trees, or fund other meaningful climate-friendly projects. To make even more impact, you can share your progress with others, create group impact goals with friends, or send solar slices to your eco-conscious friends as gifts. To learn more, visit solarslice.com, where you'll find a link to their Kickstarter campaign, which will help fund the construction of their first solar farm and the development of their app.

Back to our story. On the other hand, the growing copyright issues surrounding how these AI models are trained have been discussed extensively. Simply stated, copyright law
protects intellectual property and content from being used or sold without permission from the copyright holder. Until recently, the implications were relatively easy to define, and to prosecute when necessary. With AI, it's a different story. Recently, OpenAI was called out for using YouTube videos to train its models. These large language models need massive amounts of data to work effectively. Yes, it's important that they can answer simple questions, like what temperature to cook chicken at, but perhaps more importantly, they need to be able to generate coherent, human-like sentences. How do they learn to talk like a human? From other humans, of course. But is it ethical, or even legal, for a company like OpenAI to scrape online sources like YouTube that might not approve of such scraping?

OpenAI reportedly used its audio transcription model, Whisper, in an attempt to get over the hump of hazy AI copyright law. The model transcribed YouTube videos into plain-text documents, creating the data sources needed to train its AI chatbots. Whisper transcribed over a million hours of YouTube videos uploaded by millions of users, some of whom derive part or all of their income from creating content on the platform. OpenAI knew this was legally questionable but believed it could claim fair use of online content. OpenAI president Greg Brockman was hands-on in collecting videos used in the training, and the company maintains that it uses publicly available data to train its AI models. The scraping violated YouTube's rules, which ban the use of its content for applications independent of the site. Interestingly, Google, which owns YouTube, knew about OpenAI's actions but didn't report them, allegedly because it was doing some content scraping of its own for its Gemini AI
model. YouTube isn't the only company pushing back against AI training. In 2023, the New York Times accused OpenAI of stealing intellectual property and sued both it and Microsoft, OpenAI's financial backer, for copyright infringement. With this move, the Times became the first major American media organization to sue an artificial intelligence company over its content being used to train chatbots. The suit called for companies like OpenAI to destroy chatbot models and training data that include copyrighted New York Times material. It's the first real test of the legal issues around generative AI technology and could have major implications for training large language models. While the Times understandably has issues with its catalog of some 13 million articles being used without permission, News Corp, which owns the New York Post and the Wall Street Journal, has taken the polar opposite approach. As of May 2024, the company has a multi-year licensing deal in place, reportedly worth $250 million, that grants OpenAI access to much of its content. OpenAI has also inked deals with Vox Media and The Atlantic, perhaps born of the harsh reality AI companies will be facing moving forward: all of
the major players creating these massive language-model AI programs are starting to hit the limit of the data available to train them. Google now has a deal with Reddit to license content from the website to train Gemini. Meta even considered buying the book publisher Simon & Schuster, and its 100 years of material, outright so it could get access to all of its content. While these companies fight it out over who gets access to what, there are real implications for the people who create this content. Visual artists, musicians and writers are watching their work show up in AI-generated texts and images. This happens when an AI is trained on certain texts and images and learns to identify and replicate patterns in the data. For a program meant to generate music, art or text, the data it trains on has to be created by humans. Notable authors like Jonathan Franzen, George R.R. Martin and John Grisham filed a lawsuit after learning that AI had absorbed tens of thousands of books. Actress and comedian Sarah Silverman sued Meta and OpenAI for using her memoir as a training text.
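The "learning to replicate patterns" idea can be illustrated with a deliberately tiny sketch: a bigram Markov chain that ingests human-written text and can only ever recombine word pairs it has actually seen. Real language models are vastly more sophisticated, but the dependence on human-created training data is the same:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """Walk the bigram table; output can only recombine observed patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog"
model = train_bigrams(corpus)
print(generate(model, "the", 5))
```

Note that every word the generator can emit came from a human-written corpus; there is no output without that input, which is exactly why whose text gets ingested matters.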
Just as with chatbots, it's difficult to identify what art has been used to train these models, because companies like OpenAI, which owns the popular image generator DALL-E, don't disclose their data sets. Others, like Stability AI, which owns the generative AI model Stable Diffusion, are clear about which data they're using, but they are still taking artists' work without permission or payment. Legal recourse for artists is difficult. Experts are of two minds: some feel that this type of AI training infringes on copyright law, while others feel it's still above board and that the lawsuits will fail. The truth is that nobody knows, because we're in uncharted territory that once seemed like merely the stuff of science-fiction movies.

In the 2013 Spike Jonze movie Her, Joaquin Phoenix's character falls in love with an AI virtual assistant voiced by Scarlett Johansson. Eleven years later, life is imitating art. After OpenAI announced a new personal assistant called Sky, it was easy to notice that its voice sounded a lot like Johansson's. Sam Altman, the company's CEO, has noted that Her is one of his favorite movies. It turns out he'd been courting Johansson to voice the new AI assistant, but she declined the
offer. After hearing Sky's voice, Johansson threatened a lawsuit against OpenAI. For actors, politicians, athletes, or anyone else in the public eye, it's easy to see how AI could completely upend someone's life if their image, voice or likeness is replicated, and that upending is already happening right now.

While it's clear that AI companies are knowingly pushing the limits of copyright law, they're also inadvertently causing even more harm. Whether or not the companies intend it, AI models are inevitably trained on the discriminatory data littered across the internet, and they encode patterns and beliefs representing racism, sexism and other prejudices. If these biases are deployed in settings like law enforcement, they can lead to tangible damage to innocent people. For example, if AI models are shown more images of white faces than darker skin tones, they will have more trouble identifying the features of dark-skinned people. If police use AI to try and catch criminals, the odds are higher that their systems will mistakenly identify dark-skinned individuals. And if AI is used to generate a forensic sketch, the model will take all of the biases it's been fed and spit them back out in the sketch: prompts like "gang member" or "terrorist" will inevitably whip up a stereotype that could be totally off the mark. The implications in law enforcement are easy to see, but they reach much further. In healthcare, computer-aided diagnosis systems have returned lower-accuracy results for black patients than for white patients. In job-applicant tracking, Amazon stopped using a hiring algorithm after it saw that the algorithm favored words like "executed" and "captured," which were more often found on men's résumés. AI biases perpetuate human societal biases and can come from historical or current social inequality.
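One reason these failures go unnoticed is that an overall accuracy number can look excellent while a minority group fares terribly. Here is a minimal sketch; the data set and the trivial "model" behind it are both made up for illustration:

```python
# Toy illustration of how aggregate accuracy hides group-level failure.
# 95 samples from group A, 5 from group B; the hypothetical model happens
# to get every group-A sample right and every group-B sample wrong.
samples = [("A", True)] * 95 + [("B", False)] * 5

def accuracy(rows):
    """Fraction of (group, was_correct) rows the model got right."""
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(samples)
group_a = accuracy([s for s in samples if s[0] == "A"])
group_b = accuracy([s for s in samples if s[0] == "B"])

print(f"overall {overall:.0%}, group A {group_a:.0%}, group B {group_b:.0%}")
# The single aggregate metric (95%) completely masks the 0% for group B,
# which is why bias audits report per-group error rates.
```

This is why audits of deployed systems break error rates out by demographic group rather than trusting one headline accuracy figure.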
If you ask an AI to generate an image of a scientist, it will most likely show a middle-aged white man with glasses. What does that say to young girls of color who want to be scientists? These missteps foster mistrust among marginalized groups and could lead to slower adoption of some AI technology.

The ethical issues aren't solely embedded in the training and use of these models; they're happening out here in the physical world as well. Content moderation is a famously difficult job. People sift through some of the worst images, descriptions and sounds on social media platforms, online forums and retail sites, ensuring that disturbing scenes don't wind up on our screens or in our ears. AI might be getting smart, but it doesn't self-moderate. Time magazine did a deep dive into a company called Sama in January 2023. Sama provided OpenAI with laborers tasked with combing through some of the most extremist, sexual and violent content on the internet to ensure it didn't end up in the AI training regimen. Former Sama employees said they suffered post-traumatic stress disorder, both on the job and afterward, from sifting through these horrific things. To make matters worse, the employees, mostly located in Kenya, were paid less than $2 an hour. The company claimed it was lifting people out of poverty, but the Time article described claims of the work being torture: individuals regularly had to work past their assigned hours, and despite some wellness services offered to them, many experienced irreversible emotional effects. The narrative that AI can eliminate workers may be true, but the workers it takes to make AI possible are still suffering.

So what's the solution? Is there one? For artists, a company called Spawning created a tool that can help them better
understand and control which art ends up in training databases. The company Stability AI does train its models on existing text and images available online, but it's looking at ways to ensure that creatives are paid royalties for the use of their work. Another tool, called CodeCarbon, has emerged: it runs in parallel to AI training and measures emissions, which might help users make informed choices about which AI model to use based on how sustainable its operations are. These are important and worthy starts, but no single tool can solve such complex issues. By creating tools that can measure AI's social, legal and environmental impacts, we can start to understand how bad these problems are. That, hopefully, can lead to creating guardrails and advising legislators on how to develop new regulations for artificial intelligence.

It might feel like AI is moving quickly, and that's because it is. The existential worry about robots taking over is a fun and scary one to entertain, but we have real issues centered around our potential digital overlords happening as we speak. It's not too late to find ways to create an artificially intelligent world that we all want to live in, but users and companies alike have to decide that path together.