So one of the first stories we had that was absolutely insane was that Sora got leaked to the public. Of course, it wasn't the actual tool that got leaked; it was an API key wrapped in a Hugging Face app. This is what a lot of people saw a few days ago, because essentially what happened was a group of angry artists decided that OpenAI's alpha, or beta, program was something they didn't want. You can see right here, and I know some of you have probably seen this already, but basically they said they received access to Sora with the promise of being early testers, and then declared "we are not your free bug testers, PR puppets, training data, validation tokens," and that they do all this work for free. They were essentially accusing OpenAI of exploiting artists for unpaid research and development and PR. But a lot of people I know just didn't understand this, because these artists were invited to the program, and I know tons of artists who would have loved to be selected, so I'm not sure why the people who actually got in decided to get upset. They were upset at things like every output needing to be approved by the OpenAI team before sharing, but if you had a company making generative AI music or generative AI images, surely you wouldn't want terrible outputs going public where people could say your software is bad, especially in such a competitive AI space. The whole thing just didn't make sense to me; I guess I could understand it maybe 10%.

The actual big news here wasn't that OpenAI is some evil company. The big thing was that there are some genuinely interesting details about the Sora model itself, which are more interesting than the artist drama. The model has different styles: you can see here a style called "natural," and there's probably some inpainting too, since it says "inpaint item." Most interestingly, the model we actually got was a turbo model, which means it's quite likely that when Sora is released there will largely be a turbo model, plus probably a more expensive model that can produce certain videos. One of the craziest things we also got from this was that Sora is a lot better than the other models that currently exist, which is quite surprising, because a lot of people would say that Kling and the other models are better now. I think it was Ethan Mollick on Twitter who said he's tested tons and tons of models and this one beats them by a decent margin. That was really surprising as well, because a lot of people were saying OpenAI is levels behind now and has lost this race, but it's clear they're still in the lead.

Another thing you can see right here is this "video gen" part. If this endpoint is typed "video gen," it's quite likely there's image generation too, which means it's possible that alongside Sora we also get an image generation model. I don't know if OpenAI will actually release that, because OpenAI is such a big company now that every release is under scrutiny; another company could do the same thing and not face nearly as much scrutiny, because OpenAI just has such huge brand recognition. If you remember the first time they showed Sora, a lot of people asked why anybody needs a generative AI tool that rips the artistic creativity out of humanity, so I'm wondering, if they release this image generation tool, whether
they'll face the same kind of backlash. It will be interesting to see what happens on that front.

Now, another thing that most people missed was that Google actually released a new Google Labs experiment called GenChess: turn your ideas into playable art pieces using Google's Imagen 3 model, create and play today. I really do wish I'd been a more creative individual when recording this demonstration, but basically you can play chess and generate the chess pieces from anything you want. Like I said, I wish I'd been a bit more creative, because what I decided to make was a Minecraft-inspired chess set. I don't know why I thought of Minecraft immediately, I'm guessing because I saw AI-generated Minecraft, but I was thinking about video games, and then I typed in Fortnite. I know this is probably the worst example I could have used; I could have done medieval, or ceramic, or wooden, there are a billion different things. You can see right here that it's able to generate these chess pieces, and I guess it's just an interesting way to use generative AI, because Google is the company that created NotebookLM, and they're really ramping up their efforts to wrap generative AI in different ways. For me this was really cool; you could do race-car-inspired chess pieces, there are a billion different things you could try. Google Labs, I think, has the most interesting AI products, so it's something I'm always recommending to viewers, because if you were in the Google Labs program, I'm pretty sure you would have had access to
NotebookLM before a lot of people did. So it's really interesting considering where things are going.

Now let's get on to probably one of the biggest announcements this week, and I can't believe I haven't covered it yet, because it's not just a pretty big announcement, it's a really big one: Anthropic released MCP, the Model Context Protocol, a framework that lets Claude run servers, giving it superpowers and effectively turning the Claude app into an API, and they created some servers they think you'll love. Claude can read, create, and edit files and folders locally, which is pretty insane. You can see right here that it's able to create libraries and directory structures, and it does all of that for you locally, so this opens up a huge range of usability in terms of what users can do with these models. I'm someone who's slowly diving into this space because I want to get the most out of what these models can do, and one thing I do know is that every time features like this are incrementally released, we get a ton of new use cases that most people aren't aware of yet. In this example you can see someone building something with this demo: you can watch Claude create all of these files locally and run them, and this is where the user managed to create a Flask application locally. That was really crazy; the user runs it, and they've got a paint application built locally, which is absolutely insane. This makes things a lot easier on the software development side, because when Claude can talk to a user, guide them step by step, and navigate them through everything they want to do, it opens up so many possibilities. I know this is literally going to help so many people, because so many times when I was doing tutorials on how to do things locally, a lot of people got confused, and I get it, for some people it's easy and for others it's hard. This is definitely something I'll be making tutorials on so individuals can get the most out of it, because it's going to be really effective.
It's definitely something you guys should explore, but I will also be releasing a tutorial in around 3 or 4 days that dives into absolutely everything for beginners, so you can benefit from this the most.

Then we got Lord Knight of Weymouth speaking in the House of Lords about artificial superintelligence, and after this I'm going to talk quickly about ASI, because there's an ongoing Twitter discussion that I think is rather important. He said AI could pose an extinction risk to humanity, as recognized by world leaders, by AI scientists, and by leading AI company CEOs themselves; AI systems' capabilities are growing rapidly, and superintelligent AI systems with intellectual capabilities beyond those of humans would present far greater risks than any existing AI systems. So I'm going to go off script here for a quick bit, because I want to discuss this AI safety thing. This isn't to create any drama; I genuinely want to understand where people's mindset is when it comes to artificial superintelligence, and I'd like to know what the broader community thinks about AI safety. Basically, Chubby tweeted,
"I reached a point where I think we should just always do the exact opposite of what Geoffrey Hinton tells us. It's a shame, I have so much respect for the work he's done." Then of course we got David calling him a "useful idiot," yada yada, and people saying he's focused on regulatory capture. If you don't know who Geoffrey Hinton is, he's basically the godfather of AI, a well-respected individual in the field, and he recently got the Nobel Prize for his work on neural networks, or something along those lines. The point is that a lot of people are stating that he's wrong when he says that open-sourcing big models is like being able to buy nuclear weapons at RadioShack, and I genuinely agree with him. So I'm going to put this question out to you guys: why do we need open models that are able to do certain things? Let's say future models are capable of mass-scale harm, for example bioweapons and such; why would we open source that in the first place? What do we gain as a country, or as a society, if we open source that? I genuinely cannot find one argument for open-sourcing big frontier models. I'm not saying the open source ecosystem needs to be terrible; all I'm saying is that if we understand where this technology is going, and we're stating that artificial superintelligence is a real thing, and we're going to have these systems that are basically able to do anything, things that look like magic, why would we open source models that are able to cause catastrophic harm? That's the only question. I'm not saying we can't have an open source ecosystem, or that we shouldn't have open and transparent models; I'm just asking why the dangerous parts of these models would need to be open sourced, considering those parts can only be used for adverse things. That's just my question to you guys.

Now, coming back to this, I actually wanted to talk about the AI timelines. I really do wish I had
a better, higher-quality image here, but I'm going to show you this because something actually crazy happened. You can see here that Sam Altman has AGI at 2025, Elon Musk at 2026, Dario Amodei at 2026, Ray Kurzweil at 2029, Geoffrey Hinton at 2029, and Demis Hassabis at 2030. This is important because these are key individuals when it comes to shaping the AI space, so you could argue they have a lot more information than the rest of us. Like I said, apologies for the image quality, but one of these people isn't here, and the person I'm referring to is Yann LeCun. The reason I'm talking about Yann LeCun is that he's a notable AI skeptic; he's very skeptical of the current paradigm we're in with LLMs and so on, and he basically says everyone's running off the wrong paradigm. But recently, maybe because they've had a breakthrough, he actually changed his timeline; he changed what he initially believed about when this would happen. Take a look at this, because it's crazy: when one of the most prominent skeptics of the current paradigm states that his timeline is now basically the same as Sam Altman's, or any of these industry leaders', it suggests we may well get advanced AI sooner than we think. Here's the clip. LeCun: the future is that if we succeed in this plan, which may succeed within the next five or ten years, we'll have systems that, as time goes by, we can build up to become as intelligent as humans, perhaps. Interviewer: so, reach human-level intelligence within a decade? LeCun: that may be optimistic; five to ten years would be if everything goes great, all the plans we've been making succeed, and we don't encounter unexpected obstacles, but that is almost certainly not going to happen. Interviewer: you don't like that, right? AGI and human-level intelligence, you think, is far away or unlikely? LeCun: no, I don't think it's that far away, and I don't think my opinion about how far away it is differs much from what you'd hear from Sam Altman or Demis. It's quite possibly within a decade, but it's not going to happen next year, and it's not going to happen in two years; it's going to take longer. You don't want to extrapolate the capabilities of LLMs and say we're just going to scale up LLMs, train them with
bigger computers and more data, and human-level intelligence is going to emerge. It's not going to work that way. We're going to have to have those new architectures, those JEPA systems that learn from the real world and can plan hierarchically. So yeah, that clip was a really fascinating one, because it was interesting to see Yann LeCun finally talk about his predictions for AI in terms of his timelines.

We also got OuteTTS, an experimental text-to-speech model that uses a pure language modeling approach to generate speech without changing the foundational model itself. Here's a sample of it speaking: sometimes I think about how fast everything changes, like one day you're using floppy disks and then the next everything is in the cloud, it's wild, right? I mean, how do we even keep up with all this innovation? So what was the inspiration behind your latest project? Was there a specific moment where you were like, yeah, this is it, or did it just kind of come together naturally over time? So yeah, it's really interesting to see exactly what the
open-source space is doing, and these kinds of models you can run locally are going to change the space in terms of not having to rely on external providers. I've got to be honest: with these really good models that keep appearing and that you can run locally, I'm actually wondering at what point a model becomes so good that we can literally run everything off one computer. Recently we've seen things like Llama 3 and smaller models get more compact and smarter, but at what point do we reach the stage where, say, ChatGPT and tools like it just ship a default version that's completely offline, that we can use any time we want? Of course there are pros and cons to each; I think we'll still have a version that can search the internet, upload files, run on servers and all that, but I think there's going to be a large group of users who, once these models get good enough to be really effective on device, will just download the whole system and use it completely offline.

Now, Rufin VanRullen is actually building an implementation of the global workspace theory of consciousness in an AI system, and says that machines could be conscious in 5 years, and it could very well happen next year. This is super interesting, because there's actually a
company that I recently saw whose entire focus is on making machines conscious, which is completely crazy, because so many people are working on things where they say they're going to "wake up Claude" and all this interesting stuff. I think AI consciousness is going to be one of the biggest topics within the next 5 years as these models get increasingly complex. Here's the clip: the hope is that in 5 years we would have a full working implementation of the global workspace theory in a non-trivial situation, so not exactly like what you see here, and maybe five years from now we would be able to assess whether this particular theory, and the particular way we have implemented it, gives rise to emergent properties and possibly consciousness, even though we're not actually trying to design it; we're aware that it might happen. So five years is the timescale I could give you for this particular project. Now, more generally speaking, if you want to put a timescale on the potential emergence of consciousness in AI systems, I think the range is much wider than this. It could very well happen next year, or in a couple of years with the next installment, GPT-5; I wasn't really kidding, I think there's a real possibility there. Some people even think the current version is already somewhat conscious. And it could take 20 years or more, because it could very well be that all of our theories of consciousness are wrong and we just need some new idea.

Then we got another clip: sometime in the next century or so, there
could be a five-year period when the economy goes from doubling roughly every 15 years, like it does now, to doubling every month or faster. That's on the table as something that could happen, so that's the kind of change I think is plausible, but it would be a worldwide change: the entire world economy would start to double roughly every month. If that's what you mean by the Singularity, I'd say that's not crazy. That's different from an AI in a basement suddenly, over a weekend, becoming so powerful that it takes over the world; that's an entirely different kind of Singularity scenario, and it's the one I don't find very plausible. But I do think it's plausible that sometime in the next century we hit this transition where the economy speeds up to double roughly every month or so, and AI is very plausibly the kind of thing that would cause a transition like that. That transition would probably go along with most humans losing their jobs, and we should prepare for that. This is the one risk people have most consistently talked about, machines getting better, for centuries.
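To put those two doubling times side by side, here's a quick bit of arithmetic. This is my own illustration; the 15-year and one-month figures are the ones from the clip:

```python
# Compare annual growth under a 15-year doubling time (roughly today's
# world economy) versus a one-month doubling time (the transition
# described in the clip).

def annual_growth_factor(doubling_time_years: float) -> float:
    """How much the economy multiplies in one year."""
    return 2 ** (1 / doubling_time_years)

today = annual_growth_factor(15)                # ~1.047, about 4.7% per year
post_transition = annual_growth_factor(1 / 12)  # 2**12 = 4096x per year

print(f"today: {today:.3f}x per year")
print(f"post-transition: {post_transition:,.0f}x per year")
```

So "doubling every month" isn't a modestly faster economy; it's roughly a four-thousand-fold multiplication every year, which is why it gets treated as a discontinuity worth insuring against.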
There's a way we could prepare for that, and it's a simple insurance problem, even simpler than most insurance. With most insurance, if you want to get your house insured against fire, they have to do what's called underwriting: they come and look at your house and estimate the risk that your house will have a fire compared to somebody else's house, because otherwise they're taking on too much risk. But here we're looking at an event that would be common worldwide, so I would set up a trigger, something like the labor force participation rate, i.e. the percentage of adults who work, falling from, say, above 60% to below 20% in, say, a 10-year period. If that happens, it would be a signature of robots taking most jobs in a short time. That would be the triggering event, and I would basically just have assets that pay out in that event, a bet, basically. This is one of the most interesting things I've seen.
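That trigger is concrete enough to write down. Here's a small sketch; the function and the data series are hypothetical, purely to illustrate the rule as stated, participation falling from above 60% to below 20% inside a 10-year window:

```python
# Sketch of the proposed triggering event: labor-force participation
# falling from above 60% to below 20% within a 10-year window would
# signal that robots took most jobs in a short time.
# (Illustrative only; the data series below is made up.)

def trigger_fired(series: dict[int, float], window: int = 10,
                  high: float = 60.0, low: float = 20.0) -> bool:
    years = sorted(series)
    for i, y0 in enumerate(years):
        if series[y0] <= high:   # need a starting point above 60%
            continue
        for y1 in years[i + 1:]:
            if y1 - y0 > window:  # outside the 10-year window
                break
            if series[y1] < low:  # collapsed below 20% in time
                return True
    return False

# Hypothetical series: participation collapses between 2031 and 2038.
lfp = {2025: 62.0, 2031: 61.0, 2035: 40.0, 2038: 18.0}
print(trigger_fired(lfp))  # True: 61% in 2031 to 18% in 2038, within 10 years
```

An insurance contract would pay out only if the trigger fires, which is why no per-customer underwriting is needed: the event is worldwide and publicly measurable.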
Robin Hanson is essentially saying that sometime in the next century, AI could cause the economy to speed up so quickly that most humans lose their jobs, and that we should insure against that scenario. So if you're a budding entrepreneur thinking about the kinds of businesses that will exist in the future, that might be the kind of business you want to build. I think this kind of thing is really smart, because you're basically placing a bet on AI getting a lot smarter, which is a pretty sure bet. And AI taking a lot of people's jobs is not new news; it's something we've been talking about on the channel for a long time, and I do think it makes sense.

Recently there has been a slew of tweets showing just how far the technology has come, to where individuals using it in their respective fields have seen the AI cross certain thresholds. For example, Ethan Mollick says: "If progress continues, the ability to figure out the AI frontier will slip away from most of us. For example, I'm not a good enough musician or critical listener to know if Suno V4 is actually as good as it sounds to me. I need to defer to experts." He's basically saying that, the way these systems are going, an average person won't be able to tell whether a model has passed the threshold for a specific level in a given field, because they're not an expert in that field, and that means that as these systems get better and better, unless the experts are looking at them, we won't know how good they are. Interestingly, we had a response from someone called B Camp: "Professional here. I've gotten over 100 million streams, a gold record, 10 years of teaching songwriting at Berklee College of Music, and musically it's better than 80% of my students, but my best students beat it by miles. The industry's best also wins. And it's ready to eat service music, like advertising and library music." So right here we can see that, across the board, it is really changing the
game in terms of where this technology is heading, and I can only imagine what it will be like in 10 years if it's already this good. We can also see right here that someone posted this, and it garnered a decent amount of attention: over 500,000 views, 3,000 likes, and over 2,000 bookmarks. He said: "Claude 3.5 Sonnet, with about 20 hours of customization work, is better than every junior and most mid-level media buyers and strategists I have worked with in the past 5 years, and I assume it will be better than 80% of senior people. The AI isn't coming for advertising; it's already here." I think this is a really interesting comment, because I've used certain models and seen them keep getting better as they're trained on newer data, and I've seen it on certain tasks: sometimes I'll be writing copy for a website, or filling out the description for one of my videos, or doing some general online business work, and Claude will really surprise me with how effective it is. It won't need much prompting, and it just outputs a response that is genuinely effective, to the point where I wouldn't need to hire someone. This is happening slowly, and a lot of that is because what we have here are qualitative benchmarks: it isn't like a math benchmark where you can test across a set range of problems. Because the kinds of problems this solves are subjective in nature, it isn't going to stun people all at once; it's just going to slowly get better over time while people rely less on actual humans for this work.

This is also where we get an interesting prediction from someone who works at OpenAI, Jason Wei. He says: prediction: within the next year, there will be a pretty sharp transition of focus in AI
from general user adoption to the ability to accelerate science and engineering. For the past 2 years it has been about user base and general adoption across the public, of course with ChatGPT and everything else, and this is very natural because user growth is a critical part of any business model. But at this point there is widespread accessibility of LLMs, and for most queries from the average person on Earth, many LLMs can answer pretty well. He's basically saying that if you take GPT-4, Llama, Mistral, all those models, and give them to the average person, they probably won't notice the difference. But he says that in the upcoming five years, the focal point will be the ability for AI to accelerate engineering and scientific research, which is the engine of progress in technology, and at the frontier of innovation in any field, by definition, there will be many open questions and a lot of headroom for better AI to make a difference. The stakes will be very high, because progress compounds, and also because AI accelerating AI research is itself a strong positive feedback loop. The other way of saying this, he states, is that there is somewhat limited headroom for improving the average user query, but massive headroom for improving the experience on the 1% of queries that would accelerate technological improvement, as well as on queries that people would want to ask the model but currently don't because the models aren't smart enough to answer; AI research tends to improve where there is great headroom, and in scientific innovation there could be substantial upside. So basically: we've pretty much maxed out what these models can do for the average person, which means increasing the models' strength in average-user areas won't be the industry's main focus. What they'll focus on is making models smart enough to solve, say, an engineering problem, or to advance scientific research, because if it actually makes a difference and starts accelerating that kind of research, that's where they'll place their attention. In that kind of application there's so much room for improvement that if they can attack that market, they can gain a lot of market share and actually accelerate scientific discovery and engineering. That's where they're really headed, and it makes sense, because the current models are already good enough to do basically whatever most people want: if
you want something like, say, a voice bot that can talk with you in realistic ways, OpenAI already has Advanced Voice Mode; you want a chatbot, you've got chatbots; if you want a virtual person that can look at you in a Zoom call, that technology is there. All of those categories are going to get maxed out completely next year, so the next really interesting area is, of course, scientific innovation.

Now, Elon Musk also said something really interesting. He said that Optimus is a very sophisticated robot, and it will be immensely difficult to get the cost to $20,000, not easy at all, but it will eventually happen, and production volume needs to be above 1 million bots per year for that. Price will be driven by demand, but eventually the price for the Optimus Tesla bot will get to $20,000. So it's going to be interesting to see how much these robots manage to drop in price. I'm actually really intrigued, because recently, I'm not sure if you saw the video on my channel where the Tesla bot did something really incredible: I was wowed and stunned by its ability to catch a tennis ball out of midair, twice, with its new hand. So I'm wondering, is that kind of technology going to be under $20,000? Elon Musk says it has more actuators than a car, so getting it that cheap won't be easy, but I'd be really intrigued to see exactly where the price ends up.
Then, of course, we had Nvidia producing the world's most flexible sound machine: you can use text and audio inputs to generate any combination of music, voices, and sounds. This is from Nvidia, and they've just completely done it again. Here's their pitch: Fugatto is the latest generative AI breakthrough from Nvidia. This new model allows you to create sounds, speech, and music from text and audio inputs. You can guide Fugatto to create unexpected sound effects, where familiar sounds take on surprising new qualities and evoke new experiences, or direct immersive, shifting soundscapes for film or audio productions. Instructing Fugatto to extract audio elements from a sound clip, such as isolating a voice track in a piece of music, is just as easy. Fugatto also allows you to generate new speech samples ("kids are talking by the door"), and if you want a different delivery, Fugatto can do that too. It also lets musicians experiment with existing audio by adding new instruments, or completely changing the style of a melody they wrote. You can dream up unusual instrument combinations, or explore entirely new realms, producing sounds that bring creative concepts to life. Fugatto is a groundbreaking foundation model that gives you sonic superpowers, opening up new possibilities for creativity and production.

Then something really cool: we got an updated version of Luma's Dream Machine. Version 1.5 is a new model that is really effective, and they produced a really stunning ad showing that if you want to create anything, you're able to do so with
their amazing tool. Once again, I think this is just going to increase creativity. I've used many different video models, but I don't know what it is about Luma Labs' AI: their user interface is just really easy to use, even for those who haven't done any video editing before. It's the tool I'd recommend to anyone who's just starting out. Of course, if you're someone who's advanced, you can use tools like Runway, but for beginners who just want to play around with video, Luma Labs is fast, it's easy, it gives you all the camera angles, and it's really effective, so it's the tool I'd recommend if you want to try Dream Machine.

Now, in terms of other AI news, this is something from Jonathan Ross. He's at Groq, and if you don't know what Groq is, it's the company trying to give LLMs incredible amounts of inference speed. His goal now is to
get Groq to 25 million tokens per second by the end of the year. Now, I don't even know if that's possible, and I don't want to be the person doubting what anyone can do in AI, but that's an incredible target. I genuinely don't know if it's going to happen, only because it's such an incredible feat, and they're already at an incredible speed right now. If you take a look at what they're able to do, I'm going to show you in this quick demo what you can do with Groq with a quick voice-activated request. This is what the future of LLMs is going to be like: I know things aren't slow right now, but you haven't really understood how fast things are going to get, which is why this kind of hardware is just going to change the game. Here's the demo: hi, I'm going to Atlanta for Supercompute this year. I'm going to be there an extra day. Can you come
up with an itinerary? Can you put that in a table? Can you add a duration column? Can you move the duration to after the time? Remove the end time. Turn duration into minutes, and move the stop to the far left. That's great, but you know what, I changed my mind: let's go to New York. And let's go back to Atlanta. The reason I think this is pretty incredible is that we're moving toward a paradigm where these models think, and that thinking burns a lot of tokens; with Groq's hardware we'll be able to burn through all those tokens really quickly, which means overall we get systems that are much faster, which leads to more efficient systems. You can see he explains the number: the reason for 25 million is that if we do 25 million tokens per second, we're going to have as much compute capacity as a hyperscaler had at the start of this year, and from that point we'll just ramp.
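For a sense of scale, here's some back-of-envelope arithmetic on that 25-million-tokens-per-second target. The response length is my own illustrative assumption; nothing here is an official Groq figure:

```python
# What 25M tokens/second of aggregate inference capacity would mean.

CAPACITY_TPS = 25_000_000      # Ross's stated end-of-year target
TOKENS_PER_RESPONSE = 500      # assumed average chat response length

responses_per_second = CAPACITY_TPS / TOKENS_PER_RESPONSE
tokens_per_day = CAPACITY_TPS * 86_400   # seconds in a day

print(f"{responses_per_second:,.0f} full responses every second")
print(f"{tokens_per_day / 1e12:.2f} trillion tokens per day")
```

That kind of headroom matters most for the thinking-style models mentioned above, where a single answer can quietly burn through tens of thousands of reasoning tokens.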
That's what he expects: by the start of next year they'll be one of the most significant players in the space, and by the year after they'll provide more than half of the world's generative AI compute, which is insane. So I think that's incredible, because of course, if you have something that is that fast, it's going to be absolutely