Here's Logan Kilpatrick, a lead at Google AI, saying "straight shot to ASI [artificial superintelligence] is looking more and more probable by the month. This is what Ilya saw." So again, he's the lead product for Google AI Studio, working on the Gemini API and AGI.

As you recall, OpenAI co-founder Ilya Sutskever left and created his new startup, SSI (Safe Superintelligence), and quickly raised $1 billion. I think it's valued at over $5 billion now, probably more at this point; as of the latest update it was $5 billion. Logan continues: Ilya founded SSI with the plan to do a straight shot to artificial superintelligence. No intermediate products, no intermediate model releases.

Now, this stirred up quite a few people. It seemed strange. Really? There's nothing between where we are now and superintelligence? There's just a straight shot there? How is that possible? (AI rolls around only once; subscribe.) This is from their website, ssi.inc: "We started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence."

So Logan continues: many people, himself included, saw this as unlikely to work, since if you get the flywheels spinning on models and products, you can build a real moat. And certainly I think most people just intuitively assumed you need some sort of incremental, iterative approach: you improve a bit here, a bit there. How is it a straight shot to artificial superintelligence if we're still debating whether we've even reached artificial general intelligence? We haven't really crossed that bar. I think we're right around AGI now, but it's still kind of strange to talk about artificial superintelligence.

But Logan continues and actually explains why he thinks this is the case. He's saying: however, the success of scaling test-time compute, which Ilya likely saw early signs of, is a good indication that this direct path of just continuing to scale up might actually work. (I'll show a tiny code sketch of what "scaling test-time compute" means in a moment.) He's saying we are going to get to AGI, but unlike the consensus from four years ago that it would be this inflection-point moment in history, it's just going to look a lot like a product release, with many iterations and similar options on the market within a short period of time, which, for what it's worth, is likely the best outcome for humanity. So he's personally happy about this.

All right, so let's unpack that a little bit. What is he talking about? And keep in mind, he's not the only one; there are a number of people talking about this idea of superintelligence coming sooner rather than later. Ilya, of course, believes it's a straight shot there. Here's an article out of Forbes about Sam Altman saying superintelligence is coming. That article talks in part about Sam Altman's blog post called "The Intelligence Age"; when they mention the "September manifesto" in which he wrote about the coming change, that's the post they're referring to. So the question is: how does superintelligence emerge? Sam is saying you have to look at the rate of scientific progress and how these advances will compound over the next few years. And of course, this is where he made that big statement that got a lot of people curious: he's saying it is possible that we will have superintelligence in a few thousand days.

All right, so that's Sam Altman saying it. We also have Ilya Sutskever believing it and betting a lot of money on it, and now somebody at Google, a lead working on the Gemini API, saying: yep, straight shot to superintelligence from here.
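Since "scaling test-time compute" is carrying a lot of weight in Logan's argument, here's the sketch I promised. It's a minimal illustration of one well-known form of the idea, self-consistency: sample many independent reasoning traces and majority-vote on the final answers, so that spending more compute at inference time buys more accuracy. The `generate` function is a hypothetical stand-in for a temperature-sampled LLM call, not any real API.

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model with
    # temperature > 0 so that each sample explores a different reasoning path.
    return random.choice([
        "... so the answer is 42",
        "... so the answer is 41",
        "... so the answer is 42",
    ])

def extract_answer(trace: str) -> str:
    # Pull the final answer off the end of a reasoning trace.
    return trace.rsplit("answer is", 1)[-1].strip()

def solve(prompt: str, n_samples: int = 32) -> str:
    # n_samples is the test-time compute knob: more samples, more compute,
    # and (empirically) higher accuracy on reasoning tasks.
    answers = [extract_answer(generate(prompt)) for _ in range(n_samples)]
    best, _ = Counter(answers).most_common(1)[0]
    return best

print(solve("What is 6 * 7?"))
```

The point is just the shape of the knob: `n_samples` is pure inference-time compute, and models like o1 and o3 push the same idea much further by generating long chains of thought before committing to an answer.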
Let's see where this whole thing originated. Okay, so really fast: this is Reuters, July 15, 2024. This is where we first found out that OpenAI was working on a new reasoning technology under the code name Strawberry. Now, this thing goes by a lot of different names, so I know it's hard to keep track of it all, but you must have heard of Strawberry; before that there was Q*, and now we're seeing the o1 model and the o3 model. They're all roughly the same thing, part of the same line of AI models. Before, we just had rumors, speculation, and leaks; now we're beginning to see it emerge, and we're seeing it completely obliterate a lot of the tests in mathematics, coding, etc. that we thought were maybe impossible for AI before. Certainly its success on the ARC-AGI test was very noteworthy: it got 87.5% when it was allowed to think at length. That wasn't the official score, because it took more compute than is allowed under the rules of the ARC-AGI test, but the point is, this was what used to be rumors. I remember posting about the Q* leak and people calling me crazy in the comments, asking how I could believe this nonsense. But it's here, right? A year and one month later, it's true. We're messing around with the o1 model and we can see how good it is at reasoning, this thing we only heard rumors about last year.

Now, I've said this many times on this channel, and it's one of the reasons we look at so many of the scientific papers that get published: very often you see a paper come out, and then 6 to 12 to 18 months down the road you see startups and commercially driven companies explode with products and applications built on it. A lot of this progress is driven by the researchers, the academics, the universities. And what's interesting is that a lot of what we're seeing now can be attributed, at least in part, to a Stanford paper released in 2022. Obviously a lot went into it, and this paper is just one of many contributions, but it was kind of the first clue of what was about to happen a few years down the road. Also keep in mind Google announcing the Transformer in 2017: the fact that a lot of these people publish their work is what allows everyone else to build on top of it. OpenAI didn't invent all of these things; they were just incredibly good at seeing where everything was going, at reading the published papers, and at combining all that into actual, tangible progress in the AI field.

All right, so here's that paper, from May 2022, early on: STaR, the Self-Taught Reasoner, "Bootstrapping Reasoning with Reasoning." Kind of a weird way of putting it, right? Bootstrapping reasoning with reasoning. Notice that Noah Goodman is one of the authors on this paper; it's Stanford, Google Research, etc. We've covered it in detail before, but the big point was this. One of the creators, Professor Noah Goodman, put it this way (I'm not sure if the quote is from 2022 or from when the whole Q* story was coming out in 2023).
He's saying that STaR enables AI models to bootstrap themselves into higher intelligence levels by iteratively creating their own training data, meaning synthetic data. These models generate their own reasoning, their own training material, and then use that synthetic data to improve themselves. Whereas AI 1.0, so to speak, gets trained on human data, and gets pretty good on human training data alone, the pattern we've been seeing is that it gets a whole lot better when it's allowed to train itself. We saw that with AlphaGo, DeepMind's Go-playing AI (and later AlphaZero for chess). When it exclusively learned from human-played games, it got pretty good, about as good as the best humans. But when it was allowed to play millions of games against itself and keep improving over time, it became far better than any human could be. That's the idea of self-play: creating synthetic data, creating your own training data. There's a toy sketch of that idea just below.
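Here's that toy sketch: a tiny tabular self-play learner for tic-tac-toe, just to make the loop concrete. No human games are involved anywhere; the agent plays itself, and every finished game nudges the value estimates of the positions it visited toward the observed outcome. This is a deliberately minimal illustration of the self-play idea, not AlphaGo's actual method (which uses deep networks and tree search).

```python
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # board string -> estimated value, from X's perspective

def choose(board, player, eps=0.1):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < eps:           # occasional exploration
        return random.choice(moves)
    sign = 1.0 if player == "X" else -1.0
    # Greedy: pick the move whose resulting position looks best for this player.
    return max(moves, key=lambda m: sign * values["".join(board[:m] + [player] + board[m+1:])])

def self_play_game(alpha=0.2):
    board, player, history = [" "] * 9, "X", []
    while True:
        board[choose(board, player)] = player
        history.append("".join(board))
        w = winner(board)
        if w or " " not in board:
            outcome = {"X": 1.0, "O": -1.0}.get(w, 0.0)
            for state in history:       # credit every position visited this game
                values[state] += alpha * (outcome - values[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(20_000):  # millions of games in AlphaGo's case; 20k here
    self_play_game()
```

After enough games, the value table steers play toward wins and away from blunders purely from data the agent generated itself, which is exactly the property Goodman is pointing at for language models.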
And Goodman continues: in theory, this could be used to get language models to transcend human-level intelligence. Just to give you an idea of how the whole thing worked, I think it's summarized pretty well in this little picture. Say you have a very simple question, "What can be used to carry a small dog?", with a few answer choices. The model has to generate reasoning for how it arrives at the answer, for example: "The answer must be something that can be used to carry a small dog. Baskets are designed to hold things. Therefore, the answer is (b) basket." So this is the answer generated by the model together with its reasoning; as you can see here, it produces a rationale and an answer. If the reasoning leads to the correct answer, that rationale-and-answer pair gets fed into the dataset the model is fine-tuned on. So version 1.0 comes up with these rationales and answers, it does that a bunch of times, all of that data is fed back in as training data to create version 2.0, and then 2.0 repeats the process.
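To make that loop concrete, here's a schematic sketch of the STaR procedure as the paper describes it: generate rationales, keep only the ones that reach the correct answer, fine-tune on them, and repeat. The names `model.generate` and `finetune` are hypothetical stand-ins, not the paper's actual code; the `hint` branch corresponds to the paper's "rationalization" step, where the model is shown the right answer and asked to work backward to a rationale.

```python
# Schematic sketch of the STaR bootstrapping loop (hypothetical interfaces).

def star_iteration(model, dataset, finetune):
    kept = []
    for question, correct_answer in dataset:
        rationale, answer = model.generate(question)
        if answer == correct_answer:
            # Keep only rationales that led to the right answer: the model's
            # own correct reasoning becomes its next round of training data.
            kept.append((question, rationale, correct_answer))
        else:
            # Rationalization: give the correct answer as a hint and ask the
            # model to produce reasoning that reaches it.
            rationale, answer = model.generate(question, hint=correct_answer)
            if answer == correct_answer:
                kept.append((question, rationale, correct_answer))
    # Fine-tune on the kept examples: version 1.0's own reasoning becomes
    # version 2.0's training data.
    return finetune(kept)

def star(model, dataset, finetune, n_rounds=5):
    for _ in range(n_rounds):  # 1.0 -> 2.0 -> 3.0 -> ...
        model = star_iteration(model, dataset, finetune)
    return model
```

That filter-on-correctness step is the whole trick: only reasoning that actually worked gets reinforced, so the synthetic data pushes the model toward better reasoning rather than amplifying its mistakes.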