So I'm currently out in Boston for HubSpot INBOUND, and, well, I'm kind of in a tiny hotel room with no desk, so I'm improvising and using what I've got, because I want to share all the latest news with you in the world of AI. Right now we're sort of in the midst of conference season. This week alone we had HubSpot INBOUND, we had Salesforce's Dreamforce event, Amazon had an event where they announced some new features, YouTube held an event where they showed off some new features, and Snapchat held an event where they showed off some new features. Next week we've got Meta Connect, which I'm going to be flying out to right after HubSpot INBOUND. Tons of conferences are going on, which means tons of announcements are happening. But I'm going to go ahead and jump in and start with an announcement that didn't come from one of these conferences: let's start by talking about OpenAI. This week, OpenAI put out this blog post here, "An update on our safety and security practices." Now, this blog post talks about some of the new security measures they're implementing, as well as being more transparent, but probably the biggest news to come out of this specific article on the OpenAI blog is this portion right here about establishing independent governance for safety and security. When it breaks down the board here, we can see the board is going to be chaired by Zico Kolter and include Adam D'Angelo, Paul Nakasone, and Nicole Seligman. One name that is absent from this board is Sam Altman; he is apparently stepping down from this Safety and Security Committee and completely putting it in the hands of others. Now, the biggest news that's been going on in the AI world recently was last week's announcement of OpenAI's o1 model. It's a model that is much more effective at logic and reasoning, complex math problems, and sort of STEM subjects. Well, this week OpenAI announced that they've increased the rate limits for Plus and Team users by seven times. Now, admittedly, I have not
been home almost all month this month, so I haven't really had the opportunity to deep dive and play with OpenAI o1 myself. I'm super excited to get back to my house after Meta Connect next week and really spend some time deep diving on this o1 model, that way I can share with you some videos and tutorials and different ways that I'm using it, and my own findings from playing around with it. But honestly, I just have not had the opportunity yet. However, I have been seeing some really, really cool stuff that people have been doing with OpenAI o1. For instance, my internet buddy here, Ammaar Reshi, actually managed to make a 3D version of the snake game in under a minute. So check this out: when I pull this up to full screen here, he managed to make this 3D version of Snake that we're seeing right here on the screen in under a minute, and it actually looks really, really good. I mean, with coding tasks my main go-to has been Claude Sonnet 3.5 lately; I thought that's done a really good job with coding, but I don't know, man, OpenAI o1 seems to possibly have it beat, so I can't wait to play around with it some more. And since OpenAI o1 has come out, pretty much all the tools that integrated the previous versions of GPT-4 have started integrating this new model. For instance, if you're a GitHub Copilot user, as of this week you can now use OpenAI's o1 model directly inside of GitHub Copilot. If you're a fan of Perplexity like I am, Aravind here announced that they've added a new reasoning focus on Perplexity for Pro users. So if you're one of the paying Pro members of Perplexity, you can now use the new OpenAI o1-mini model directly inside of Perplexity. Now, it doesn't search the web yet and integrate that with your responses, and you only have 10 uses a day, as we can see right here, but it's exciting to see that Perplexity, a tool that I really enjoy using, is adding all of the latest state-of-the-art models into their platform as well. Now, there's been some weird stuff
happening at OpenAI around this o1 model as well. For instance, people are claiming that OpenAI is threatening to ban them if they try to essentially jailbreak this new model. According to this article here, someone claimed that they received a warning email after using the term "reasoning trace" in conversation with o1; others have claimed that the warning is triggered simply by asking ChatGPT about the model's reasoning at all. Here's a screenshot of an email that one person received: "Hello, we are reaching out to you as a user of OpenAI's ChatGPT because some of the requests associated with your email have been flagged by our systems to be in violation of our policy against attempting to circumvent safeguards or safety mitigations in our services. Please halt this activity and ensure you are using ChatGPT in accordance with our Terms of Use and Usage Policies. Additional violations may result in loss of access to GPT-4o," etc. Now, if you have used this o1 model at all, you'll notice that when you ask it a question, it sort of shows you the reasoning and logic it's thinking through as it's thinking through it. However, it's showing you a summarized version of that logic; we're not actually seeing the complete chain of thought it's going through, just little snippets of it. People are trying to reverse engineer that and figure out what this model is actually sort of telling itself and going back and forth on. OpenAI is still trying to keep that hidden; I'm assuming it's because they're worried that other companies might sort of reverse engineer it, and apparently if you do try to reverse engineer it, you could potentially get banned from using OpenAI products. Here's another really weird one. I've only seen one claim of this, so I'm not sure if this is fake news or not; it totally well could be. But one person on Reddit claims that ChatGPT actually initiated a conversation with them before they ever sent a message. We can
actually see in this screenshot that they shared here that ChatGPT sent the first message: "How was your first week at high school? Did you settle in well?" This person, Milo, responded, "Did you just message me first?" ChatGPT says, "Yes, I did. I just wanted to check in and see how things went with your first week of high school. If you'd rather initiate the conversation yourself, just let me know." They asked if it's a new update, and ChatGPT replied, "Yes, it's part of an update that allows me to check in or follow up on things we've talked about, like your first day of school. I can still wait for you to start the conversation if you prefer." Again, I've only seen this one claim of this happening. I feel like if this was happening a lot, we'd see it all over X, and we'd see other YouTube videos talking about it; this is the only one I've seen, so I don't totally know how legitimate it is. I also don't understand when this would happen. Would it happen when you open up the ChatGPT window and start a new chat, and then the new chat just starts messaging you? I don't really quite understand how this would happen in the first place, which is sort of making my alarm bells trigger that this could be fake. Moving on to Google: there's been a handful of little Google AI updates this week, most of them actually on the YouTube side, but they did announce that they're going to start flagging AI-generated images later this year, inside of Search, Google Lens, and the Circle to Search feature on Android. If an image has metadata that states that the image was generated with AI, it's going to let people know as soon as that image pops up. Now, it has to have that metadata; Google can't actually tell what's AI and what's not unless this metadata is actually sort of baked into the image. Moving on to some of the YouTube announcements: there was a little YouTube event this week where they announced a bunch of new features rolling out. I'm going to focus on the AI features,
including one that's coming out of Google DeepMind. Google's rolling out some new video generation features directly inside of YouTube Shorts; they're going to be integrating Google's Veo model directly into Dream Screen. You can see here they have a little create button at the top of the screen inside of the app. When you click on create, it asks you to describe what you want, and it gives you some presets like vintage, watercolor, digital, etc. In this example they type "cinematic underwater reveal of the Golden Gate Bridge" and then click on create, and what it does is generate four different still images that it will use as the sort of starting frame of your video. So you can see it generated these four images here; you select one that you like, and then it does its thing and converts it into a video version of that image you just generated. Here's another example of a "dreamlike secret garden with vivid colors," etc., and here's a whole bunch of different examples of what it generated from that. But that's not the only AI feature they're rolling out for YouTubers. They're also adding an inspiration feature, which can help curate suggestions that you can mold into fully fledged projects. So they can type "brainstorm video ideas, new types of art" and click get ideas, and then it lets them explore video ideas, helps them create the outline, and then starts generating thumbnail ideas for that video. So it really looks like it's going to help across the board with ideation, outlining, thumbnail ideas, the entire gamut of the sort of creative process of ideation for YouTube. They also announced that they're bringing auto-dubbing to YouTube, which is really cool, because I can record a video like this in English and it will automatically dub it and localize it to whoever's watching my video, in whatever location they're watching it in. That can open up so many more views for YouTubers, because I record in English, but I could
potentially have a huge audience in India or Japan or Germany or one of these places, and if you don't speak English, you're probably not going to watch my videos. But if this dubbing feature is in play, it will automatically dub any of my videos into whatever language. That's really cool. Now, they announced a whole bunch of other features for YouTubers as well, but those are the main AI features. There's a new hype button, they're changing the way YouTube videos look on TVs, things like that, but for this channel I'm kind of focused on the AI stuff for right now. Moving on to large language model news: the company Alibaba, out of China, just released more than 100 new open-source models. These come from the Qwen 2.5 family of models, ranging from half a billion parameters all the way up to 72 billion parameters. They aim to cater to a wide array of AI applications across various sectors, including automotive, gaming, and scientific research. They also unveiled a new text-to-video model as part of their Tongyi Wanxiang image generation family; I'm sure I butchered that. Now, I haven't actually seen this text-to-video model yet, but of the large language models they put out, the 72-billion-parameter Qwen model is now apparently the best open-source model in the world, according to Bindu Reddy here. She posted this screenshot with various benchmarks, and we can see Llama 3.1 405B over here, Mistral Large 2, Qwen2 72B, and this new Qwen 2.5 72B, and in most of these benchmarks it's outperforming the rest of these open-source models. All right, I want to move on to AI video, because there's been a lot that came out in the world of AI video this week as well, starting with one that actually came last week, but I sort of missed it in last week's video. I typically record these on Thursdays and release them on Fridays; last week I recorded on Friday but had to release it on Saturday due to my schedule, and this came out on Friday after I recorded that video. And that's the fact that Runway dropped a new video-to-video model. Now, Gen-1 was already sort of a video-to-video model, but this one is so much improved over that model. I found this Instagram post here from AI Search that shows off some examples of what it's capable of, the top being the original video they uploaded, the bottom being the one that uses the prompt "running on Mars while wearing a space suit," and you can see the original video is translated into this AI version. You just upload a real video of yourself, give it a prompt, and it can swap it into this AI version. Here's a Porsche in a warehouse, and we can see here all of the different ways that this Porsche is now transformed into a new video; in fact, that initial video may even have been an AI-generated video, I'm not sure. Here's somebody walking on a treadmill and all of the different ways that's translated, including one with big ears, into these various AI videos. All sorts of cool opportunities and things you can do. Again, we were doing this with Gen-1 a while ago, but this is so much better; it looks so much cleaner. Speaking of Runway, they did have some other news this week, including the fact that they made a deal with Lionsgate in the first team-up between an AI provider and a major movie studio. Now, if you're not familiar with Lionsgate, they're the company behind some really famous movies, like John Wick and The
Hunger Games, and the TV network Starz. So they have a very large database of high-quality films and TV shows, and Runway is using what they can get from Lionsgate to build a custom AI video production and editing model. It says here that to create the custom model, Runway will train on Lionsgate's library of more than 20,000 film and TV titles. Now, this is fascinating, especially considering some of the things that have recently changed in California law around AI actors and AI video editing; I'm going to get to that a little bit later in this video. It's just really interesting, the timing of this announcement with some of the recent bills that were just signed in California late last week. Runway also announced that they're opening up their API so developers can have direct access. So we're probably going to see a whole bunch of AI video tools coming out in the near future that just leverage Runway's API on the back end, tools that probably try to claim to be a new video model but are just leveraging the Runway API. If you're not familiar with APIs, an API is basically a way for other software to access the abilities of something like Runway, so other companies can go and build their own tools to generate videos and use Runway's technology on the back end. Now, the Runway API is still in early access; you have to apply for access if you're a developer, and I guess they're slowly rolling it out to people. On the exact same day that Runway announced their API, Luma Labs' Dream Machine also announced they're making their API publicly available so that companies can build with their video generator. One big difference about the Luma AI API is that, well, you can start right now. So these AI video wars are heating up; these companies are obviously competing with each other and trying to constantly one-up each other. Runway says, "Hey, we've got an API now, get on the waiting list." Luma says, "Hey, we've got an API, go use it now." Again, I think all of this is just going to lead to a
whole bunch of new video apps popping up. Running Future Tools, I see so many apps submitted; I'd say only about 10% of the apps that are submitted actually make the website, because so many of them are just, you know, GPT-4 with somebody's own user interface over it, or Stable Diffusion with somebody's own user interface over it. They're the same things that already exist, but somebody's just putting a new website in front of it and saying, "Hey, this is my new tool," and they submit it to Future Tools, and I'm looking at it going, this is just something that already exists. These APIs are really valuable if they're built into something that's more of a workflow where multiple steps happen, but they're not super valuable if you're just going to create a super thin wrapper around the API, essentially building a front-end user interface where what the tool does is exactly what Runway or Luma does already. That's not valuable, but I think we're going to see an explosion of that pretty soon. The Chinese AI video model Kling also got some updates this week. They put out an announcement video here on X: they released Kling 1.5, and probably the coolest feature of this new model is the new motion brush, which I'll show you in just a sec. But they've also improved the image quality, so it looks a little bit more realistic; improved dynamic quality, with more rational motion for things like eating and birds flying; and improved prompt relevance, so it can now handle more complex prompts. But like I mentioned, the motion brush is probably the coolest feature they've just added. So for example, they've got this image here of a moon, and we can see they highlight the moon by selecting it, and then it says draw a path, and they draw an arrow, and it generates a video of the moon moving in the direction where they just drew that arrow. We've got this image of a cat here; they highlight the cat and then draw a path that kind of goes up and then back down again, and then the video shows the cat leaping over the bowl. Here are some more examples I came across on this X account. Here's another one of a cat where they highlight the cat; you can see that as they move the mouse around, it highlights areas to help you select the right thing. So they select the cat, they make sure they grab the tail and everything, and then they draw a little line showing the cat leaping onto the table, and guess what, we get a video of a cat leaping onto a table. This isn't real time, because I'm sort of scrolling, but if I press play, that's the speed the video goes through right there. Here's another image of a soldier sitting in mud; they highlight just the soldier, draw a little arrow to show what the soldier needs to do, and this is the video we get out of it: the soldier getting up and walking away. Now, the soldier morphs a little, but it's still pretty dang impressive. And there's a whole bunch of other examples here on this X thread, like this car going left or right at the fork, and another image of a soldier jumping, but I'll link this up in the description below so
you can check out all of these various examples. This is really exciting; I'm hoping to see Luma's Dream Machine and Runway ML bake some of these features in as well. Also this week, Amazon had a conference, the Amazon Accelerate conference I believe, and they announced a whole bunch of new stuff, including their own video generator. However, this video generator is just for creating product ads for your Amazon products. So if we take a peek here in the back end of Amazon, under this new campaign feature they have the option to create a video ad. You select video ad, and then it asks you which product you want to make the video around. It then gives you a preview of four different potential video options that you can hover over, and it makes a nice little product video showing off your product. You can then edit the video with headlines and change the text on the screen. The only thing is, you're kind of using video on Amazon to stand out; if everybody has access to this feature, does anybody really stand out? Are all these videos going to sort of look the same? I don't know, we'll see how that plays out. Amazon also debuted Project Amelia, which is an AI assistant for sellers, and we can see some screenshots here where they ask questions like "what are the top things I need to do to prepare for the holiday season," and it gives them suggestions based on their products and their store. Here's another screenshot where it is actually pulling in data from their Amazon store, so it says "how's my business doing," and then it shows some key metrics right inside of the chat for that user. That seems like it'll be pretty handy if you're an Amazon seller and you want to make your life a little bit easier with AI. Snap, the company behind Snapchat, held their annual Snap Partner Summit this week, and they showed off a bunch of new AI stuff and cool new tech, starting with a new, well, you guessed it, AI video generation tool. It says here the tool
will allow select creators to generate AI videos from text prompts, and soon from image prompts. The tool will be available in beta on the web starting today (this is from September 17th) for a small subset of creators. I guess during the keynote they didn't share too many details, just that it's coming and it's rolling out in beta right now. They also announced that Snapchat is going to be getting a Google Lens-like feature; we can see here in this screenshot that somebody took a picture of a flower and then asked My AI "what type of flower is this," and they get the response "that's a heliconia." Snapchat also showed off their new augmented reality glasses. Here's what they look like. Apparently they have a large language model built in, a heads-up display inside of the glasses, auto-dimming lenses, and hand tracking similar to the Apple Vision Pro, where you can navigate what you're seeing in your glasses with your fingers. They sound pretty dang cool. The biggest problem with them is, well, look at them. We can see in this closer-up image that it looks like the batteries and all the processing are sort of behind the ear here, but it sounds like it'll be a fairly decent blend between what you get out of the Meta Ray-Ban glasses and what you get out of something like the Xreal glasses, where you can watch videos and things like that directly in the glasses. Now, apparently this is just a beta version; they're not really ready for full rollout yet, and they only have a 45-minute battery life at the moment. But ideally, if they can improve that battery life and, well, make them look a little bit better, these could be a hit. While we're talking about glasses, Meta just extended their deal with Ray-Ban for smart glasses through 2030, so it sounds like we're going to continue to get new versions and new models of the Meta Ray-Bans for at least six more years. And like I mentioned, next week is Meta Connect; I'll be at that event. Rumor
has it they're going to show off a new augmented reality pair of sunglasses at that event. They're not expected to be released until 2026 or 2027, but we might get a sneak peek of what else Meta is working on in the world of AR and sort of smart glasses. I found this to be pretty interesting as well: the late, great James Earl Jones, the voice of Darth Vader, actually gave permission to Lucasfilm to continue to use his voice in future Star Wars films. So we'll still get to hear James Earl Jones's AI-generated Darth Vader voice in future Star Wars movies. Now, this article goes on to say that this has raised concerns among actors after last year's strike. Well, California has been doing a lot about that. In fact, this week Gavin Newsom signed eight new AI-related laws. He signed two laws that criminalize deepfake nudes: one making it illegal to create them, and one that forces social media companies to establish channels for users to report them. Another bill, SB 942, requires that there be watermarks inside the metadata of AI-generated images. AB 2655 requires online platforms like Facebook and X to remove or label AI deepfakes related to elections, as well as create channels for people to report them. AB 2839 targets social media users who post or repost AI deepfakes that could deceive voters about upcoming elections. AB 2355 now requires AI-generated political advertisements to disclose that they were AI-generated. And, coming back to what I was talking about with James Earl Jones, AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness, and AB 1836 prohibits studios from creating digital replicas of deceased performers without consent from their estates. Which is interesting, because in the Star Wars film Rogue One, they actually created an AI-generated version of one of the deceased actors, supposedly without permission; if they had made that video today, it would have been illegal to do so.
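Both Google's new image flagging and SB 942's watermark requirement hinge on the same mechanism: provenance metadata embedded in the image file itself. Here's a minimal sketch of what checking for such a marker might look like. The field name and values follow the IPTC DigitalSourceType vocabulary that content-credential systems like C2PA commonly use, but treat this function as an illustrative stand-in, not Google's or anyone's actual implementation:

```python
# Hedged sketch: the metadata dict stands in for parsed image credentials
# (e.g. C2PA / IPTC fields); a real pipeline would parse these from the file.
AI_SOURCE_TYPES = {
    # IPTC value used for fully AI-generated media (assumed here)
    "trainedAlgorithmicMedia",
    # IPTC value for AI-assisted composites (assumed here)
    "compositeWithTrainedAlgorithmicMedia",
}

def should_flag_as_ai(metadata: dict) -> bool:
    """Return True only when the file declares an AI digital source type.

    Note the limitation mentioned in the video: if the metadata was never
    embedded, or was stripped out, this check simply returns False.
    """
    return metadata.get("DigitalSourceType") in AI_SOURCE_TYPES

print(should_flag_as_ai({"DigitalSourceType": "trainedAlgorithmicMedia"}))  # True
print(should_flag_as_ai({}))  # False: no metadata, no flag
```

The design point worth noticing is that this approach is opt-in by construction: an image with absent or stripped metadata sails through unflagged, which is exactly the "it has to have that metadata" caveat from earlier.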
There's one major bill left that I've talked about in multiple videos (we've done a whole podcast episode about it), and that's the SB 1047 AI bill, which basically holds the model creators responsible for major catastrophic harm that might come from the model, even if they weren't involved in that catastrophic harm. As of the recording of this video, Gavin Newsom has two weeks to decide whether he's going to sign that bill into law or veto it, and it sounds like he's still kind of torn. But he did speak at an event, and he almost implied that he might veto it. He said, "There's one bill that is sort of outsized in terms of public discourse and consciousness. It's the SB 1047. What are the demonstrable risks in AI and what are the hypothetical risks? I can't solve for everything. What can we solve for? And so that's the approach we're taking across the spectrum on this." He also said in that same conversation, "We've been working over the last couple years to come up with some rational regulation that supports risk-taking but not recklessness. That's challenging now in this space, particularly with SB 1047, because of the sort of outsized impact that legislation could have, and the chilling effect, particularly in the open-source community." So he does sound concerned about the impact it will have on open source, possibly meaning that it'll get vetoed. We still have to wait and see; we'll know within the next two weeks for sure. Moving on to the event that I'm at right now, HubSpot INBOUND: HubSpot just launched their new Breeze platform with a ton of AI agents and features to help you sort of manage your CRM for you; the AI can basically run a lot of things for you. This Breeze platform includes four Breeze agents to get work done fast, from start to finish: a content agent, social media agent, prospecting agent, and customer agent, plus 80 more features embedded across the platform. I actually saw the keynote where they announced Breeze here at HubSpot, and I had
the amazing opportunity to spend a little bit of time with the co-founder and CTO of HubSpot, Dharmesh, where he actually showed me one-on-one some of the really cool features they're working on. So I'm actually really excited. This video is not sponsored by HubSpot; I just was really, really excited to sit down with him and see behind the scenes some of these cool new AI features that these guys are rolling out over at HubSpot. The chip startup Groq just made a deal with one of the largest companies on the planet, Aramco. They want to build the world's largest AI inferencing center, with 19,000 language processing units; Aramco is going to fund the development, and it's expected to cost in the order of nine figures. It blows me away, though, because it says the data center will be up and running by the end of this year, and can later expand to include a total of 200,000 language processing units. Now, this is in direct competition with Nvidia, although this is more like cloud compute. I don't think you can just go and buy a Groq chip and run your own AI at home; you go to the Groq website or use the Groq API and run the AI inference through their systems, and it's insanely fast, while Nvidia is more about selling the actual hardware itself to the companies doing the AI. Both are trying to accomplish the same things for the end users, just with two different approaches; it'll be interesting to see which approach wins out in the long run. If you're a user of Slack, Slack AI will now generate transcripts and notes from huddles, so when you jump on Slack meetings, at the end of the meeting you can get key takeaways and summaries and things like that. There's been some frustration around LinkedIn: apparently LinkedIn has been training on people's data, and they don't make it super easy for you to opt out. It says here that if you're on LinkedIn, then you should know that the social network has, without asking, opted accounts into training
generative AI models. LinkedIn introduced a new privacy setting and opt-out form before rolling out an updated privacy policy saying that data from the platform is being used to train AI models. If you're a fan of generating music with Suno, they just rolled out a new feature where you can exclude styles, so you can exclude specific instruments, specific styles, or even specific vocal styles, such as male or female vocals. If you have an Apple Vision Pro, they just rolled out visionOS 2. It's got a new feature where you can take a 2D image and have it sort of turned into a 3D image inside of your Apple Vision Pro, there are some new hand gestures, you can now rearrange icons on your home menu, and there's a handful of little quality-of-life updates. So for anybody that bought the Apple Vision Pro, it might be worth, you know, pulling it off your shelf or out of that drawer and playing around with it for another week before putting it back in the drawer. And finally, in the last bit of Apple news: last week, or a couple weeks ago, whenever that was, Apple announced iOS 18, which didn't have Apple Intelligence, but said that 18.1 was coming soon with some Apple Intelligence. Well, this week 18.1 did come out with some of the Apple Intelligence features. Now, you do have to have an iPhone 15 Pro or a better model to use these. It says here that users must manually enable the feature by going to Settings > Apple Intelligence & Siri and joining the Apple Intelligence waitlist. I don't know how quickly you get access once you're on the waitlist, but apparently 18.1 is rolling out now and people are starting to get access to it. And that's what I've got for you today. I know it was a ton; we're in conference season, and there's a lot happening right now. I know this is a bit different of a video. I am literally in like the world's smallest hotel room; I don't have a desk, I don't have anywhere to set my computer down, and I've literally been holding my computer in my hand this entire video, sitting on my bed in my hotel room. I got this headset because I was sick of holding my mic up like this when recording these videos in hotel rooms. So this one is totally different than normal; hopefully you got the same amount of value out of it. I'm just trying to keep myself up with the latest AI news, and if I'm keeping myself up with it, I might as well flip on the camera and help you keep up with it as well. So I hope I achieved that for you. If you like videos like this, make sure you like this one with the little thumbs-up button and subscribe to this channel; that will really, really help me out, and it will ensure that more videos like this show up in your YouTube feed. I make a lot of AI news videos, a lot of AI tools videos, a lot of tutorials, things like that, to try to keep you tapped into the AI world and also show you how to use all this stuff. Now, typically in these news videos I actually like to demo a lot of the tools and show them off and use them myself; that's really, really hard when I don't have a desk and I'm working on a laptop from my bed in my hotel room. But I promise, once I get back home after Meta
Connect in a couple weeks, it's going to be back to regularly scheduled programming, with a lot more tutorials and tool demos and what you're used to from my channel. I'm just about done with conference season; I'm having a lot of fun doing it, but I'm having a harder time making as many videos as I'd like to. There's a lot more coming, and I can't wait to test out more of these tools and show off what I learn and find out playing with them all. So again, I would really, really appreciate that like and subscribe, and if you want to know about all the latest and coolest AI tools and the latest AI news, check out futuretools.io.
That's the site where I curate all of it, the tools and the news. I even have a newsletter that's totally free, where I'll update you in your inbox with just the most important news and just the coolest tools. Totally free, all available over at futuretools.io.