All right, we have a lot of news stories to go over today, so let's get right into it. We're starting with Apple Intelligence: iOS 18.1 was just released to the general public and now includes Apple Intelligence, basically the thing Apple sold the iPhone 16 on, promoting Apple Intelligence everywhere. It arrives a month-plus later, and let's just get it out of the way: it is underwhelming. I've been playing around with it in beta for a couple of weeks now, and I really haven't been able to find much value in it. There are a couple of cool features, which I'll go over in a second, but it is far from what Apple Intelligence was promised to be. Look at the Apple Intelligence homepage on apple.com and some of the features they show, what Apple promises: a small model on your phone with context on everything you have on your phone (contacts, emails, text messages, apps, pretty much everything), and it doesn't have any of that. Look at this right here: it's supposedly searching across multiple apps, and it doesn't really do that. Here it's showing custom emoji and custom images, basically text-to-image, and I don't even have access to that yet. So what does it have? It has the writing features we're probably all pretty familiar with at this point, basically rewriting and prompt-to-write, the stuff we get in basically every other app we use. One of the most useful features I've found is notification rollups: you get a ton of notifications, you'd normally have to scroll through all of them, and most of them probably aren't all that meaningful, but now it just gives you a very brief summary of all the notifications within a stack. I actually found that very, very useful. The only other feature I've tried that has been somewhat useful is describing an album in Apple Photos and having it create the album for you. But here's the thing: Apple Photos is not that good. I tried to switch from Google Photos to Apple Photos, and it just
is missing core functionality that I need before I can switch over to it. Siri isn't even all that much better. You ask it a question and it can definitely answer some things, but then it offloads to ChatGPT, and that's fine, but I also have access to ChatGPT in about 30 other ways. What I'm really looking for is OpenAI's Advanced Voice Mode built natively into the phone as Siri. Why can't I just have that? It seems so easy and so obvious. So definitely try it out; it's worth it if you already have one of the newer phones (you need an iPhone 15 Pro or an iPhone 16 to use these features). I'm glad Siri is getting an update, but this just isn't where it needs to be for me to be really impressed yet. So let me know what you think: have you played around with it? Drop some comments and let me know if you're impressed. Thanks to the sponsor of this video, Langtrace, the open-source, OpenTelemetry-based evaluation platform that helps you improve your LLM-powered applications. Langtrace helps developers collect and analyze traces, create datasets, and run evaluations to help you understand the performance and accuracy of your applications. It offers end-to-end observability and now has native support for tracing DSPy framework sessions. It provides a custom dashboard with detailed views so you can trace DSPy workflows from chain of thought to evaluation, which lets developers trace tasks, tools, and memory with precision. Plus, with prompt-engineering and debugging modes, developers can switch between optimizing prompts and resolving issues, streamlining LLM performance. So use Langtrace to track your AI-powered applications from end to end. Check out and star Langtrace's GitHub page for the latest updates and join the community of innovators, and you can start using Langtrace today with a 20% discount if you use the link in the description below. Lastly, join one of their future webinars to see how Langtrace can take your LLM apps from development to deployment. Thank you again to Langtrace for sponsoring
this video. Now, back to the video. Next, GitHub had their annual event and released a ton of updates that really show the direction it's going. First, developers now have a choice in GitHub Copilot. If you don't remember, just a couple of years ago GitHub Copilot was really the first time we saw an AI coding assistant, and it completely blew my mind: you would just start typing a piece of code and it could finish it for you simply by hitting tab. Obviously we've come a long way since then, but now GitHub is offering some more flexibility. First, GitHub, if you don't know, is owned by Microsoft, so keep that in mind while I tell you about all of these updates. GitHub is now allowing you to choose the model that powers the completions. As you can see here, they're offering Claude 3.
5 Sonnet, Gemini 1.5 Pro, and even o1-preview, so they're diversifying away from OpenAI while still offering the cutting-edge models from OpenAI. In my mind, this continues Microsoft's strategy of reducing its dependence on OpenAI, and I think that's a really smart move: as a company, you don't want to rely on a single partner for basically all of your core features for the foreseeable future. So now you can figure out which model is best for whatever use case you have. Here it says Claude 3.5 Sonnet excels at coding tasks across the entire software development lifecycle, and Google's Gemini 1.
5 Pro shows strong capabilities in coding scenarios. It has a built-in 2-million-token context window, which is enough to essentially fit the entire code base of many, if not most, projects out there. Then of course we have the o1 models, which are best at complicated coding tasks. GitHub is also releasing a preview of their prompt-to-code product called Spark. They describe it as an AI-native tool to build applications entirely in natural language; Sparks are fully functional micro apps that can integrate AI features and external data sources without requiring any management of cloud resources. So they're definitely going after the Cursors of the world with this launch, and as really the first product to integrate AI so deeply and natively into itself, I'm glad to see GitHub trying to keep up with the crazy pace of AI coding assistants. Next, The Information reports that Meta is developing its own search engine, and I think it's a great call. Google has had absolute search dominance for the last two decades, and Meta, with Facebook, hasn't been able to break into it after multiple attempts. But now is kind of a unique time: Google is under threat from companies like Perplexity, and Meta has the Llama models. They already have hundreds of millions of monthly active users on Meta AI, and all they really have to do now is insert real-time search into it. Obviously it's a lot more complicated than that, but from a strategic point of view it is that simple: just allow it to crawl the web. And Meta has been crawling the web for a long time; they've reportedly been increasing those efforts recently, and we already know they've let you drop pixels for their advertising business all over the internet for years. So it's a really interesting time to look at Google. Google is under severe threat: they are not only being threatened with a breakup, but their core money-making
machine, basically the greatest business of all time, Google Search, is under serious threat. AI is changing everything, and Google doesn't seem to be evolving their search fast enough for me. I use Perplexity and ChatGPT about 95% of the time where I would have used Google Search before. I now only really use Google Search if I know exactly where I want to go and just don't know how to get there; for example, if I'm looking for a specific type of image, or a website where I just forgot the URL, that's when I use Google Search. For anything else, I just want the answer; I don't want ten blue links. So I cannot wait to see what Meta does in the search realm. It's also reported that Meta recently struck a deal with Reuters to deliver real-time news and updates through its Meta AI platform. I am incredibly bullish on Meta AI in general, but there's one thing Meta is really missing, which is the hardware. They have the Ray-Ban Meta glasses, and that's great, but I made a video a few weeks ago detailing why I don't think glasses are the final form factor of artificial intelligence, even though Mark Zuckerberg does. I just don't think that's it. We will see, but you know what, the more competition in search, the better. Next, Grok finally gets vision. Grok can now understand images, and funnily enough, I thought it already did: we saw a preview of this long ago from Elon Musk, really just showing screenshots of what's possible in a blog post, but now it apparently really does have vision. One of the vision tests I added to my rubric, because I saw it in the Grok preview, is this meme, with startups on the left and big companies on the right. I loaded it into Grok, asked it to explain the meme, and for the first time it can actually do it. Here's the example: it got it right, it's great. So that's a cool update, and I cannot wait for Grok 3, because that seems to be coming very soon. Next, Perplexity is being sued by The Wall
Street Journal and the New York Post, and this comes after nearly three dozen lawsuits have already been filed against generative AI companies. This is a sticky situation, because these generative AI companies are starting to become the front page of the internet, whereas Google was previously the front page of the internet. Google would scrape a website, show the results in its search results, and then send the user off to whichever content publisher it was. Now, Perplexity and other AI search tools simply regurgitate the information that was in those articles, and the need to actually click through to the original content is basically gone. These content companies are saying that the AI search companies are ingesting their work and presenting it without permission, and while that might be the case, this business model is not going to go away: AI search is here to stay. In its response to the lawsuits, Perplexity even said it is proud to have launched a first-of-its-kind revenue-sharing program with leading publishers like Time, Fortune, Der Spiegel, and others. And it's interesting; it feels very familiar. Over the years, every time a new technology has come out, content creators have taken one of two positions: highly adoptive of the new technology, or trying to sue it out of existence. We saw that with MP3s, we saw it with Google Search, we saw it with a bunch of different iterations of technology. Every time I talk about copyright with AI, I think you guys have a pretty different idea than I do of what it should be. Now that I'm a content creator, the only thing I really want is the option to decide whether I want AI to ingest my content or not. That's all I want: just the option. What do you think? Let me know in the comments. Next, in a massive update that I think went slightly under the radar, Claude can now write and execute code. This is basically like ChatGPT's Advanced Data Analysis feature, and the ability for AI to write code
and then execute it allows it to be much more accurate. I'll give you a perfect example: count the letters in the word "strawberry." Obviously a lot of these models have now learned to do it, but if they had simply been able to write Python code that takes a string, counts the occurrences of the letter R, and outputs the result, they would have gotten it right a long time ago. So the ability for these models to write code and then execute it will let them accomplish many use cases that were previously impossible with the transformer architecture alone. Next, back to Perplexity: the native macOS desktop app is now out. I installed it immediately, I've been using it, it's awesome, and I highly recommend you get it. I absolutely love Perplexity; they do not pay me whatsoever, this is just something I use day in and day out. As I already mentioned, ChatGPT and Perplexity are the two pieces of software I use all the time for anything I want to know. Next, Meta has released quantized versions of their already-small Llama models. So what does that actually mean? Quantized models are basically compressed versions of a model: they're smaller, and with that reduced size you can hopefully run them on many more types of machines. The trade-off is a reduction in quality, but a lot of quantization techniques now really don't lose all that much. They released quantized versions of Llama 3.
2 1B and 3B that deliver two to four times faster inference, an average 56% reduction in model size, and a 41% reduction in memory footprint. These are on-device models, models meant to run at the edge. They're open source, highly efficient, and I love it. As I've said many times, smaller models can likely accomplish the vast majority of use cases for the vast majority of people. They perform very, very well, and they're only getting better, smaller, and more efficient. This is what I love to see. I want to be able to run my own models on my devices without having to hit the cloud, and there are many reasons I want that: privacy, security, low latency, and just plain ownership. I want the model in my hands, so I'm really glad to see this. I haven't tested them myself yet, but if you have, let me know how they perform for you. Next, Cerebras, who makes custom chips to run AI, has improved their inference speed significantly. Let's take a look: Cerebras Inference is now three times faster, and Llama 3.
1 70B just broke 2,100 tokens per second. Crazy fast: 16 times faster than the fastest GPU solution, and eight times faster than GPUs running Llama 3B. I've just been super impressed by everything Cerebras has done, and just a few weeks ago they filed to go public, so if you want to own a piece of it, you might be able to pretty soon. Next, in absolutely fantastic news, especially for the US: TSMC, who makes computer chips, has seen its Arizona facility outpace production in Taiwan, which is just insane to think about. Production yields in Arizona are now four percentage points higher than in Taiwan, and that plant only began production earlier this year. I love the investment in the US, and I love being able to build our chips here, because that de-risks us from depending on other countries for what is inevitably going to be probably the most important resource of the future. Next, Waymo, the company offering autonomous vehicles, in service right now in multiple cities, doing 100,000 rides per week, and probably the furthest along in terms of actual rides, has raised a ton of new funding: today they announced they've closed an oversubscribed investment round of $5.
6 billion, led by Alphabet (Alphabet, being Google, is where Waymo was created), with continued participation from Andreessen Horowitz, Fidelity, Perry Creek, Silver Lake, Tiger Global, and T. Rowe Price. Right now they're in service in San Francisco, Los Angeles, and Phoenix, and they're partnering with Uber; they're just making a lot of great moves. Now, here's the thing with Waymo: the vehicles are very expensive to make. They use lidar, radar, and a suite of other sensors, whereas Tesla took the exact opposite approach and uses just cameras. Logically, the camera-only approach kind of makes more sense to me: we only have eyes, we interpret the world around us, and we can drive cars, so why can't we get neural networks to do the same just by using cameras? I think we can. Obviously, using radar is probably going to be much better in the short term, but in the long term I think vision-only AI is going to win. Of course, there's one downside: if the camera's view is blocked in any way, say there's moisture on the lens, rain, fog, sand, or dirt, you basically can't use them anymore, so they really need to solve for that as well. But congratulations to Waymo on the huge raise; I have not actually used one, and I cannot wait to do so. Next, Kim, a.k.a. Chubby, has noticed that there's a new, unknown model that beats all the other image models by a significant amount, according to Artificial Analysis. It's called Red Panda, and according to its arena Elo, it is beating all the other models by a wide margin. By the way, Chubby has been writing incredible original articles for the Forward Future newsletter for weeks now, and if you haven't read them, they are interesting, they dive deep, and they're technical, so please check out Forward Future.
So, I haven't tested the Red Panda model myself, but I plan to, and I can't wait to use it. Next, apparently Google is going to release agents directly in the Chrome browser to do your browsing for you. This is called Jarvis, and it can do everything from research to booking a flight to purchasing products; it's consumer-facing and meant to automate everyday tasks. We just saw Anthropic release a way for Claude to control your computer, and now Google is doing the same, but within your browser. According to this article, given a command, Jarvis works by taking frequent screenshots of what's on the computer screen and interpreting them before taking actions like clicking a button or typing into a text field. It operates relatively slowly, because the model needs to think for a few seconds before taking each action. This is exactly how previous open-source computer-control systems like Open Interpreter have worked, and it's how Claude's computer use works. As I've said before, it doesn't work all that well: taking screenshots and trying to figure out coordinates is just really difficult for AI. But of course I want to see it in action, and it wouldn't take all that much for Chrome, since Google owns not only the browser but also the model, to essentially open up an API that tells the model exactly where each pixel is and lets the AI control the browser much more effectively. And last, Stable Diffusion 3.5 Medium is now out. This is an open-source text-to-image model, and it is extremely good; take a look at this. Now, this new update is specifically for 3.