Hi everyone, I'm Patrick from AssemblyAI, and welcome back to the final lesson of our Python for AI development course. So far you have learned how to set up a development environment with Python, how to prepare data and build your own models, and how to leverage model hubs, and now we look at APIs for AI development. Using APIs is for sure the simplest way to access state-of-the-art models; you simply need a few API calls. So in this video we look at three cool APIs: we look at OpenAI to access large language models, we look at AssemblyAI to work with audio data, so for speech recognition and understanding your audio, and lastly we look at Replicate, which is also pretty cool and lets you do a lot of different things; there we look at an image generation example. So let's get started.

First, let's look at the OpenAI API and how we can access large language models with it. For this, go to platform.openai.com and sign up, because we need an API key, and then we look at a chat example with ChatGPT and also some text completion, so text generation, examples. For the chat example, let's have a look at the docs and copy the example code; this is pretty much everything that we need. Now let's go to your IDE. In the first lesson I showed you how to set up VS Code, so create a new project, or use the project from the first lesson, and in here create a new file, call it openai_demo.py, and paste in the code. We import openai, which is a package we have to install, and then we call openai.ChatCompletion.create and pass in the model name and the messages we want to use for the chat. This returns a response, so we store it in a variable and for now simply print it. Now we can run this code, but first we have to install the package. In the first lesson I also showed you how to create and activate environments with conda; here I said conda create -n, gave it the name ai-demo, and specified the Python version, so do this if you haven't already. Then we can activate the environment by saying conda activate ai-demo, and now we install the package, but we don't say conda install here, we say pip install, so: pip install openai. Now we can use the openai package and say python openai_demo.py, but if we run this, it will fail; here it says authentication error, because we haven't set the API key yet. To do this, go to your account, then Manage account or View API keys; here you find your keys and can generate a new one. Make sure to save it somewhere, because you cannot view it again later. I already did this and stored my key, so I will copy and paste it in now, and of course I will delete this key again after recording. There are different ways to set the key; one way is to say openai.api_key = and then paste in the key as a string. Now this should work, and if we run the script again, we see a result and no longer get the error. As you can see, the response is a dictionary; in here we have the choices field, which is a list, and inside it we have the message and then the content, and this is the response we get back from our chat model. To access this field, we have to access the inner fields of the response, and we can also find this in the documentation: if we scroll down and have a look at the response format, it shows us exactly this part. So let's copy and paste that: we print the response's choices, then the first choice, then the message, then the content. If I comment out the full print and run it again, we should only see the response message. So yeah, this is working, and this is how we can work with the chat completion endpoint.
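Putting the chat example together, a minimal sketch of openai_demo.py could look like the following. This uses the pre-1.0 openai interface shown in the video; the model name, messages, and sample response values are illustrative, and the real call needs your own key, so it sits in a function that is not executed here.

```python
def build_messages(user_text, system_text="You are a helpful assistant."):
    """Build the messages list that the chat completion endpoint expects."""
    return [
        {"role": "system", "content": system_text},
        {"role": "user", "content": user_text},
    ]

def extract_reply(response):
    """Drill into the nested response: choices -> first choice -> message -> content."""
    return response["choices"][0]["message"]["content"]

def chat(user_text, api_key):
    """Send one chat request (not executed here; needs a valid API key)."""
    import openai  # pip install openai
    openai.api_key = api_key
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(user_text),
    )
    return extract_reply(response)

# The dictionary printed in the video is shaped roughly like this (abbreviated):
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help you?"},
            "finish_reason": "stop",
        }
    ]
}
```

Calling extract_reply(sample_response) performs exactly the response["choices"][0]["message"]["content"] access from the docs.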
Now I want to show you a second endpoint. If we have a look at the documentation again, on the left side you will find all the different guides you can use, for example text completion, code completion, and chat completion, which is what we just did; we can even do image generation, fine-tuning, embeddings, and some more. So let's click on text completion; with this we can do all kinds of stuff. What's really cool here is that we can easily use the playground: for example, we can click on "Open this example in Playground", and there we can of course submit the prompt and inspect the result, but we can also click on "View code" and get the code snippet. So let's copy and paste this and see what it looks like. Here again we import openai, which we already did, and this time they store the key as an environment variable; this is how you would access it from the environment, because in a live application you of course don't want to hard-code the key like this. But since we already set the key, we can get rid of that line. The rest of the code looks very similar: we call openai, now with the Completion endpoint, then again create, then again we select the model, here we give it the prompt, and we can play around with different parameters. If we print the response, we should again see a dictionary. So let me actually remove the first example so we only run this one, and run the file again with python openai_demo.py.
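A sketch of the completion variant, to contrast with the chat endpoint above: here the generated text lives under choices[0]["text"], not under a nested message. The model name and parameter values below are illustrative (taken from the playground era the video shows), and the live call is again wrapped in an unexecuted function.

```python
# A completion response nests the result under "text",
# not under "message" like the chat endpoint does.
sample_completion = {
    "choices": [
        {"text": "\n\nA sweet escape for the senses.", "index": 0}
    ]
}

def first_text(response):
    """Pull the generated text out of a completion response."""
    return response["choices"][0]["text"].strip()

def complete(prompt, api_key):
    """Call the legacy text completion endpoint (not executed here; needs a key).
    Model name and parameters are example values from the playground."""
    import openai  # pip install openai
    openai.api_key = api_key
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.6,
        max_tokens=64,
    )
    return first_text(response)
```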
Then again we see the result, a dictionary with choices, and in the first choice we get the text: it created "A sweet escape for the senses", and the prompt was "Write a tagline for an ice cream shop", so this worked as well. And yeah, this is pretty much all you need to get started with the OpenAI API. By the way, for the models you can again have a look at the documentation and click on Models; here you find all the different ones with some description, the names you use to access them, and more information about the parameters you can use. So I encourage you to play around with this API. Now let's look at the next one.

Now let's learn about AssemblyAI and how we can use it to transcribe and understand speech. Here we can submit audio and video files and then transcribe them. To get started, go to assemblyai.com, and by the way, to play around with this for free you can select the playground; there you can paste in any YouTube link you want, or you can upload local files. I'm uploading a local file that I prepared, and later I'll show you how to do the same in code. Then click on Next, and here you can select all the different AI models they have; for example, you can select summarization, topic detection, auto chapters, content moderation, and a few more. For our example, let's select transcription and summarization and then click on Next. Now it will take a few moments; here it says it takes roughly 15 to 30 percent of the file's duration, so let's wait until this is done and we get the result. Here we see the transcript, and on the right side we also see the summarization, a very short summary of the whole transcript. We can also play the audio file if we want and check whether it is correct.

Now let's learn how we can do this in code. To work with the API, go to assemblyai.com and sign up; you can do this for free, or simply use the link in the description. After signing up you should see your dashboard, and here you need to copy your API key, so click on copy. Now let's go back to our IDE, create a new file, and call it assemblyai_demo.py. In here, let's create a variable api_key and paste in our key as a string. Then let's go to the documentation: click on developer documentation, and here you find walkthroughs for all the different steps. The first thing we want to do is upload a local file for transcription, so let's click on that; here you find a walkthrough with a pretty nice explanation, and we can simply copy the code from the example. We can also switch the language of the snippets, and of course I want to use Python, so let's copy this, paste it in here, and go over it together. This time we import requests, which is a module for sending HTTP requests; so here we don't use an SDK like the openai package, we use a more general approach by sending the GET and POST requests ourselves. The first thing we do is specify the file name; I prepared audio.mp3, so let's change it to audio.mp3. Then we have a helper function that loads the file in chunks, and then we define the headers, which contain the API key; for this, let's move the api_key variable above the headers and simply use it there. Then we send a POST request to the upload endpoint, this one here, together with the headers and the data, and we print response.json(), because we get a JSON response back. In order to run this we first need to say pip install requests, since this is also a third-party module, and then we can run the file with python assemblyai_demo.py.
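The upload step we just copied can be sketched roughly like this. The chunked reader mirrors the helper from the walkthrough; chunk_bytes is a small pure helper I added so the chunking logic is easy to verify, and the live upload call (which needs your key and pip install requests) is not executed here.

```python
UPLOAD_ENDPOINT = "https://api.assemblyai.com/v2/upload"

def chunk_bytes(data, chunk_size=5_242_880):
    """Split raw bytes into fixed-size chunks (the last one may be shorter)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def read_file(filename, chunk_size=5_242_880):
    """Yield a file in chunks so large audio files are streamed, not loaded at once."""
    with open(filename, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield data

def upload(filename, api_key):
    """POST the file to the upload endpoint and return the upload URL
    (not executed here; needs a valid API key)."""
    import requests  # pip install requests
    response = requests.post(
        UPLOAD_ENDPOINT,
        headers={"authorization": api_key},
        data=read_file(filename),
    )
    return response.json()["upload_url"]
```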
This worked, and you can see we get an upload URL back. Now we have to submit this URL to the transcription endpoint, so let's go back to the documentation, select "Submitting files for transcription", and again copy and paste the code snippet. This time we define an endpoint, which ends in /transcript, then we define the JSON payload, which has to contain the audio_url field, and here we can paste in the URL we just got back, so I'm copying and pasting it in here. Of course, if you want to do this in one step, you can also say url = response.json()["upload_url"]; if you have a look, the field is called upload_url, and this then becomes the audio URL. But since I already submitted the file, I can comment the upload part out and only send the next request. So here again we define the transcript endpoint, then the audio URL, we already have the headers, and now we send a POST request, this time with the JSON data and the headers, and then again we print the response. Let's save the file and run Python with our script again, and here we get a long response back, but the only field we need for now is the id field. Again, we could access it by saying transcript_id = response.json()["id"], but since we already did the request, we can simply copy the id, insert it here, and comment this part out as well. Now the last step is to get the result: so far we only submitted the file for transcription, but now we need to check whether the result is available. So we can click on "Getting the transcription result" and again copy the code, and let's go over it. This time again we define an endpoint, which ends in /transcript/ followed by the ID, so this is where we want to put the transcript ID.
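The submit step above can be sketched as follows. The payload builder accepts extra keyword arguments so the additional AI model flags covered later (such as summarization or sentiment analysis) can be switched on; the network call itself is an unexecuted sketch that needs your real key.

```python
TRANSCRIPT_ENDPOINT = "https://api.assemblyai.com/v2/transcript"

def build_transcript_payload(audio_url, **extra_models):
    """Build the JSON body for the transcript endpoint.
    Extra keyword args (e.g. summarization=True, sentiment_analysis=True)
    switch on additional AI models."""
    payload = {"audio_url": audio_url}
    payload.update(extra_models)
    return payload

def submit_for_transcription(audio_url, api_key, **extra_models):
    """POST the payload and return the transcript id (not executed here)."""
    import requests  # pip install requests
    response = requests.post(
        TRANSCRIPT_ENDPOINT,
        json=build_transcript_payload(audio_url, **extra_models),
        headers={"authorization": api_key},
    )
    return response.json()["id"]
```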
For this we can use curly braces around the transcript_id variable and put an f before the string; this is called an f-string, and it basically combines the string with the variable. Then again we have the headers, which we already defined, and now we send a GET request to this new endpoint with only the headers. The response we get back has a status field, which tells us the state of the transcription, and it can be either queued, processing, completed, or error. So let's check whether the status is completed: we assign status = response.json()["status"], and then we say if status == "completed", and in that case we print the text field of the response; if we have a look back at the long response, you will also find the text field there. Let's save and run this, and see if it works. By now the transcript was already completed, so we only see the transcript: "AssemblyAI is a deep learning company that…" and so on, so this is working as well. Of course, if the transcription is not yet completed, what you can do is use a while True loop and try this over and over again. For this we can import the time package; this is built in, so we don't have to install it. Then down here we create a while True loop in which we send the GET request again and again and check the status (let's indent this correctly): if the status is completed, we break out of the while True loop so it ends. We can also add an elif: if the status equals error, we also break, and here we can for example print the error. Otherwise, if it is either queued or processing, we wait a few moments, so here we say for example time.sleep(10) to wait ten seconds.
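The polling loop we just built can be sketched like this. I've factored the GET request out into an injected fetch_status function (an assumption on my part, not the video's exact code) so the loop itself can be exercised without network access; in the real script, fetch_status would send the GET request to the f-string endpoint with your headers.

```python
import time

def poll_until_done(fetch_status, interval=10, sleep=time.sleep):
    """Loop until the transcript is finished.

    fetch_status is a function returning the parsed JSON from the
    GET /transcript/{id} request; status moves through "queued" /
    "processing" until it is "completed" or "error"."""
    while True:
        response = fetch_status()
        status = response["status"]
        if status == "completed":
            return response
        elif status == "error":
            raise RuntimeError(response.get("error", "transcription failed"))
        sleep(interval)  # wait a few seconds before trying again
```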
It will do this again and again until the status is either completed or an error, and this is all we have to do to transcribe audio files. By the way, if you want to try out other models, like the summarization for example, you also find them in the docs; you can click on Summarization, and it's super simple to activate them. The only thing you have to do is in the second step, when we trigger the transcription at the transcript endpoint: in the payload that we send over together with the audio URL, we also have to send some additional fields. For this one we send summarization = true, and then we can also define the summary model or the summary type. And if we try sentiment analysis, for example, the only thing we have to send over is sentiment_analysis = true. So let's send this along, and this is everything you have to do to turn on the other AI models as well; super simple. And yeah, this is how to work with AssemblyAI.

Now the last API I want to show you is Replicate, which you find at replicate.com. With Replicate you can run machine learning models in the cloud at scale, super easily, and users can also upload their own models, which are then deployed for you, so you only have to call them via the API; it makes things super simple and only takes a few lines of code. To get started, again go to replicate.com.
Sign up with your GitHub account, and then you get an API token. Then you can click on Explore, and here you find all the different models that are available; there are also different categories. For example, we want to try out a diffusion model, so let's take the popular Stable Diffusion model. Here you find some information, you can also try it out right on the page, and you can easily click on API to get the instructions for running it from Python. For this we need to pip install replicate, so again, go to our project, make sure to activate your environment, and say pip install replicate. Now let's create a new file and call it replicate_demo.py, and then let's continue. The next thing they recommend is to export the Replicate API token; you could do it like this, saying export REPLICATE_API_TOKEN=… in your terminal, but I want to show you a different way to safely store tokens in Python and then load them. Another way to do this is to create a .env file, and in here you create your token, or your environment variable: this one is called REPLICATE_API_TOKEN, then you say equals, and now let's get the token from our dashboard, where you can find and copy it. Now we can go back to replicate_demo.py, and first let's copy the example code; if we go back to the model page again, this is all the code that we need. Note that we don't need to set the token in the code like we did for OpenAI; we only need to have it as an environment variable, and in order to load it we use another package that is called python-dotenv. So we say pip install python-dotenv, and now we can say from dotenv import load_dotenv.
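To make concrete what load_dotenv does, here is a minimal hand-rolled stand-in (my own simplification; the real python-dotenv additionally handles quoting, comments, and export prefixes), plus an unexecuted sketch of the Replicate call. The model identifier in generate_image is a placeholder; copy the exact versioned identifier from the model's API page.

```python
import os

def parse_env_line(line):
    """Parse one KEY=VALUE line from a .env file; return None for blanks/comments."""
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

def load_env_file(path=".env"):
    """Minimal stand-in for dotenv's load_dotenv: put KEY=VALUE pairs
    from the file into os.environ."""
    with open(path) as f:
        for raw in f:
            parsed = parse_env_line(raw)
            if parsed:
                os.environ.setdefault(*parsed)

def generate_image(prompt):
    """Run Stable Diffusion on Replicate (not executed here; needs
    pip install replicate and REPLICATE_API_TOKEN in the environment)."""
    import replicate
    return replicate.run("stability-ai/stable-diffusion", input={"prompt": prompt})
```

In replicate_demo.py you would call load_env_file() (or the real load_dotenv()) once at the top, so the token is in the environment before the replicate package is used.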