A single spark may seem insignificant, but when it finds the right material, a flame is born capable of illuminating everything. Just three years ago, ComfyUI emerged as an open-source system for controlling every step of AI-powered visual content creation. However, it had two drawbacks: it was difficult to install and hard to use effectively. Now, with a recent update, not only have these issues been resolved, but it's also capable of achieving results other tools can't match. To demonstrate this, we'll be testing it across six modules with 14 use cases. With it, you can generate images, edit existing ones, upscale images to higher quality, create advanced videos without limitations, generate audio, create 3D models, and even build complete systems for content creation. Best of all, since this platform is open source, we can use it for free, without limits or restrictions. Sounds good? Let's explore this tool. ComfyUI is a platform launched on GitHub by a user named comfyanonymous in 2023. It allows users to create images, videos, audio, and 3D models with AI using visual, node-based workflows. It was so well received that it has raised over $17 million to date, and its backers are working to simplify the platform and make it more accessible. With features like apps and others, we can now easily create any type of content. To find out how, let's jump to ComfyUI, the platform I'll link to below in the description. Once on the site, as they explain, it's the most powerful open-source generative AI tool. From here, as you can see, we have two options. First, we can download it for local installation; we'll cover this later. Second, they've recently launched their own cloud service so we can use all these models without installing anything.
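Under the hood, every one of those node-based workflows is just a JSON graph, and a locally running ComfyUI instance can even be driven programmatically over its local HTTP API. Here's a minimal sketch, assuming the default local port 8188; the node ID and inputs shown are hypothetical placeholders, since the real IDs and class types come from whatever workflow you export from the UI:

```python
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "demo") -> bytes:
    """Wrap a workflow graph in the JSON envelope the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "127.0.0.1:8188") -> dict:
    """Queue the workflow on a locally running ComfyUI server and return its response."""
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response includes a prompt id you can later look up in /history
        return json.load(resp)

# Hypothetical fragment of an exported graph: node "6" is a text-prompt node.
workflow = {
    "6": {
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "an old library with floating books and magical light"},
    }
}
payload = build_payload(workflow)
```

Nothing here is specific to one model family: you export the graph once from the UI (there's an API-format export in its developer options), then swap the inputs and re-queue it as often as you like.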
Keep in mind that both methods are free to start with, but as you'd expect, the cloud mode has limits, since we're actually using their servers. However, you can also install it locally if you prefer, which lets you use it without any limits or restrictions. So, going back to the platform, if this is your first time using it, it's best to try the cloud version first. Looking at the pricing plans, you'll see that the free plan offers quite a lot: it lets you create around 35 videos per month and gives you free use of an RTX 6000 Pro graphics card with 96 GB of VRAM. This card, if you look it up, normally costs several thousand euros. So, going back to the beginning, I'm going to start with the cloud method. Once we're here, we only need to take two steps: first, click this button, and second, log in with Google or GitHub. I did it with Google, and seconds later, we already had access to the entire tool. Now that we have it, let's start with the first use case, where we'll see how we can generate images using artificial intelligence. Back in the application, keep in mind that as soon as you land, you'll see the new templates feature. If you close it, you can return simply by clicking the templates icon, and from here, we'll begin with the AI image generation section. When you do this, you'll find a bunch of AI models covering the entire image section. Not only can you generate images from text, but we also have tools to edit existing images, change the style of an image we upload, and more. So, let's start from the beginning by testing this first model from the Z-Image family. To use it, I simply click on the function I want, and when I do, these nodes load, and here I can add any prompt. Now, if I change it to, for example, an old library with floating books and magical light, a fantasy image but in a realistic and detailed style, and then, keeping everything else exactly the same, click "run," it tells me right here that it has queued the task and will start processing it. Notice that the speed is quite fast, and in fact, without any cuts, we already have the generated image here, which, as you can see, is quite good and matches what I requested. Now, on this image, I can click the download option to save it to my device. Also, from the preview of each image you create in this tool, you'll find this icon, which, if you click it, takes you to a small interface where you can also edit. Let's now see how we can edit images using artificial intelligence on this platform. And so, going
back to where we left off, if we click "close," notice that if we click in the upper left and then on the "new" section, we enter another empty workflow. We could configure this manually, although since it's a bit more technical, let me know in the comments if you'd like a more in-depth tutorial on these advanced functions. To keep things simple, let's go back to the templates section. Notice that back in the images section, we also have different templates for editing images. Among others, we can find some like Nano Banana 2, but keep in mind that while many templates here can be installed and used locally without limits, those that run via API have their own credits, since they have a cost on those platforms. So, one thing I recommend if you want to optimize costs, especially when installing this tool as we'll see later, is to go to where it says "runs on" and select "ComfyUI," which is the equivalent of not using API keys. With this, we'll find many other models, but now none of them will be using API keys. Let's test it, for example, with this Qwen Image Edit model. Click here, and we'll see a flow that looks a bit complicated, but it's really as simple as modifying these two nodes, which are the inputs. This one is the image node, where we have a cat as an example, and below, we have the prompt section for the edit we want it to perform. In this case, we'll replace the cat with a Dalmatian. If we now click "run" again, the task activates here, and as you can see, it executes very quickly. In fact, before I finish the sentence, we already have the Dalmatian perfectly rendered, replacing the original cat image. And from here, if we wanted to modify the examples, we could do so. To do this, I click the X to delete the image, select the folder icon to choose one from my device, and pick the last image we made in the previous case. I load it here, change the prompt to, for example, add a frog dressed as a librarian to the image, and if, with these changes, I click "run," it starts processing again, and in about 10 seconds, we have this result, which it nailed on the first try, even taking into account details like backlighting and shadows. Let's now look at other more specific models, such as creating multiple angles of an image. To do
this, I recommend clicking on all templates and typing a keyword for the type of model you're looking for. For example, I'm going to type the word "multi," since I'm looking for a model that helps me generate multiple images of the same character. To see how this works, we'll click on it, and you'll see we have a bunch of outputs: specifically, eight images it will create based on the photograph we feed it. Here again we have an example, although in this case, I'm going to click on the folder and select the pose I was doing for the thumbnail of this video. Once it's loaded, I simply repeat the process and click "run." While this is running, notice the different prompts that appear here, which describe the images it will generate: one close up, another from above, from the sides, and so on. In fact, as I'm telling you this, I can already see it generating some images, like one from the front, which wasn't the original angle, and another from a bit further away. Here we can see one from the side, another taken vertically, another from the other side, a perspective from the center and slightly to one side, from below, and directly from above, like this image here. With this, we're no longer talking about using templates just to generate images or make simple edits, but about having specialized templates to create any type of content, and even to automate different results at scale. Similar to these content generation workflows, we could also focus on increasing sales. Imagine that every message on Instagram, WhatsApp, TikTok, or
appointment booking for your business could be answered automatically, even while you're sleeping. Well, this is possible, and that's why I want to introduce you to Chatfuel, a platform that will respond to your customers 24/7 thanks to its artificial intelligence tools. If someone asks about one of your services on Instagram or contacts you via WhatsApp and you don't respond quickly, you might lose that potential sale. This is where Chatfuel can help you, and doing it is now easier than ever. They've just launched Corker AI, an assistant that will build whatever you need for you. Look, if we jump into the platform, we now have a chat available on the left side so we can talk to it. From here, we can ask it questions, like what it's capable of doing. We simply send the command, and it responds at the speed you see here. But the best part is that, besides answering questions, it can also configure entire automation workflows. To put it to the test, I could tell it something like: hey, I'd like you to tell clients that we've just launched two courses, one on Make automation, and that we've also just opened the fifth edition of the intensive artificial intelligence course. I stop the recording, it transcribes it immediately, and the artificial intelligence does all this configuration for us automatically. In fact, it gives us access to modify the knowledge base for this purpose. So, I'm going to accept it, and from here I can confirm that it's done all this setup for me. Thanks to Chatfuel for sponsoring this video; they've given me a special link, which I'll leave in the description below. If you use Alej's code, you can even get a
free month, and in just 2 minutes, you'll have a whole system up and running for your business. Now that we know this, let's move on to the next use case. We'll finish the image examples by seeing how we can even improve the quality of images we already have. To do this, in the templates section, we could search for "upscale," but if you'd rather search in your own language, keep in mind that in the settings section you can select it, in this case Spanish. Then, not only will we have the entire platform in our language, but we'll also be able to search by keywords, which is faster. For example, here I just typed "improve" and found different models that improve quality, not only for images but also for videos, through other artificial intelligence models. To test this and close out the image section, I'm going to select this Z-Image model again, and we'll see a new interface that ComfyUI just released called "apps," which simplifies everything so we can provide our input and get the result without having to see so many nodes and technical settings, since in many cases that isn't even necessary. For this example, we're going to work on this image, which, as you can see, while not bad, has a lot of room for improvement in quality. So, I upload the image, click "run," and it starts processing. About 12 seconds later, we have this result, and if we zoom in a bit, notice how much detail we can now see, not only in the plane but also in the fire. If we compare it to the original image, we can really
see quite a difference. And with this, we're not just talking about creating new images or editing them; we can actually recover existing ones or improve their quality. Starting a new section with the fifth use case, we'll see how we can also convert text into AI-generated videos. Going back to ComfyUI, all we have to do is click on the video section, and from here we'll find a bunch of templates. Starting with the text-to-video part, I also want to mention this filter here, which I find very interesting, since we can filter by the most popular templates, which are probably the ones that work best, but also by the most recent ones that have just been released, such as LTX 2.3, which came out this week. To use it, all we have to do is click on the model and select an image. In this case, we'll leave it as is, with this screenshot of Captu driving this car. We can also upload audio that we want to appear in the video, or even record it from here, and in the prompt section, we can specify what we want to happen in the video. I'm going to leave the example as it is. So, with this configuration, I click "run" again, and while this is working, keep in mind that from here we're only creating one piece of content; if we change this number to, for example, four, we could generate four videos at once. And that's it. After exactly 71 seconds (you can click on the "active" section to see how long each step took), we have our generated video. Since I can also download it from here, I'll leave it here
for you to see better. And obviously, we can also make small variations to all these nodes, especially at the logic level. Similarly, if we wanted to create videos without using specific audio, notice how, in the templates section, even though the one we just used takes both an image and audio, we could also select a template that only needs an image, or others that start from just text. So, moving on to the next use case, if I select this image-to-video option, pick any image, like this one here, and add a prompt, where I just wrote that the woman runs towards a spaceship and flies to another planet, then without changing anything else, I click "run," and after just about 30 seconds we have this video, which I've included here for you to see better. Finally, with the last use case in the video section, we'll see how we can transform videos, letting us change their style or even upscale them to higher quality. To do this, I've gone back to the templates section, and I'm going to choose this first example, where we can convert a realistic-style video into one that looks like an inflatable doll. To see how this would look, I click on it, and it loads all the nodes that make this possible. Even though you see many nodes, notice that in the input section we only need to change the video we want to transform. In this case, I'll leave this one here as an example. Now all we have to do is click "run," and it will go through all these steps until we get the final result. I've already generated it, but before we see it, notice how in the earlier steps it used the Qwen Image model to convert the person into this inflatable doll, so that we then get this result here, where we have exactly the same movements as the person, only rendered as if they were an inflatable doll. And with this, the possibilities are very broad: we can now not only create videos from text, audio, or images, but also repurpose existing videos, whether by applying different styles, modifying the characters, or even increasing their quality. The results are so good that I'll
show you a second example with more aggressive movements. I'll leave it here. In the next use case, we'll see how we can generate AI-powered voices to narrate whatever we need. To do this, click on the audio section, and there you'll find different models. Keep in mind that if you were filtering by ComfyUI and you also want to see the ones that use API keys, like ElevenLabs, simply uncheck the filter and you'll see all the available models. In this case, though, I'm going to stick with the ones that can run locally and select this text-to-speech model right here. Notice that you'll always find some brief instructions on how to use each one, although it's actually very simple: we just upload an audio file of the voice we want to clone. In this case, we have a sample voice here, and we're going to leave it as is, although we could actually upload our own. We can also change the text in the prompt section; here I've only added "de alejabi." I leave the rest exactly the same, click "run," and literally 20 seconds later, we can hear this result: "Hello friends of Alex Javi, welcome back to ComfyUI. Today we're going to explore Chatterbox." Notice that we got an English-accented voice, which is why it's also important to look at the parameters that appear in the nodes when the result isn't what we expected. In this section, notice that the language is set to English, which is why it sounds like it has that accent. So, if I change the language to Spanish and run it again, we get this other audio file: "Hello friends of Alej, welcome back to ComfyUI. Today we're going to explore Chatterbox, a voice cloning workflow capable of handling multiple voices and multiple languages." With this, not only can we create a wide variety of content formats, but we also have complete control over every parameter of the final result. Let's finish with the last use case in the audio section: beyond generating voices, we can even generate songs. Notice that within the audio section, in addition to plain audio, we also have another section for music. So,
if I click on this section, specify in the prompt the style of music I want to create, in this case rock, along with the lyrics I want sung in this song, and then click "run" again, in just 15 seconds we have this 2-minute song. Let's listen to a bit: "Draw the now, watch me break free." And look, we haven't even reached 20 seconds in and we have a completely customized result; really, for a first try, it's not bad at all. Keep in mind that here again, in the settings, we could also change some aspects, such as the duration of the song we want to create, and basically any adjustment we might need. Continuing with use case number 10, we'll see how we can also create 3D models. Just like the other formats, we have a category of 3D models here. To try one via API, I'm going to switch the filter to see all the models and select, for example, this one from Hunyuan. Notice that from here we can also preview what the entire 3D model will look like. To do that, all we have to do is upload a couple of images, one of the front and one of the back. Now, if we click "run," a little later, and on the first try, we get this 3D model, which, as you can see, looks quite good, with all sorts of details. And we can not only view it from here, but also make different edits, such as changing the background color, uploading a background image, or even exporting it directly by clicking on the three lines. Similarly, with 3D models, beyond
converting images to this type of model, you can even have a 3D model split into different parts, which is especially useful at a professional level for breaking everything down automatically. And there are other features in this field too, such as quality upscaling, as well as other models. Let's continue exploring the categories of nodes, as they really show the range of things we can do. To that end, I want to highlight some filters. Within the templates section, we have a "Popular" tab where you can see which AI models are used most, whether for generating videos, editing images, adding changes to an original image, or simply using different models for specific use cases. Speaking of use cases, I want to show you this next category, which displays very specific workflows, such as the one we saw earlier for creating different angles of a person we upload; generating different personal photos from one we already have; changing the clothes of a person we upload, or of an avatar that doesn't even exist yet but can wear clothes we might be selling; or even things as specific as gesture transfer, so that an avatar, even in a different style, follows the same gestures; managing professional-level lighting changes in an existing image; and other uses you'll find in this tab. To finish with this case, note that we also have a partner nodes section, which holds third-party API tools. Keep in mind that here, processing runs on the servers of the company we select; for example, if we select Google, we'll be using Google's servers. So, if we really want to run everything locally to keep it free, it
wouldn't be very useful. However, at a professional level, we might want to have everything centralized and not mind paying for specific content. It's also worth knowing that you can access these tools from here, and in fact, most of them don't even expose the nodes: there's an apps feature that greatly simplifies the use of each of these tools. Staying with templates, let's finish with the community. ComfyUI has just launched a new tab called Comfy Hub, where we can see templates created by other users so we can reuse them. These are workflows that have already been built; here we can see both official workflows from the platform and those created by users. If we click on one, we can choose to try it in the cloud or download it locally to avoid any limitations. Returning to the cloud platform, keep in mind that I've run all these examples on the free version, and I still have over 300 of the 400 free credits remaining. But if you don't want any limitations or censorship, we need to look at the next use case. The ComfyUI platform offers a multitude of models and templates, as we've seen in the previous use cases, but we can also download models that are completely uncensored. To do this, we need to switch to Civitai, the platform we covered in the video where we created uncensored images. One of the interesting things here is that, beyond being able to create content directly on Civitai, we can also click on any LoRA model and, once inside, download it. This way, we can have it locally, drag it into the desktop version of ComfyUI, and use it without any restrictions. But to finish up, let's see how we can install all this, because the steps have also been simplified. To conclude with the last use case, let's see how to install it locally. Returning to the platform, instead of using the cloud, we'll click on "Download ComfyUI." From here, keep in mind that we have three options: the previous method, which requires some technical steps to install, or the new installers that have simplified everything, with one for Windows that requires either an Nvidia or AMD graphics card. The more powerful the graphics card, the better, because that's what AI models need.
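For reference, the older manual route that the installers replace still works and amounts to a few commands. This is a rough sketch, assuming you already have git and a recent Python available; the large model files are downloaded separately, as the templates screen shows:

```shell
# Clone the ComfyUI repository and enter it
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI

# Install the Python dependencies (a virtual environment is recommended)
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt

# Start the local server; the UI is then served on the local port it prints
python main.py
```

This is the "technical" path; the Windows and Mac installers below wrap these same steps in a graphical setup.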
And then we also have the Mac option, which is optimized for M-series processors. So now all we have to do is click on download. On Mac, we drag it to the Applications folder, and when we open it, the installation runs without us having to do anything, and we get a very similar interface. First, it tells us which templates we'd need to run the different workflows we want to use, but we can close this. Notice that if I click on the templates section, we have exactly the same options, only now, if I click on any workflow, like the first one we were using, image generation, it tells me what I need to download locally so that it runs without an internet connection or any other restrictions. In this case, I'd need to download this 7 GB file, this other 300 MB file, and finally the Z-Image template, which takes up about 11.5 GB. So, it's as easy as downloading these three files, and with that we have it set up to run locally without any limitations. ComfyUI
would charge us for running it in the cloud, but if we run it locally, we wouldn't have any restrictions. The truth is, we now have a lot of flexibility to create any kind of content. The imagination is there; we just need to turn it into reality. A single spark may seem insignificant, but when it finds the right material, a flame is born capable of illuminating everything. Just three years old, Confio developed an open-source system to control every step of AI-powered visual content creation. However , it had two drawbacks: its difficulty to install and how to
use it effectively. But now, with a recent update, not only have these issues been resolved, but it's also capable of achieving results impossible for other tools. To demonstrate this, we'll be testing it across six modules with 14 use cases. With it, you can generate images, edit existing ones, upload high-quality images, create advanced videos without limitations, generate audio, create 3D models, and even complete systems for content creation. Best of all, since this platform is open source, we can use it for free, without limits or restrictions. Sounds good? Let's explore this tool. ConfiUI is a platform launched
on GitHub by a user named ConfiUI Anonyus in 2023. It allows users to create images, videos, audio, and 3D models with AI using visual, node-based flows. It was so well-received that it has raised over $17 million to date, and its backers are working to simplify the platform's use to make it more accessible. With features like apps and others, we can now easily create any type of content. To find out how, let's jump to ConfiUI, the platform I'll link to below in the description. Once on the site, as they explain , it's the most powerful generative
guide tool and it's open source. From here, as you can see, we have two options. First, we can download it for local installation. We'll cover this later. Second, they 've recently launched their own cloud service so we can use all these models without installing anything . Keep in mind that both methods are free to use, but as expected, the cloud mode is unlimited since we're actually using their servers. However, you can also install it locally if you prefer, allowing you to use it without any limits or restrictions. So, going back to the platform, if this
is your first time using it, it's best to try the cloud version first. Looking at the pricing plans, you'll see that the free plan offers quite a lot, allowing you to create around 35 videos per month and providing free use of an RTX 6000 Pro graphics card with a 96GB GPU. This card, if you research it, would normally cost several thousand euros. So, going back to the beginning, I'm going to start using the cloud method. Once we're here, we only need to take two steps. First, click this button, and second, log in with Google or
GitHub. I just did it with Google, and seconds later, we already had access to this entire tool. Now that we have this tool, let's start with the first use case, where we'll see how we can generate images using artificial intelligence. Going back to the application, keep in mind that as soon as you land, you'll see this new templates feature. If you close this, you can return simply by clicking on this templates icon, and from here, we'll begin with the AI image generation section. When you do this, you'll find a bunch of AI models that work
with the entire image section. Not only can you generate images from text, but we also have tools to edit existing images, change the style of another assistant we upload, edit them, and more. So, let's start from the beginning, and to do that, we'll start by testing this first model from the Zii family. To use it, I would simply click on the function I want to use, and when I do, these nodes will load, from where I... Here I can add any prompt. Now, if I were to change this from here to, for example, an old
library with floating books and magical light, with a fantasy image, but in a realistic and detailed style, and then, keeping everything exactly the same, I clicked "run," it would tell me right here that it had just queued this task and would start processing it. Notice that the speed is quite fast, and in fact, without any cuts, we already have the generated image here, which, as you can see, is quite good, specifically with what I requested. And now, on this image, I can click on this download option to save it to my device. Also, from the
preview of each image you create in this tool, you'll find this icon here, which, if you click it, will take you to a small interface where you can also edit. Let's also see how we can edit images using artificial intelligence from this platform. And so, going back to where we left off, if we click here on " close," notice that if we click here in the upper left and then click on the "new" section, we'll enter another empty flow. We could configure this manually , although since it's a bit more technical, let me know in
the comments if you'd like to see a tutorial with a bit more depth on these advanced functions. But to keep things simple, let's go back to the templates section. And notice that if we go back to the images section, we'll also have different templates for editing images. Here, among others, we can find some like Nano Banana 2, but keep in mind that you can use so many templates here that you can install them locally. You can use those locally without limits, but those that are via API will actually have their own credits, since they have
a cost on those platforms. So, one of the ways I recommend if you want to optimize cars, especially when installing this tool we'll be looking at later, is to go directly from here where it says "runs on" and select "conf UI," which is the equivalent of not using API keys. With this, we'll find many other models, but now they won't be using any API keys. Let's test it, for example, using this Quen Image Edit model. Click here, and we'll see a flow that seems a bit complicated, but it's really as simple as just modifying these
two nodes, which are the inputs. This one is the image node, where we have a cat as an example, and below, we have the prompt section for which edit we want it to perform. In this case, we'll replace the cat with a Dalmatian. If we now click on "RAN" again, that task will be activated here, and as you can see, it's executed very quickly. In fact, before I finish the sentence, we'll already have the Dalmatian perfectly rendered in image format, having replaced the original cat image. And from here, if we wanted to modify the examples,
we could do so. To do this, I'm going to click on the X to delete the image. I'm going to select the folder to choose one from my device. And from here, I'm going to select this last image we made in the previous case. I load it here, change the prompt to, for example, add a frog dressed as a librarian to the image, and if now, with these changes, I click on RAM, it will start processing it again, and in about 10 seconds, we'll have this result here, which it did perfectly on the first try,
even taking into account details of backlighting and shadows. Let's now look at other more specific models, such as creating multiple angles on an image. To do this, keep in mind that beyond entering the image category , you can also filter using the search function. To do this, I recommend that you click on all the templates and from there type the keyword for the type of model you are looking for. I, For example, I'm going to type the word "multi," since I'm looking for a model that helps me generate multiple images of the same character. To
see how this works, we'll click on it, and from here you'll see we have a bunch of outputs: specifically, eight images that it will create based on the photograph we upload. Here again we have an example, although in this case I'm going to click on the folder and select the pose I was doing for the thumbnail of this video. Once I have it loaded, I simply repeat the process and click on "Run." And while this is running, notice that different prompts appear here, which are the images it will generate, like
one from close up, another from above, from the sides, and so on. In fact, as I'm telling you this, I can already see it generating some images here, like one from the front, which wasn't the original image, and another from a bit further away. Here we can see one from the side, another taken vertically, and here we can see another from the other side, a perspective from the center and slightly to one side, from below and directly from above, like this image we see here. With this, we're no longer talking about using templates to generate
images or make simple edits, but rather having specialized templates to create any type of content, and even to automate different results on a massive scale. Similar to these content generation workflows, we could also focus on increasing sales. Imagine that every message on Instagram, WhatsApp, TikTok, or appointment booking for your business could be answered automatically, even while you're sleeping. Well, this is possible, and that's why I want to introduce you to Chatfuel, a platform that will respond to your customers 24/7 thanks to its artificial intelligence tools. If someone asks about one of your services on Instagram
or contacts you via WhatsApp and you don't respond quickly, you might lose that potential sale. And this is where Chatfuel can help you, and doing this is now easier than ever. They've just launched Corker AI, an assistant that will build whatever you need for you. Look, if we jump into the platform, we'll now have a chat available on the left side. From here, we could ask it questions, like, for example, what it's capable of doing. We simply send the command, and it will respond as you can see here. But
the best part is that, besides answering questions, it can also configure all the automation workflows. To put it to the test, I could tell it from here: hey, look, I'd like you to tell clients that we've just launched two courses, one on Make automation, and that we've also just opened the fifth session of the intensive artificial intelligence course. I stop the recording, it transcribes it immediately, and the AI will automatically do all this configuration for us. In fact, it gives us access to modify the knowledge base for this purpose. So, I'm going to
accept it. And from here, I can confirm that it's done all this setup for me. Chatfuel is sponsoring this video, and they've given me a special link, which I'll leave in the description below. If you use Alej's code, you can even get a free month, and in just 2 minutes you'll have a whole system up and running for your business. Now that we know this, let's move on to the next use case. We're going to finish the image examples by seeing how we can even improve the quality of images we already
have. To do this, in the template section, we could search for "Upscale," but if you'd like to search directly in your language, keep in mind that in the settings section you can select your language, in this case Spanish. Then, not only will we have the entire platform in our language, but we'll also be able to search with keywords in it, which is faster. For example, here I just typed "improve," and I found different models that improve quality, not only for images but also for videos, through other AI models. To test this and finish the image section,
I'm going to select this upscaling model, and we'll see a new interface that ComfyUI just released called "Applications," which simplifies everything so we can provide our input and get the result without having to see so many nodes and technical configurations, since in many cases it isn't even necessary to see them. For this example, we're going to work on this image, which, as you can see, while not bad, has a lot of room for improvement in terms of quality. So, I'm going to upload this image, click "Run," and it will start processing.
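By the way, everything the interface does when we click "Run" can also be driven programmatically, because a locally running ComfyUI instance exposes a small HTTP API: you export a workflow in API format (a JSON dict of nodes) and POST it to the /prompt endpoint. Here's a minimal sketch, assuming a default install listening on 127.0.0.1:8188; the node IDs, class types, and file names below are illustrative placeholders, not the ones from any real template, so substitute the ones from your own exported workflow.

```python
import json
import urllib.request

# A ComfyUI workflow in "API format" is a dict of nodes: each key is a node
# ID, each value holds the node's class_type and its inputs. These two nodes
# are placeholders standing in for the image and prompt inputs we edited above.
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "cat.png"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "replace the cat with a dalmatian",
                     "clip": ["3", 1]}},
}

def set_prompt(wf: dict, node_id: str, text: str) -> dict:
    """Return a copy of the workflow with new prompt text in the given node."""
    wf = json.loads(json.dumps(wf))  # cheap deep copy via JSON round-trip
    wf[node_id]["inputs"]["text"] = text
    return wf

def queue_prompt(wf: dict, host: str = "127.0.0.1:8188") -> bytes:
    """Send the workflow to a locally running ComfyUI instance for execution."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps({"prompt": wf}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # raises if no server is running
        return resp.read()

# Swap in the prompt from the earlier example without touching the rest.
edited = set_prompt(workflow, "2", "add a frog dressed as a librarian")
print(edited["2"]["inputs"]["text"])
```

The /prompt endpoint and the {"prompt": ...} envelope are part of ComfyUI's server API; this is only useful, of course, if you're running the tool locally rather than in the cloud.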
In fact, about 12 seconds later, we'll have this result. And if we zoom in a bit, notice how we can really see a lot of detail, not only in the plane but also in the fire. If we compare it to the original image, we can really see quite a difference. And with this, we're not just talking about creating new images or editing them; we could actually restore existing ones or improve their quality. Starting a new section with the fifth use case, we'll see how we can also convert text into AI-generated videos. To do
this, going back to ComfyUI, all we have to do is click on this video section, and from there we'll find a bunch of templates. Starting with the text-to-video part, I also want to mention this filter here, which I find very interesting, since we can filter by the most popular templates, which are probably the ones that work best, but we can also see the most recent templates that have just been released, such as LTX 2.3, which came out this week. And to use it, all we have to do is click on the model
and select an image. In this case, we'll leave it as is, with this screenshot of Captu driving this car. We could also upload audio that we want to appear in the video, or even record it from here, and in the prompt section we can specify what we want to happen in the video. I'm going to leave it as it is in the example. So, with this configuration, I'll click "Run" again, and while this is working, keep in mind that from here we're only creating one piece of content. But if we change this number to, for example,
four, we could generate four videos at once. And that's it: after exactly 71 seconds (you can actually click on the "active" section to see how long each step took), we'll have our generated video here. Since I can also download it from here, I'll leave it here for you to see better. And obviously, we can also make small variations to all these nodes, especially in the workflow's logic. Similarly, if we wanted to create videos without using specific audio, notice how, from the templates section, even though the one we just used takes both images and audio,
we could also select another template that only needs images, or other templates that start from just text. So, moving on to the next use case, if I select this image-to-video option, pick any image, like this one here, and add a prompt, where I just wrote that the woman runs towards a spaceship and flies to another planet, then without changing anything else, if I click "Run," after just about 30 seconds we'd have this video here, which I've included for you to see better. Finally,
with the last use case in the video section, we'll see how we can convert existing videos, allowing us to change their style or even upscale them to higher quality. To do this, I've gone back to the templates section, and I'm going to choose this first example, where we can convert a realistic-style video into one that looks like an inflatable doll. To see how this would look, I'm going to click on it, and it will load all the nodes that make this possible. But even though you see many nodes, notice that in the input section we only need to modify
the video we want to change. In this case, I'm going to leave this one here as an example. So now all we have to do is click on "Run," and it will go through all these processes until we get the final result. I've already generated it, but before we see it, notice how in the previous steps it used the Qwen Image model to convert the person into this inflatable doll, so that we then get this result here, where we have exactly the same movements of the person, only modified as if it were an inflatable
doll. And with this, the possibilities are very broad: we can now not only create videos from text, audio, or images, but also repurpose existing videos, whether by applying different styles, modifying the characters, or even increasing their quality. The results are so good that I'll show you a second example with more aggressive movements; I'll leave it here. In the next use case, we'll see how we can generate AI-powered voices to narrate whatever we need. To do this, click on the audio section, and from there you'll find different models. Keep in mind that if you
are filtering by "runs on ComfyUI" and you also want to see the models that use API keys, like ElevenLabs, simply uncheck the filter and you'll see all the available models. However, in this case, I'm going to keep the ones that can be run locally and select this text-to-speech model right here. Notice that you'll always find some brief instructions here on how to use it, although it's actually very simple. We simply have to upload an audio file of the voice we want to clone. In this case, we have a voice like this one here, and we're
going to leave this voice as is, although we could actually upload our own. Now, we can also change the text that appears in the prompt section, and for that, I've only added "de alejabi." I'm going to leave the rest exactly the same. So now I click "Run," and literally 20 seconds later, we can hear this result: "Hello friends of Alex Javi, welcome back to ComfyUI. Today we're going to explore Chatterbox." Notice that we have an English-speaking voice here, and that's why it's also important to look at some parameters that appear in the nodes
when the result isn't what we expected. In this section, notice that the selected language is English, which is why it sounds like it has that English accent. So, if I change the language to Spanish here and run it again, we'll have this other audio file: "Hello friends of Alej, welcome back to ComfyUI. Today we're going to explore Chatterbox, a voice cloning workflow capable of handling multiple voices and multiple languages." With this, we'll not only be able to create a wide variety of content formats, but we'll also have complete control over every parameter
for the final result. Let's finish with the last use case in the audio section: from here, beyond generating voices, we can even generate songs. Notice that within the audio section, in addition to plain audio, we also have another section for music. So, if I click on this section, specify in the prompt the style of music I want to create, in this case rock, along with the lyrics I want sung in the song, and then click "Run" again, in just 15 seconds we'd have this 2-minute song. Let's listen
to a bit of it. "Draw the now, watch me break free." And look, it hasn't even been 20 seconds and we have a completely customized result; really, for a first try, it's not bad at all. Keep in mind that here again, in the settings, we could also change some aspects, such as the duration of the song we want to create, and basically any adjustment we might need. Continuing with use case number 10, we'll see how we can also create 3D models. To do this, just like with the other formats, we have a category of 3D
models here. And from here, to try one via API, I'm going to switch the filter to see all the models and select, for example, this one from Hunyuan. Notice that from here we can also preview what the entire 3D model would look like. To do that, all we have to do is upload a couple of images, one of the front and one of the back. So now, if we click "Run," a little later, and on the first try, we end up with this 3D model, which, as you can see, looks quite good
with all sorts of details. And ultimately, we can not only view it from here, but also make different edits, such as changing the background color, uploading a background image, or even exporting it directly by clicking on the three lines. Similarly, with 3D models, beyond converting images into this type of model, you can even have a 3D model split into different parts, which is especially useful at a professional level to break everything down automatically. And like this, there are other features within this field, such as increasing the quality, as well as other models. Let's continue
exploring the categories of nodes, as they really show the amount of things we can do. To that end, I want to highlight some filters. Within the template section, we have a "Popular" section where you can see which AI models are being used most, whether for generating videos, editing images, adding changes to the original image, or simply using different models for specific use cases. Speaking of use cases, I want to show you this next category, which displays very specific workflows, such as the one we saw earlier: creating different angles of
a person we upload, generating different personal photos from one we already have, changing the clothes of a person we upload or of an avatar that doesn't even exist yet but can wear clothes we might be selling, or even things as specific as making an avatar, even one in a different style, follow the same gestures we record, managing lighting changes at a professional level in an existing image, and other uses that you'll find in this tab. To finish up with this case, note that we also have a partner node section, with tools from partner companies. Keep in mind that here, processing is done on the servers of the company we select; for example, if we select Google, we'll be using Google's servers. So, if we really want to run everything locally to keep it free, this wouldn't be very useful. However, at a professional level, we might want to have everything centralized and not mind paying for specific content. Therefore, it's also important to know that you can access these tools from here, and in fact, most of them don't even use the nodes. We have an
application feature that greatly simplifies the use of each of these tools. Regarding templates, let's finish with the community. ComfyUI has just launched a new tab called Comfy Hub, where we can see templates created by other users so we can reuse them. These are workflows that have already been built; here we'll see both official workflows from the platform and those created by users. If we click on one, we can choose to try it in the cloud or download it locally to avoid any limitations. Returning to the cloud platform, keep in
mind that I've been using the free version for all these examples, and I still have over 300 of the 400 free credits remaining. But if you don't want any limitations or censorship, we need to look at the next use case. The ComfyUI platform offers a multitude of models and templates, as we've seen in previous use cases, but we can also download models that are completely uncensored. To do this, we need to switch to Civitai, the platform we discussed in the video where we created uncensored images. And one of the interesting things about this is
that, beyond being able to create content directly from there, we can also click on any LoRA model and, once inside, download it. This way, we can have it locally, drag it into the desktop version of ComfyUI, and use it without any restrictions. But to finish up, let's see how we can install all this, because the steps have also been simplified. To conclude with the last use case, let's see how we can install it locally. Returning to the platform, instead of using the cloud, we'll click on "Download ComfyUI." From here, keep in mind that we'll
have three options: the previous method, which requires some technical steps to install, or the new installers that simplify everything, with one for Windows that requires either an Nvidia or AMD graphics card (the more powerful the graphics card, the better, because that's what AI models need), and a Mac option optimized for M-series processors. So now all we have to do is click on download. In the case of Mac, we drag it to the Applications folder, and when we open it, we'll see how the installation is done without
us having to do anything, with a very similar interface. First, it will tell us which models we'd need to run the different workflows we want to use, but we can close this. Notice that if I click on the templates section, we have exactly the same options, only now, if I click on any workflow, like the first one we were using, image generation, it will tell me what I need to download locally so that it runs without needing an internet connection or any other restrictions. In this case, I would need to download this
7 GB file, this other 300 MB file, and finally the AI image model itself, which takes up about 11.5 GB. So, it would be as easy as downloading these three files, and that way we'd have everything set up to run locally without any limitations. ComfyUI would charge us for running it in the cloud, but if we run it locally, we have no restrictions. The truth is, we now have a lot of flexibility to create any kind of content. The imagination is there; we just need to turn it into reality.
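One practical note on those local downloads: each file has a conventional home inside the ComfyUI folder. Base models go in models/checkpoints, LoRAs (like the ones downloaded from Civitai) in models/loras, and standalone VAEs in models/vae, and ComfyUI picks them up on the next start. The sketch below just scaffolds that standard layout against a throwaway directory; the file names are placeholders, while the directory names are ComfyUI's usual ones.

```python
from pathlib import Path
import tempfile

# Standard ComfyUI model subdirectories; a downloaded file goes into the
# folder matching its type. The .safetensors names are made-up examples.
LAYOUT = {
    "checkpoints": ["my_image_model.safetensors"],  # base diffusion models
    "loras":       ["my_style_lora.safetensors"],   # LoRA fine-tunes
    "vae":         ["my_vae.safetensors"],          # optional separate VAEs
}

def scaffold(root: Path) -> list[Path]:
    """Create the models/ tree with empty placeholder files; return the paths."""
    created = []
    for sub, files in LAYOUT.items():
        d = root / "models" / sub
        d.mkdir(parents=True, exist_ok=True)
        for name in files:
            p = d / name
            p.touch()
            created.append(p)
    return created

# Demonstrate against a temporary directory instead of a real install.
root = Path(tempfile.mkdtemp())
for p in scaffold(root):
    print(p.relative_to(root))
```

If you keep your models somewhere else, ComfyUI also supports pointing at external folders via its extra_model_paths.yaml config, so you don't have to duplicate multi-gigabyte files per install.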