Hello, my name is Callum, also known as wonderlots. Today I'm going to show you how to use Adobe Firefly, the new generative AI program. I'm also going to explain why I think this is such an interesting product given how many different AI products are out there right now, what Adobe is doing differently, and how they plan to develop this product in the future to make it even more creator-centric.

I'm personally really excited about Adobe Firefly for a couple of reasons. The first is that, aside from the output looking incredibly detailed and impressive, they have decoupled style from the prompt: you can iteratively modify the style of the output without having to retype the prompt. There are several ways to do that. You can modify the aspect ratio, the content type, the style, the color and tone, the lighting, and the composition. That's a really powerful tool I haven't seen in any other AI generator at this time, and I think that user interface is going to make it really popular and easy to use for a lot of people. Another reason I'm excited about Adobe Firefly is that they have trained their diffusion model, their generative AI model, entirely on Adobe Stock, along with openly licensed work and public domain content where copyright has expired. To me, that's looking at the generation of AI content in a completely different way than what I've seen from other platforms, and I'm excited to get into it more.

Adobe Firefly is currently in beta, which means that in order to get access you have to apply to be on their waitlist. I applied two days ago and received access yesterday, so it only took one day. Applying is quite easy: you just go to adobe.com and search for "Adobe Firefly beta". What you can see here on the page is a demonstration of the text-to-image prompt, which generates an image that you'll eventually be able to edit further. Not all of these features are currently available, but I'll get into that more later. For now, let's take a look at what the program is like right now.

Adobe Firefly is currently a web-based interface at firefly.adobe.com. In the future, they have said it will be integrated into other Adobe software like Photoshop and Illustrator, and they will also be bringing in video, so there will be a lot more potential for using this technology. At this time there are only two features available to select from: text to image and text effects. They have "Recolor vectors" coming soon, plus a lot of potential features that are in exploration, which I'll get into later in the video.

I think this is a good point to talk about what generative AI is. Generative AI uses what's called a diffusion model to generate unique output from a text input. Here you can go through their gallery and see the different prompts that were used, the text input, and the image output that was produced. Here's "a muted wonderland studio". This one jumps out: "a futuristic-inspired border town with neon lights on the edge of a calm reflecting lake on Mars, with bioluminescent plants and rocks at night". Effectively, you're able to turn text into images. That seems obvious, but there's a new language being developed here called prompting, where you prompt the AI to produce an output. To learn more about prompting, it can be helpful to scroll through the gallery and see if any images inspire you; that can give you an indication of the type of prompt you're looking for.

One thing Adobe has done differently from other generative AI models like Midjourney and DALL·E is that Adobe trained their model on a dataset of Adobe Stock images, licensed work, and public domain content where copyright has expired. So one of the major differences between Adobe and the other platforms is that all of the images used to train the AI model came from free-to-use or licensed content. Why does this matter? A lot of people feel hesitant to get involved with generative AI art because other models were trained on data scraped from the internet as a whole, including a lot of images and works that artists did not necessarily give permission for. There has been some controversy over this topic, and honestly it's one of the reasons I haven't played around much with the other AI software, and why I was so excited to get started with Adobe Firefly. I think we're going to continue developing better AI ethics and better internet practices to help protect creators, and I'm glad to see Adobe taking steps now to create a program that considers the intellectual property rights of the creators they're trying to help.

Something I'm interested in using Adobe Firefly for is creating images for the blog posts on my website that are more tailored to the specific content I'm talking about. Rather than going to Pixabay or Adobe Stock and using some generic image that someone else generated or photographed, I can now generate my own image in a way that more accurately reflects the vibe of the content I'm trying to share. So here I'm going to search for "a 3D image of the earth" and click Generate. You'll notice that prompt was too short; it let me know there's probably not enough detail to generate the output I'm looking for. So I can go and modify it and see how the diffusion model generates a different output. Wow. You can see the detail is actually quite good. It did a really good job showing both day and night on the Earth, and a 3D view from space. What's happening here is that, based on this prompt, the diffusion model generates four different outputs.
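Firefly's model is proprietary, so this is only a conceptual sketch, but the core idea of a diffusion model producing several candidates from one prompt can be illustrated in plain Python: start each candidate from different random noise, then iteratively "denoise" it toward a prompt-conditioned target. The `prompt_target` helper here is a made-up stand-in for a real text encoder, not anything Firefly actually does.

```python
import hashlib
import numpy as np

def prompt_target(prompt: str, size: int = 8) -> np.ndarray:
    """Toy stand-in for a text encoder: derive a deterministic
    'target image' from the prompt text (NOT a real model)."""
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.random((size, size))

def generate(prompt: str, seed: int, steps: int = 50) -> np.ndarray:
    """Toy reverse diffusion: begin with pure noise and move a little
    closer to the prompt-conditioned target at every step."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal((8, 8))   # start from random noise
    target = prompt_target(prompt)
    for _ in range(steps):
        img += 0.1 * (target - img)     # denoise toward the target
    return img

# One prompt, four different seeds -> four different candidates,
# like the four thumbnails Firefly shows per generation.
candidates = [generate("a 3D image of the earth", seed=s) for s in range(4)]
```

Each candidate converges toward the same prompt target but keeps traces of its own starting noise, which is (loosely) why the four outputs are related but not identical.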
That gives me the option to select whichever one jumps out at me the most. There are a few other options here that I think are among the most interesting features of Adobe Firefly. One of the main ones is that instead of having to modify everything through the prompt, I have options to modify the output using different filters along the side. For example, I can change the aspect ratio. It's interesting how modifying the aspect ratio completely changed the perspective, so I'm going to go back to 1:1. What you just saw there, too, is that it didn't delete my existing output just by changing the aspect ratio; I'm able to toggle back and forth without the output being modified. So if you change something accidentally, or you realize afterward that you preferred the previously generated output, you can always toggle back between them. It's interesting here, too, that there are somehow two Earths being generated, so it clearly didn't understand the prompt. That means I can either select one of these images or modify my prompt to make it clear that I'm looking at the Earth from space.

Another great feature is that I'm able to modify the content type. I can switch between photo, art, graphic, or no particular content type. This is again different from other generative AI platforms, where you have to type "a photo" or "ultra realistic photo" into the prompt itself to get that type of output. What's nice is that with this content type setting I can toggle between the different styles and see how they compare without losing my previously generated output.

Now, going down to styles, you can see there are many different ways to modify the output: different style formats, themes, techniques, effects, materials, and concepts. With these, however, it's actually modifying the prompt itself, not the existing output. If I want to see this as a science fiction version, for example, you can see it added "science fiction" to the bottom left here, and I have to regenerate the output.
That means I'm going to lose all of the existing output that I already have. Oh, that's nice. I really like this one in the top right, where you can see some depth to the clouds, with the sun on one side of the Earth and night on the other. I personally think that's really cool. And if I don't particularly like the other outputs, perhaps they look a little strange or the AI messed them up a bit, what I can do is hover over one and go to "Show similar results". What this does is keep the existing image and modify the diffusion of the other three outputs to be as close to this one as possible. I can do that continuously: I can keep refreshing as many times as I want and keep trying different outputs based on what I'm looking for. For example, maybe I like this one now; I could click here and use this image as the seed for generating the other outputs. This is actually a really powerful tool. In Midjourney you would have to generate a seed, control the seed, and continuously input the image, so Adobe's interface here is actually quite excellent. It makes it really simple for a new user, a beginner to AI generation, to modify their input in a consistent way and iteratively improve the quality of their output.

Let's say that despite all of this output, I'm still most interested in this image here. I can click Download. The first time you click Download, you can see that Adobe applies their content credentials. That content credential that popped up is a way for Adobe to indicate how an image was created. For example, it tagged this Firefly output with an indicator that it was created using generative AI. This is to make sure people are not confusing AI images with photos, for example; it's a way to introduce more authenticity into the internet. If I open it up, you can see that because this is a beta and not for commercial use, they have included a watermark, but the resolution is actually quite good.

One thing I wanted to show you is that you can take the output you like the most and use that image in the prompt itself as a reference image. You can see below that the image has become part of the prompt, and all of the outputs now look a lot more similar to that initial input. What's interesting, too, is that you can slide between how much you want the reference image to impact the output versus how much you want the text prompt to impact it. If I move closer to the reference image side, it regenerates the output and makes the results look a lot more similar. This is a really great way to fine-tune an output you already like without having to start from scratch. If you want to modify things a bit more, you can move toward the prompt side, which introduces more diffusion; you can see the results are quite a bit different now.

One thing I want to point out is that, unfortunately, at this time, if you modify the aspect ratio while using a square reference image (for example, a 1:1 input that you try to make widescreen), the output gets a little messed up because it's trying to stretch to fill the rest of the image. If you're using a reference image as part of your prompt, you should stick to the same aspect ratio.

Let's see. If I want to add a bit more detail, say make it hyperrealistic, I can add a theme on top of the style I've already applied, on top of the content type. You can stack all of these themes on top of each other and regenerate the image, continuing to use the reference image as the initial prompt. In summary, between the aspect ratio, the content type, and the styles, I think it's really cool that Adobe Firefly lets you iteratively modify the generated output. This adds a lot more fine-tuning control for creators and will make it a lot easier for people to get involved in AI generation in the first place.
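The reference-image slider described above can be thought of, very loosely, as a blend weight between two influences: the reference image and the text prompt. This toy numpy sketch only shows that weighting idea; Firefly's real conditioning is certainly more sophisticated, and `blend_conditioning` is my own hypothetical name.

```python
import numpy as np

def blend_conditioning(ref_image: np.ndarray, text_target: np.ndarray,
                       slider: float) -> np.ndarray:
    """Toy model of the reference-image slider: slider=0.0 means
    'all text prompt', slider=1.0 means 'all reference image'."""
    slider = min(max(slider, 0.0), 1.0)   # clamp to the slider's range
    return slider * ref_image + (1.0 - slider) * text_target

ref = np.full((8, 8), 0.8)   # stand-in for the reference image
txt = np.full((8, 8), 0.2)   # stand-in for the text-prompt target

near_ref = blend_conditioning(ref, txt, 0.9)  # stays close to the reference
near_txt = blend_conditioning(ref, txt, 0.1)  # more room for diffusion
```

Moving the slider toward the reference pulls every output toward that image; moving it toward the prompt leaves more of the result up to the text and the noise, which matches the behavior seen in the UI.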
Creators will be able to really apply their own style to the output through different combinations of filters, prompts, and reference images. Now let's look at the last set of options: ways to modify the color and tone, the lighting, and the composition. Similar to the themes and styles, you apply these on top of the image. It's worth noting here that if you keep the reference image in, it won't necessarily apply the color and tone adjustment to the output, because it's pulling in too much from the reference image. So if I want vibrant color, for example, I have to shift the weighting for the generation more toward the prompt and away from the reference image. You can see I modified the output and it adds a little bit of color. If I want to change that to pastel, I can swap it out, perhaps remove the reference image, and see how the AI generates the output. It looks a little pastelly; honestly, not super great. Perhaps that's not something that works well with my prompt. Even here, it introduced some black and white but kept a lot of the color. That's probably because I have too many styles applied on top of each other; I can remove a few of them and see how it looks. That's a little better, so there's a bit of color now.

To show the last features, the lighting and the composition, let's try a different prompt. That actually looks quite lovely. You can see I generated an almost photorealistic version, kind of a pastel graphic-design one, another photo-based one with a really blurry background, and a more animated version. If I want to modify the lighting, I can make it golden hour, for example: apply the filter below and click Generate. You can see it modified the sky; it's got the golden-hour lighting in the background. If I want to change the composition, perhaps make it a close-up, you can see how it zoomed in on the fire. If I change the composition again, perhaps to "shot from above", it swaps out the previous composition and modifies the image to show it shot from above the fire.

Now let's take a look at a few more of the styles. For example, let's say I want to see it as layered paper. You can see how it completely changes the entire style of the output. This is a really powerful way to flip between different styles depending on the type of art you'd like to use in your blog post, on social media, or however you'd like to use it. That gives a lot of power to creators to modify their content in a really easy way, to help keep up with the demands of the algorithms and to customize content in different ways.

The last effect I want to talk about is the text effect. This is where you take a letter or a word and apply a prompt to fill in the letters. It's really fun for graphic design: if you have a particular word you're trying to create but you don't feel like spending hours drawing it or grabbing content fill from different places, you can use Adobe Firefly as a content fill for the lettering. Here the prompt is "layered colorful socks". Adobe always starts with "Firefly" as the base word if you click an existing prompt suggestion, but you can modify the word here as well. Similar to text to image, you can go through and modify all the different variations of the text output to iteratively improve the style you're looking for. You can see below it has four different versions of the first letter of the word, and I can click through to see how each would be applied to the rest of the word.

Let's try something else. This is actually really cool: not only did it apply the campfire to the lettering, it also extrapolated and put in some stars and what look like sparks coming off the wording. So it doesn't just fill in the text itself; it also applies graphics that go outside the bounds of the letters. I actually like that one, so let's download it. You can also submit to the Firefly gallery if you want other people to be able to see your prompts and what you're doing, to provide inspiration for them, like we talked about at the beginning of the video. What's nice is you can also change the font. I actually like this one a lot more; it feels more like my vibe. It's a little busy, though, so maybe I want to add some minimalism. You can see there's a bit more gray in the letters, it seems to have removed some of the elements within the text, and it expanded the stars to continue a little further outward from the letters. If you're trying to figure out what you want your logo, or your brand's wording, to look like, you can experiment here and find something a lot more customizable than some of the logo generation platforms I've seen in the past.

Now let's take a look at some features that are up and coming. One of the features I'm most excited for is the ability to teach Adobe Firefly your own object or style. What that means is that I could, for example, take all of my photos, feed them into Adobe Firefly, and train a version of the model on my own images, so that I can then use the generative AI to build on my existing style. I think this is a really powerful tool for customizing my content output in a way that still retains my unique style. When this feature comes out, I'll make another tutorial video.

Another really interesting one is text to vector. What that means is you'd be able to select specific elements in the image and move them around in Photoshop, for example. You'd also be able to animate them: instead of painstakingly drawing all of these different elements separately, you could use AI to generate some stock elements for your animations and then move them around almost instantly. Another interesting feature is being able to use something like the Content-Aware Fill function in Photoshop to extend the aspect ratio of your image so you can use it in different ways. You're also able to use inpainting, which means you can cut out a particular element and have the generative AI model fill in only the gap in the area you've removed; that's for when you like most of the image but something seems a little off and you just want to modify a portion of it. You're also able to generate patterns. 3D models can be converted to images and then modified to have different styles. You can create a brush based on an image and then change the way you paint using that image.
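Of those upcoming features, inpainting is the easiest to picture in code: keep every pixel outside the mask and regenerate only the masked region. A toy numpy version, where the "generated" fill is just the image mean standing in for real model output, and `inpaint` is my own illustrative helper:

```python
import numpy as np

def inpaint(image: np.ndarray, mask: np.ndarray,
            generated: np.ndarray) -> np.ndarray:
    """Replace only the masked region (mask == True) with generated
    content, leaving the rest of the image untouched."""
    result = image.copy()
    result[mask] = generated[mask]
    return result

img = np.arange(16.0).reshape(4, 4)   # stand-in for a photo
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                 # the area we "cut out"
fill = np.full((4, 4), img.mean())    # stand-in for AI-generated fill

out = inpaint(img, mask, fill)
```

The key property is that everything outside the mask is preserved exactly, which is what makes inpainting useful when you like most of an image and only want to fix one region.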
You can take different styles, like sketching, and turn them into a colored image, and you can also create templates. Looking at Adobe's beta page, there are a few more features. It talks about applying your unique style to have customized diffusion, and you'll also be able to naturally mix different photos together: you can take two images, two of your photos or previously existing works, put them together, and have the AI blend them in a way that looks natural. You're also able to generate different style variations of 3D objects, which is a pretty powerful tool. And you're able to modify video, which is something I'm really excited for as well. If you're making a film or something for social media, and you've got your footage but want to apply a different vibe based on whatever you're trying to create, you can change the mood without searching for hours for the perfect stock footage. You can also use it for graphic design: you can generate a high-quality vector variation based on a drawing, so you can do the sketching yourself and have Adobe Firefly convert it into something modifiable for your own brand. Again, these are all features coming in the future, but together they make up a large part of why I'm so excited to continue using Adobe Firefly and experimenting with it.

Thank you so much for watching this video; I hope you found it informative and inspiring. The more I look into AI, the more excited I get about how easily creators are going to be able to leverage this technology to augment their creative output. I know it can seem like a scary concept to have a computer generating art like this, but there are many ways creators can customize the output while still maintaining their style, and I'll get into that more in another video. If you have any questions or concerns about Adobe Firefly, or generative AI more generally, please feel free to leave them in the comment section below. If you found this video helpful, I would really appreciate it if you could like and subscribe. I'll be making more AI tutorials in the future and would appreciate any feedback, so that I know how to help people the best that I can. Thanks again for watching, and have a great day.