When it comes to creating images in a particular style with AI, or generating consistent images with Stable Diffusion, LoRA models are one of the best ways to make your life easier. But to generate that type of image, a model first needs to be trained on a set of images to get the desired results.
LoRA is short for “Low-Rank Adaptation” and is a way to fine-tune Stable Diffusion models.
There are other methods for tuning models, such as Dreambooth, Hypernetworks, and Textual Inversion, but they consume a lot of GPU power and the end results are sometimes not up to the mark.
This is where LoRA comes in: LoRA models are far smaller than the alternatives, and the results are often better.
To use LoRA models, you need a base Stable Diffusion checkpoint such as Stable Diffusion 1.5, Stable Diffusion XL, or the AnyLoRA checkpoint (available on CivitAI).
Apart from training your own, platforms like Hugging Face and CivitAI list various pre-trained LoRA models that you can use simply by downloading them. You will be surprised how small some LoRA models are compared to full checkpoints like Dreamshaper, which is about 5.5 GB. The size mainly depends on how long the model has been trained and on the datasets used.
In a LoRA model, only a small part of the network is trained rather than the whole thing, yet that small part has a big influence on the quality of the generated images. The technique adds low-rank updates to the network's existing weights, so the number of trainable parameters is much lower and the GPU power needed for training drops accordingly, which is what makes the approach so attractive.
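To make the parameter savings concrete, here is a minimal NumPy sketch of the low-rank update idea behind LoRA; the matrix size and rank below are made-up illustration values, not taken from any particular checkpoint.

```python
import numpy as np

# Toy dimensions for one attention weight matrix (illustrative only).
d_out, d_in = 768, 768
r = 4                                # LoRA rank, far smaller than d_out / d_in

W = np.random.randn(d_out, d_in)     # frozen pretrained weight, never updated
A = np.random.randn(r, d_in) * 0.01  # small trainable matrix
B = np.zeros((d_out, r))             # small trainable matrix, starts at zero

W_effective = W + B @ A              # weight actually used when generating

full_params = d_out * d_in           # what a full fine-tune would update: 589,824
lora_params = r * (d_in + d_out)     # what LoRA trains instead: 6,144
print(full_params, lora_params)
```

Only the small matrices A and B end up in the downloaded LoRA file, which is why these files are so much smaller than a full checkpoint.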
There are multiple platforms where you can download LoRA models, listed below:
(1) Hugging Face: It is one of the largest communities for hosting, learning about, and testing machine learning models.
(2) CivitAI: This platform is especially famous for Stable Diffusion models, and you can easily download the various models listed there.
(3) GitHub: This well-known platform has proved to be a goldmine for developers. It lets you sync and push your projects to its servers, hook them into CI/CD pipelines, and keep repositories public for sharing when working in a team or private when needed.
Popular Stable Diffusion projects like Roop and ReActor, and even WebUIs like Automatic1111 and Fooocus, are managed through Git, so you can also use this platform to host and search a wide variety of fine-tuned LoRA models.
(4) SeaartAI: This is an alternative to CivitAI that provides similar facilities and gives you more options.
For illustration, we show the process of downloading and using a model from CivitAI; the rest of the process is the same no matter which platform you use.
– First, search for and select any LoRA model and download it.
After downloading, move the file to “stable-diffusion-webui/models/Lora/“. Whether you are working locally or in the cloud, it is recommended to use this exact path; otherwise the WebUI (or the Stable Diffusion library you use, such as Diffusers) cannot find the model and it will not work.
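If you prefer doing this from a script or notebook rather than moving files by hand, a few lines of Python will place the file; the download location and file name below are only placeholders for whatever you actually downloaded.

```python
import pathlib
import shutil

# Placeholder paths: adjust to where your browser saved the LoRA file
# and to where your stable-diffusion-webui folder lives.
downloaded = pathlib.Path("~/Downloads/ghibli_style_offset.safetensors").expanduser()
lora_dir = pathlib.Path("stable-diffusion-webui/models/Lora")

lora_dir.mkdir(parents=True, exist_ok=True)                # create the folder if missing
shutil.move(str(downloaded), str(lora_dir / downloaded.name))
print("LoRA files in place:", [p.name for p in lora_dir.glob("*.safetensors")])
```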
Now open your Stable Diffusion WebUI. For illustration we are using Automatic1111: click on the checkpoints option and the available base models will be listed. These checkpoints serve as the base for LoRA models.
Click on the red button to open the panel of different model types, where a Lora tab will also be available.
Tabs will appear for Textual Inversion, Dreambooth, Lora, and Hypernetworks. Select the “Lora” tab and you will see the LoRA model we just downloaded. Click on the model, then take the trigger text given on its CivitAI page and add it to the positive prompt box.
So the technique is this: whenever you want to use a specific LoRA model, put its related text (the trigger word along with the LoRA weight tag) into the positive prompt box together with the usual prompt for the image you want. For example:
“<lora:ghibli_style_offset:1>” has been added automatically to the positive prompt box. Here, “ghibli_style_offset” is simply the file name of the model we downloaded, and the number “1” is the model’s weight, which decides how much influence the LoRA has (the range is 0–1).
Now copy the trigger text of your model from CivitAI and add the text given in its instructions, “ghiblistyle” in our case (it will be a different text for other models). After that, we can add any prompt to generate an image using this model.
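If you want to reproduce the same step outside the WebUI, the diffusers library can load the LoRA file directly. This is only a sketch: the base checkpoint ID, the LoRA file name, and the prompt are assumptions for this example, and it requires a recent diffusers version with LoRA support installed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed SD 1.5 base checkpoint; any compatible base model works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA file from the same folder the WebUI scans (assumed file name).
pipe.load_lora_weights(
    "stable-diffusion-webui/models/Lora",
    weight_name="ghibli_style_offset.safetensors",
)

# The trigger word goes into the prompt; the scale plays the role of the ":1"
# weight in the "<lora:ghibli_style_offset:1>" tag.
image = pipe(
    "ghibli style, a quiet village in the mountains, soft morning light",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 1.0},
).images[0]
image.save("ghibli_lora.png")
```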
You can also copy the entire generation settings from a particular model's profile by clicking the “Copy Generation Data” button.
Now paste it into the positive prompt box and click the blue “arrow” button to apply all the settings.
If you get an error, recheck that the LoRA model name is used correctly and make sure there are no spaces inside the LoRA tag.
You can also use multiple LoRA models in one go. First, download your additional LoRA model from any of the platforms above.
For illustration, we downloaded another LoRA model, “Alexandra Daddario”, from CivitAI. Then put the LoRA model into “stable-diffusion-webui/models/Lora/“ as before.
Then just restart Automatic1111.
Click the red button again and load the new LoRA model. To use both LoRA models and generate with their respective weights, click on each of them and their tags will be added automatically. Then add their trigger words to the prompt box as shown in the image above.
In our case we used “alexandradaddario” (for the Alexandra Daddario model) and “ghibli style” (for the Ghibli style model); it may be different in your case.
Keep in mind how much weight you want each model to carry in your art. The weight ranges from 0 (minimum) to 1 (maximum). For example, if you want a cartoonish image of Alexandra Daddario, you can set the weight to 0.8 for the Alexandra Daddario model and 0.2 for the Ghibli style, so the actual LoRA tags will be “<lora:ghibli_style_offset:0.2> <lora:alexandradaddario_offset:0.8>“.
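The same weighting can be sketched in diffusers by loading both files as named adapters and splitting the weights 0.8 / 0.2; the file names and adapter names here are assumptions for this example, and set_adapters requires a diffusers version with the PEFT integration installed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

lora_dir = "stable-diffusion-webui/models/Lora"

# Load each LoRA file under its own adapter name (file names are assumed).
pipe.load_lora_weights(lora_dir, weight_name="alexandradaddario_offset.safetensors",
                       adapter_name="daddario")
pipe.load_lora_weights(lora_dir, weight_name="ghibli_style_offset.safetensors",
                       adapter_name="ghibli")

# Same split as "<lora:alexandradaddario_offset:0.8> <lora:ghibli_style_offset:0.2>".
pipe.set_adapters(["daddario", "ghibli"], adapter_weights=[0.8, 0.2])

image = pipe(
    "alexandradaddario, ghibli style, portrait, detailed, soft lighting",
    num_inference_steps=30,
).images[0]
image.save("multi_lora.png")
```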
Then add your prompt as usual, click the “Generate” button, and you will get an image combining the effects of multiple LoRA models.