Categories: Tutorials

Automatic1111 Setup Tutorial: A Comprehensive Guide

Before you can generate incredible art in Automatic1111, there is a fair amount of setup to do. We have observed in the community that people struggle a lot to find the right settings in this WebUI. Although it is more popular than other Stable Diffusion WebUIs, many people find it a little complicated.

But don't worry: we have various tips and tricks that can streamline your workflow and make your art generation feel godlike.

Basic Functions of Automatic1111:

1. CLIP: CLIP (Contrastive Language-Image Pre-Training) is the component that interprets text prompts into something the image generator can understand.

2. Datasets and Models: A dataset is a collection of images used to create a model, which is the base template for generating images.

3. Fine-tuning Models: Starting with a base model, you can fine-tune it by adding weighted images to narrow the parameters (e.g., only black dogs with party hats).

4. LoRAs (Low Rank Adaptation): LoRAs control certain aspects of the image, such as style, character type, or theme. They are add-ons that run alongside the model.

5. Merging Models: Combining multiple models by assigning weights to each, allowing for more functionality (e.g., adding cats to a dog model).

6. Pruned vs. Unpruned Models: Pruned models are smaller and faster because redundant weights have been removed, while unpruned models are larger and slower but better for training and merging.

7. Textual Inversions (Embeddings): These train a new keyword from a small set of images, expanding the model's capabilities without fine-tuning the model itself.

8. CFG Scale: The CFG (Classifier Free Guidance) scale controls how strongly the prompt influences the output of Stable Diffusion models.

9. VAEs: VAE stands for Variational Autoencoder, which handles compression, denoising, style transfer, and character enforcement.

10. Tiled Diffusion and ControlNet: Tiled Diffusion ensures seamless tiling, while ControlNet transfers poses and depth maps from provided images.

11. Image-to-Image: Using an existing image as a guide for generating a new one; the result is not an exact copy. This helps when modifying a certain part of an image.

12. Negative Prompts: Keywords for things you don’t want in the image, with bracketing and weights for emphasis.

13. Scripts: Additional tools, such as the X/Y plot for exploring different settings (CFG, steps, etc.) and Inpainting for modifying or extending images.

14. Styles: A built-in feature in Automatic1111 for saving and loading frequently used prompts and settings.
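Model merging (point 5 above) is easiest to understand as arithmetic on the checkpoints' weights. Here is a minimal, dependency-free sketch of the weighted-sum merge that Automatic1111's Checkpoint Merger tab performs; plain floats stand in for the real tensors, and the toy "dog" and "cat" state dicts are made up for illustration:

```python
# Minimal sketch of a weighted-sum checkpoint merge:
# merged = A * (1 - m) + B * m, applied key by key.
# Real checkpoints are dicts of tensors; floats keep the idea dependency-free.

def merge_checkpoints(model_a, model_b, multiplier):
    """Blend two state dicts with a single multiplier m in [0, 1]."""
    merged = {}
    for key in model_a:
        if key in model_b:
            merged[key] = model_a[key] * (1 - multiplier) + model_b[key] * multiplier
        else:
            merged[key] = model_a[key]  # keep weights unique to model A
    return merged

# Toy "state dicts" standing in for a dog model and a cat model.
dogs = {"layer.weight": 1.0, "layer.bias": 0.0}
cats = {"layer.weight": 3.0, "layer.bias": 2.0}

blended = merge_checkpoints(dogs, cats, 0.5)
print(blended)  # {'layer.weight': 2.0, 'layer.bias': 1.0}
```

At multiplier 0.0 you get model A back unchanged; at 1.0 you get model B (for the keys both share), and values in between blend the two behaviors.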

Now, to learn the basics of prompting in Stable Diffusion, you should definitely check out our tutorial on mastering prompt techniques in Stable Diffusion.

Platforms for downloading the perfect models:

1. First of all, to follow the recommended setup, you need to install Automatic1111 on your machine.

2. Before using any model, keep one thing in mind: the better the model you select, the better your results will be. One of the best communities for downloading models is CivitAI, where trained models, fine-tuned on specific datasets, are listed by the community.

Not sure which one will be best for you?

Simply select any model and check how many stars it has, along with its popularity, comments, and number of downloads.

Some of the popular models you can try:

  • RevAnimated
  • DreamShaperXL(SDXL)
  • RealisticVision5.1(SD1.5)
  • JuggernautXL(SDXL)
  • MidjourneyMimic(SDXL)
  • Magicmix

For a better experience, choose the perfect model for your purpose and art style from platforms like Hugging Face, CivitAI, etc. You are free to use these models; the sky is the limit. Not only that, you can also use them in commercial applications (but consulting the respective developer about licensing is best).

How to work with models:

1. Select any model (for example, we have chosen RealisticVision5.1) and you will find relevant instructions listed with the recommended parameters needed to get the perfect output.

2. Scroll down a little and you will find all the settings, such as negative prompts, sampling method, CFG scale range, etc.

3. Using LoRA models with their weights can be confusing if you don't know how they work, but we have made a separate tutorial on LoRA, as well as one on training with LoRA. Be sure to check those out.
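Once you have the recommended parameters from a model card, you can apply them either in the UI or programmatically. Automatic1111 exposes an HTTP API when launched with the `--api` flag; below is a hedged sketch of assembling a request body for its `/sdapi/v1/txt2img` endpoint. The prompt text and parameter values are illustrative stand-ins, not the exact numbers from any model card, so substitute whatever the card on CivitAI recommends:

```python
import json

# Sketch of driving Automatic1111 through its HTTP API (start the WebUI with
# the --api flag). All values below are illustrative placeholders for the
# recommendations on the model card you are following.

def build_txt2img_payload(prompt, negative_prompt, steps=25, cfg_scale=5.0,
                          sampler_name="DPM++ 2M Karras", width=512, height=768):
    """Assemble the JSON body for POST /sdapi/v1/txt2img."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "sampler_name": sampler_name,
        "width": width,
        "height": height,
    }

payload = build_txt2img_payload(
    prompt="RAW photo, a dog wearing a party hat, 8k uhd",
    negative_prompt="(deformed:1.3), cartoon, drawing, blurry",
)
print(json.dumps(payload, indent=2))

# To actually generate, send it to a running instance, e.g. with requests:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

The interactive API reference at `http://127.0.0.1:7860/docs` on your own instance lists the full set of accepted fields.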

Settings for Automatic1111:

1. VAE (Variational Autoencoder)- To get the SD VAE and Clip skip controls, navigate to the "Settings" tab, select "User Interface" in the left panel, scroll down a little, and find the Quicksettings list. After selecting the VAE option, press "Apply settings" and then "Reload UI" for the change to take effect.
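The same result can be pinned directly in the UI configuration: the Quicksettings list is stored in the `config.json` file in the WebUI folder. A hedged sketch of the relevant fragment follows; the option keys shown are the internal names Automatic1111 uses for the checkpoint, VAE, and Clip skip controls, so verify them against your own `config.json` before editing:

```json
{
  "quicksettings_list": ["sd_model_checkpoint", "sd_vae", "CLIP_stop_at_last_layers"]
}
```

With that in place, the checkpoint, VAE, and Clip skip selectors appear at the top of the UI after a reload.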

2. Hires Fix- This option helps you upscale and fix your art at 2x, 4x, or even 8x. For example, SD1.5 generates good images at 512 by 768; to go larger, you need better upscaling models like 4x UltraSharp, 4x NMKD Superscale, and 8x NMKD Superscale. A good Denoising Strength range is 0.2 to 0.36 (but you can experiment with a little tweaking).

3. Upscalers- We have listed some widely used models that you can select and download as required:

  • 4x UltraSharp upscaler
  • 4x NMKD Superscale
  • 8x NMKD Superscale

You can also test different versions and choose whichever gives you the best experience in terms of speed, quality, and effectiveness.

You can also use the MultiDiffusion upscaler, which gives you the power to upscale your image up to 8x.
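If you are generating through the API instead of the UI, hires fix is just a few extra fields on the same txt2img payload. This is a sketch under the assumption that your build exposes the `enable_hr`, `hr_scale`, `hr_upscaler`, and `denoising_strength` fields (recent Automatic1111 builds do; confirm against `/docs` on your instance), and the upscaler name must match what is installed in your setup:

```python
# Sketch of the hires-fix portion of a txt2img API request. Field names
# (enable_hr, hr_scale, hr_upscaler, denoising_strength) follow the payload
# recent Automatic1111 builds expose; check /docs on your instance to confirm.

def add_hires_fix(payload, scale=2, upscaler="4x-UltraSharp", denoising=0.3):
    """Enable hires fix on an existing txt2img payload (0.2-0.36 denoising works well)."""
    assert 0.0 <= denoising <= 1.0, "denoising strength must be in [0, 1]"
    payload.update({
        "enable_hr": True,
        "hr_scale": scale,            # 2x, 4x, or 8x upscale
        "hr_upscaler": upscaler,      # must match an installed upscaler's name
        "denoising_strength": denoising,
    })
    return payload

base = {"prompt": "a castle at sunset", "steps": 25, "width": 512, "height": 768}
print(add_hires_fix(base))
```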

4. Sampling Methods and Steps- Diffusion models start from random noise and progressively refine it into the final image. Each pass that produces a clearer image from the previous one is called a sampling step, and the step count typically ranges from 1 to 150.

There are multiple sampling methods for the denoising process, and the number of steps needed varies with their working mechanisms. See the table below for the different sampler methods and their relative speeds:

Sampler Type | Relative Speed
Euler | Fast
Euler a | Fast
Heun | Medium
LMS | Fast
LMS Karras | Fast
DDIM | Fast
PLMS | Fast
DPM2 | Medium
DPM2 a | Medium
DPM2 Karras | Medium
DPM2 a Karras | Medium
DPM++ 2S a | Medium
DPM++ 2S a Karras | Medium
DPM++ 2M | Fast
DPM++ 2M Karras | Fast
DPM++ SDE | Medium
DPM++ SDE Karras | Medium
DPM fast | Fast
DPM adaptive | Slow
UniPC | Fast
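When scripting batch runs, it can be handy to have the table above as data so you can shortlist samplers by speed. A small sketch; the speed labels are the rough relative ratings from the table, not measured benchmarks:

```python
# The sampler table as a lookup, plus a helper to shortlist samplers by speed.
# Labels are the rough relative ratings from the table, not benchmarks.

SAMPLER_SPEED = {
    "Euler": "Fast", "Euler a": "Fast", "Heun": "Medium",
    "LMS": "Fast", "LMS Karras": "Fast", "DDIM": "Fast", "PLMS": "Fast",
    "DPM2": "Medium", "DPM2 a": "Medium", "DPM2 Karras": "Medium",
    "DPM2 a Karras": "Medium", "DPM++ 2S a": "Medium",
    "DPM++ 2S a Karras": "Medium", "DPM++ 2M": "Fast",
    "DPM++ 2M Karras": "Fast", "DPM++ SDE": "Medium",
    "DPM++ SDE Karras": "Medium", "DPM fast": "Fast",
    "DPM adaptive": "Slow", "UniPC": "Fast",
}

def samplers_by_speed(speed):
    """Return all sampler names with the given relative speed rating."""
    return sorted(name for name, s in SAMPLER_SPEED.items() if s == speed)

print(samplers_by_speed("Slow"))  # ['DPM adaptive']
```

On a running instance started with `--api`, you can cross-check the names against `GET /sdapi/v1/samplers`, which lists the samplers your build actually offers.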

5. Storing various models (checkpoints) and files:

  • Save your negative embeddings in the “stable-diffusion-webui/embeddings” folder.
  • Stable Diffusion base models for normal image rendering go in the “stable-diffusion-webui/models/Stable-diffusion” folder.
  • To save LoRA models, use the “stable-diffusion-webui/models/Lora” folder.
  • For upscale models, use the “stable-diffusion-webui/models/ESRGAN” folder.
  • For VAEs, choose the “stable-diffusion-webui/models/VAE” folder.
  • If working with ControlNet, save those models in the “stable-diffusion-webui/extensions/sd-webui-controlnet/models” folder.
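The folder layout above can be captured in a small helper that routes a downloaded file to the right place. A sketch; the `add_detail.safetensors` filename is a made-up example, and the paths mirror the list above (a default install; extensions can add more folders):

```python
from pathlib import Path

# Where each file type lives, relative to the stable-diffusion-webui install
# folder, mirroring the storage list above (default layout).

MODEL_FOLDERS = {
    "checkpoint": "models/Stable-diffusion",
    "lora": "models/Lora",
    "embedding": "embeddings",
    "upscaler": "models/ESRGAN",
    "vae": "models/VAE",
    "controlnet": "extensions/sd-webui-controlnet/models",
}

def destination(webui_root, model_type, filename):
    """Build the path a downloaded file should be moved to."""
    try:
        subdir = MODEL_FOLDERS[model_type]
    except KeyError:
        raise ValueError(f"unknown model type: {model_type!r}")
    return Path(webui_root) / subdir / filename

# Hypothetical example filename for illustration only.
dest = destination("stable-diffusion-webui", "lora", "add_detail.safetensors")
print(dest.as_posix())  # stable-diffusion-webui/models/Lora/add_detail.safetensors
```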

Important points when working with models:

Base models, LoRA models, negative embeddings, and ControlNet models all need to belong to the same version family to be used together for image generation.

For example, if your checkpoint is a Stable Diffusion XL (SDXL) base 1.0 model, then only SDXL LoRA models, SDXL negative embeddings, and their relevant settings will work with it and generate optimized results.

For a better explanation, we have a tabular format below:

Base Model (checkpoint) | LoRA model | Neg. embedding | ControlNet model | Parameters
SDXL | SDXL LoRA | SDXL neg. embedding | SDXL ControlNet | SDXL parameters
SD1.5 | SD1.5 LoRA | SD1.5 neg. embedding | SD1.5 ControlNet | SD1.5 parameters
SD2.1 | SD2.1 LoRA | SD2.1 neg. embedding | SD2.1 ControlNet | SD2.1 parameters

Now, you must be wondering: where do you get these relevant settings?

Well, whenever you want to use any model from CivitAI, you can find those parameters, configurations, and embeddings in its description section.

But VAE and Hires upscaler models can be used with any of the above-mentioned configurations. There are no restrictions.
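The compatibility rule in the table, plus the VAE/upscaler exception, can be expressed as a tiny check before you load a combination. A minimal sketch of that rule as data and a function:

```python
# Sketch of the compatibility rule above: LoRAs, negative embeddings, and
# ControlNet models must share the base checkpoint's family, while VAEs and
# hires upscalers are unrestricted.

FAMILIES = {"SDXL", "SD1.5", "SD2.1"}
UNRESTRICTED = {"vae", "hires_upscaler"}

def is_compatible(base_family, addon_type, addon_family=None):
    """Check whether an add-on can be used with a given base checkpoint."""
    if base_family not in FAMILIES:
        raise ValueError(f"unknown base family: {base_family!r}")
    if addon_type in UNRESTRICTED:
        return True  # VAEs and upscalers work with any configuration
    return addon_family == base_family

assert is_compatible("SDXL", "lora", "SDXL")          # same family: OK
assert not is_compatible("SDXL", "lora", "SD1.5")     # mismatched family
assert is_compatible("SD1.5", "vae")                  # VAEs are unrestricted
print("compatibility checks pass")
```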

Conclusion:

Automatic1111 is one of the most popular WebUIs in the community. With its many features, it can feel complicated at first and may leave you banging your head. But putting a little effort into learning it will pay off in this AI era in the long run, letting you leverage the full power of image generation in Stable Diffusion.

If you find any issues with Automatic1111, you can raise them in the issue section of the official GitHub repository.

Published by admage