InvokeAI: Run Stable Diffusion on Google Colab and PC


After using Stable Diffusion for a long time, we started running into frequent disconnects. After researching online, we learned that Google Colab has restricted the use of Automatic1111 with Stable Diffusion because of its heavy computational load.

But we found several good tricks to install and run Stable Diffusion for free, both on Google Colab and on a PC, and in this guide we will show you how to do the same.

InvokeAI is a WebUI for creating impressive AI images with Stable Diffusion. Its simple user interface lets users download and install multiple models with a single click, and like Automatic1111 it can be hosted in the cloud or installed on a local machine.

The frontend UI is built with the JavaScript framework ReactJS, and the interface is designed for professional editors and artist enthusiasts who work on their PCs, mobiles, or tablets.

Some Fantastic Features:

1. It supports .ckpt checkpoint files and Stable Diffusion models, which can be downloaded from Hugging Face.

2. It can easily load Stable Diffusion model versions 2.0, 2.1, XL, and XL Turbo.

3. Simple user interface for tablets and mobiles.

4. Node-based workflows that make image pipelines easier to understand.

5. Customizable pipelines for artists that can be shared instantly.

6. Rich metadata embedded in generated images, which helps during projects.

7. Drag-and-drop support.

Steps to install Invoke AI on Google Colab:


1.      First of all, go to the following Google Colab link to set up the environment:

        https://colab.research.google.com/drive/143_3pv8csybgkKnWyDVCi8bhVo16_5AI?usp=sharing


2.      Go to Runtime in the menu bar and click Change Runtime Type.


        Then select T4 GPU (by default the runtime is set to CPU) and click Save.


3.      Click the Connect button in the top-right corner to connect to the GPU environment.

4.      Now, run the first cell (STEP 1) by clicking the play button and then click Run Anyway (to confirm Google Colab's warning before running the notebook).

        This can take some time because the models are huge and have to be downloaded and installed in the runtime. In our case, it took around 8 minutes.

        Once it finishes, you will see a green check mark confirming that the cell has completed.
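We have not reproduced the notebook's code here, but a STEP 1 cell in Colab notebooks of this kind is normally just a shell command run from the Python cell that installs InvokeAI into the runtime. A minimal illustrative sketch (the actual cell in the linked notebook may differ, and the CUDA index URL is an assumption):

    # Illustrative sketch only – not the exact cell from the linked notebook.
    # Installs InvokeAI (with the xformers extra) into the Colab runtime.
    !pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121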


5.      Now, before running STEP 2, click on the blue link provided in Step 2. There you can check and select whichever models you want to install.

By default, the Stable Diffusion Realistic version is selected. If you want to install another model, simply change its flag from False to True. Here, we are installing the default model.

Now, click the play button to run STEP 2. This will again take some time, like the previous step; in our case, it took around 4 minutes.

Once it finishes, you will see a green check mark confirming that the cell has completed.
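For reference, the model-selection part of STEP 2 in notebooks like this is usually just a set of Python boolean flags, one per model. The variable names below are hypothetical, so check the actual cell before editing it:

    # Hypothetical model flags – names are illustrative only.
    realistic_vision = True    # the default model selected in our run
    dreamshaper = False        # change False -> True to also download this model
    sdxl_base = False          # leave False to skip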



6.      At last, just run the third cell, STEP 3.

7.      Now wait for a moment until you see a link like “https://127.0.0.1…”.

        Just open the first blue link that appears once the model has been installed and hosted.



8.      You now have the new UI provided by Invoke AI. Here, we can set:

·       Positive prompts

·       Negative prompts

·       Number of images

·       Steps

·       CFG scale

·       Type of Stable Diffusion model

1.      Now, if you want to install and use your own model, click on the Model Manager icon (the cube-like symbol) on the left panel of the dashboard.

2.      Select Import Models, then copy and paste the link to your model into the model location text box. We have used a pre-trained model link from the CIVIT.AI website. As an alternative, you can also use the Hugging Face platform to download and use pre-trained models.

        On the CIVIT.AI website, just pick whichever pre-trained model you like, right-click its Download button, select Copy link address, and paste the link into the model location field of Invoke AI.

3.      Select Add Model. Downloading and installing the model will take some time, so just wait for a while. A pop-up message “Model added” will appear. For verification, you can also check under the model section which model has been installed. That’s easy, isn’t it?


Sometimes you may face errors while installing Invoke AI on a Google Colab free account. In that case, you need to switch to the Colab paid version instead.

 

Steps to run Invoke AI on PC

For running Invoke AI locally, you need to keep in mind some requirements and recommendations, which are provided below:

Requirements:

1. Graphics card – NVIDIA GPU with at least 4 GB of VRAM (more is better). In our experience, 6 GB or more works better for SDXL models. On a Mac, an M1 or M2 chip is recommended, although it is a bit slower. For AMD GPUs, use 4 GB of VRAM or more.

2. RAM – minimum 12 GB.

3. Operating system – Windows, macOS, or Linux.

4. Disk space – at least 12 GB for storing trained models.
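If you want to confirm that your machine meets these numbers before installing, a small Python check like the one below works. It assumes the psutil and torch packages are already available in your Python environment:

    import psutil   # total system RAM
    import torch    # GPU / VRAM detection

    print(f"RAM: {psutil.virtual_memory().total / 1024**3:.1f} GB")
    if torch.cuda.is_available():
        gpu = torch.cuda.get_device_properties(0)
        print(f"GPU: {gpu.name}, VRAM: {gpu.total_memory / 1024**3:.1f} GB")
    else:
        print("No CUDA-capable NVIDIA GPU detected")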

Installation:

1. First of all, you need to install Python version 3.10.x and make sure to check the “Add to PATH” option while installing it, so that the environment variables are set up. Python 3.11.x is not supported at the moment.
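To double-check that the right interpreter ended up on your PATH, you can run a quick version check from any Python prompt or script:

    import sys

    # InvokeAI currently targets Python 3.10.x; 3.11 is not supported yet.
    print(sys.version)
    assert sys.version_info[:2] == (3, 10), "Expected Python 3.10.x"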
2. Download and install Visual Studio from its official page.
While installing it, make sure to select the necessary C++ libraries.
3. Go to the InvokeAI GitHub repository (https://github.com/invoke-ai/InvokeAI), move to the “Quick Start” section, and click on the “Latest Release Page”.
4. Download the “InvokeAI-installer…..” zip file by clicking its link, extract it with a zip extractor (WinRAR or 7-Zip), and move into the InvokeAI folder. We are using WinRAR for extraction.
5. After extracting, Windows users will find a file named “WinLongPathsEnabled.reg”. Just open it to enable long paths on Windows.
6. Now you will see a batch file named “install.bat”. Just double-click it to start the installation via the command prompt.
7. A new command prompt will open with the message “Press any key to continue…”. Just press any key to initiate the installation process.
8. Another prompt will ask you to confirm the default directory where the Invoke AI files will be installed. Just press “y”.
9. Now a new message will prompt you to select the suitable option for installing PyTorch. There are 4 options:
– The first option is for machines with an NVIDIA GPU (with CUDA).
– The second is for NVIDIA GPU users (with CUDA, DirectML, and ONNX).
– The third is for users who don’t have a suitable GPU installed (in this case the CPU is used instead).
– The last option is for those who aren’t sure what to choose; it automatically picks the required settings based on the machine’s capabilities.
We are selecting the first option because our machine has an NVIDIA GPU.
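Once the installation in the next step has finished, you can verify that the CUDA build of PyTorch was actually installed by running a short check from a Python shell inside the InvokeAI virtual environment (the installer creates one in the install folder):

    import torch

    print(torch.__version__)                       # CUDA builds usually carry a +cu... suffix
    print("CUDA available:", torch.cuda.is_available())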

10. Wait a moment for the installation to finish. Afterwards, a new prompt will ask you to choose between manual and automatic setup. Press the “m” key for manual configuration.
   
Use the up, down, left, and right arrow keys to navigate between the options. Keep a few points in mind here: it is recommended to reserve about 30-60% of total system RAM for running the application, to set the GPU (VRAM) allocation to roughly one third of your VRAM, and to leave the other settings at their defaults. Accept the license and select NEXT to proceed.
11. In the “STARTERS” section, choose the models that are well-known Stable Diffusion base models, as we have selected in the image above.
The selected models are indicated with an “X” mark.
12. Now, go to the third option, “CONTROLNET”. It helps you set up ControlNet models. Here, the listed ControlNet models are non-SDXL, so we deselect them. When a model is deselected, its “X” mark is removed, as shown in the image above.
In “T2I-ADAPTERS”, choose the options that include SDXL models, as we have selected and shown in the image above. Note that “Openpose” is not included here and needs to be installed manually. The “IP-ADAPTERS”, “LORAS”, and “TI-EMBEDDINGS” also have to be installed manually.

13. Now, go back to “STARTERS” and select “APPLY CHANGES & EXIT”. It will take some time to download and install the selected models; you can follow the progress in the command prompt.
14. After everything is installed, close the command prompt, move to the installation folder, and launch the file “invoke.bat” to start the InvokeAI WebUI. Then press the “1” key to launch the browser-based interface.
After a few moments, a local host address “http://127.0.0.1:9090” will be shown in the command prompt.
Copy and paste it into your browser to open the Invoke WebUI and enjoy generating images with InvokeAI.
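If the page does not load, a quick optional way to confirm the server is listening on the default port is a small Python one-liner like this:

    import urllib.request

    # Prints 200 when the InvokeAI web server is reachable on its default port.
    print(urllib.request.urlopen("http://127.0.0.1:9090").status)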

Conclusion:

Well, InvokeAI is one of the alternatives to the Automatic1111 WebUI and, like other WebUIs, it is helpful for generating images. As with the others, artists can download a model and use it in no time with just a few clicks. For more help, you can also join their Discord server.