Welcome to the NAI Quick Start Guide! The purpose of this page is to get you up and running with the "original" NAI configuration on WebUI as quickly as possible. After completing this guide, your system will be ready for alternate models and more, and you will have experience navigating the files, folders, and settings you need to set everything up.
This guide covers Windows with Nvidia, AMD, or CPU rendering, as well as Linux. More configurations may be added later.
Operating System = Windows 7 or newer
System Storage = 20GB
System RAM = 16GB
GPU = Nvidia Maxwell (GTX 7xx) or newer
GPU VRAM = 2GB
* If you are unsure about any of these specs, you can use diagnostic software like GPU-Z or Speccy.
* CPU toaster bros: follow the instructions below; CPU-specific steps will be noted.
You want "part 1" (total size approx. 52GB). Links are everywhere; choose the one that you trust won't pickle you.
You only need to select two things to download: the model (animefull-final-pruned) and the VAE (animevae.pt), for a total of 4.75GB of content.
Skip the other files (and "part 2").
At the moment there is no native AMD support for WebUI on Windows. There is a thorough AMD guide, but we only need specific sections:
- Windows + AMD users: follow the Docker guide, then the Arch guide section.
- Linux + AMD users: follow just the Arch guide section.
After this, skip ahead to the First Run and Configuration section.
While that downloads, let's install Git, Python, and WebUI.
- Git: https://git-scm.com/download/win
- Latest version is ok.
- Activate the option Windows Explorer integration > Git Bash. All other defaults are fine.
- Python: https://www.python.org/downloads/windows/
- Latest version of 3.10 is ok.
- Make sure add to PATH is enabled. All other defaults are fine.
Open Windows Explorer and navigate to the folder you will be installing WebUI to. In this example I am using D:\diffusion\.
Your NAI files will end up here as well, so make sure you have enough space on the drive for everything!
Right-click and select Git Bash Here (you did make sure to select that option during installation, right?).
Enter the following command into the new Git window:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
It'll create a folder and download the WebUI files into it.
Navigate back to the base stable-diffusion-webui\ folder, and you will see a file named webui-user.bat.
Before we run it, however, we have to consider how much VRAM we have.
You may encounter "out of memory" errors if you do not configure this correctly!
- If you have 2GB of VRAM or less, you will need to use --lowvram.
- If you have 4GB of VRAM or less, you will need to use --medvram.
- If you have >4GB of VRAM you do not need any additional options.
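The tiers above boil down to a simple decision rule. As a sketch (pick_flag is a hypothetical helper for illustration only; --lowvram and --medvram are real WebUI flags):

```shell
# Pick the extra COMMANDLINE_ARGS flag for a given amount of VRAM (in MiB).
pick_flag() {
    if [ "$1" -le 2048 ]; then
        echo "--lowvram"
    elif [ "$1" -le 4096 ]; then
        echo "--medvram"
    else
        echo ""            # >4GB: no extra flag needed
    fi
}

pick_flag 2048    # prints --lowvram
pick_flag 4096    # prints --medvram
pick_flag 8192    # prints an empty line
```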
Right-click the file and select Edit. We will add our options after set COMMANDLINE_ARGS= like so:
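For example, here is what the whole file would look like for a 4GB card. This is the stock webui-user.bat with just the one line filled in; contents may differ slightly between WebUI versions:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram

call webui.bat
```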
Save the file then continue with section First Run and Configuration.
These steps may cause errors if you use them to force CPU rendering while you have a compatible video card installed.
It's recommended for you to make a backup of launch.py before editing it.
- Open the file launch.py in an editor. Search for the line def prepare_enviroment(): (the misspelling is in the file itself). We will be editing this section.
- Replace the line beginning with torch_command = with the following:
torch_command = os.environ.get('TORCH_COMMAND', "pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu")
- Replace the line beginning with commandline_args = with the following:
commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test --precision full --no-half")
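After both edits, the relevant part of launch.py should look roughly like this (surrounding lines vary between WebUI versions, and the function name really is misspelled in the file):

```python
def prepare_enviroment():
    ...
    torch_command = os.environ.get('TORCH_COMMAND', "pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu")
    ...
    commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test --precision full --no-half")
    ...
```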
Once everything's installed, navigate to the models\Stable-diffusion\ folder inside your WebUI folder.
We will place two files here from our downloaded content: the model and its VAE. We need to rename them in a specific way so that they load together; to not lose track of them, we will name them something useful: animefull-final-pruned.ckpt and animefull-final-pruned.vae.pt.
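The renaming can be done from the same Git Bash window. A demonstration in a scratch folder (the real files live in models\Stable-diffusion\, and the source names assume the usual download layout, so adjust if yours differ). WebUI auto-loads a .vae.pt whose name matches the selected model, which is why both files get the same stem:

```shell
# Scratch-folder demo of the rename step.
mkdir -p scratch && cd scratch
touch model.ckpt animevae.pt                 # stand-ins for the real files
mv model.ckpt animefull-final-pruned.ckpt
mv animevae.pt animefull-final-pruned.vae.pt
ls -1
```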
Double-click webui-user.bat (or run webui-user.sh if you're a Linux user) to launch WebUI. On its first run, it will download and install additional modules.
This step may take several minutes. You should get some coffee and fill a water bottle, because you will likely be proooompting for the next several hours!
You will know it's ready when you see the line
Running on local URL: http://127.0.0.1:7860
Let's open up our favorite web browser and navigate to that address now.
You will not be able to access this page from another system or the internet without further configuration.
Now for our initial setup and the Hello Asuka test.
First, activate the NAI model by selecting
animefull-final-pruned.ckpt [925997e9] in the dropdown in the upper left of the page.
Verify the files loaded correctly by looking for the following lines in the log:
Loading weights [925997e9] from D:\diffusion\stable-diffusion-webui\models\Stable-diffusion\animefull-final-pruned.ckpt
Loading VAE weights from: D:\diffusion\stable-diffusion-webui\models\Stable-diffusion\animefull-final-pruned.vae.pt
Next, head to the
Settings tab at the top, and make the following changes. There's a lot here, so use page search if you get lost:
- Stop At last layers of CLIP model = 2
- Eta noise seed delta = 31337
Click the big Apply settings button at the top. You will see a confirmation that the changes have been saved.
Time for the Asuka test! This is the image we're trying to make:
Go back to the
txt2img tab. Use the following values in their respective fields:
- Prompt =
masterpiece, best quality, masterpiece, asuka langley sitting cross legged on a chair
- Negative prompt =
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
- Sampling Steps = 28
- Sampling Method = Euler
- CFG Scale = 12
- Seed = 2870305590
Click the big
Generate button in the upper right, and wait a few moments for it to process. You should end up with a matching image of Asuka:
The result should be 95-100% identical to the target image. If your Asuka doesn't match, refer to the troubleshooting guide for more info.
Help! I got a solid black image!
Don't panic! This can happen with some video cards (anons have mentioned 16xx series). Add
--no-half-vae to COMMANDLINE_ARGS and restart WebUI. If that doesn't resolve it, replace the option with --no-half.
If you have >4GB VRAM and you're using --no-half, and you encounter Not enough memory errors with modest image/batch sizes, please try adding --medvram as a troubleshooting step.
It's time to PROOOOOMPT! The settings you completed the Asuka test with aren't a bad place to start.
- Begin your prompts with
masterpiece, best quality, and add a short, descriptive sentence, such as
a girl with an umbrella in the rain.
- Start with the following in the Negative prompts:
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
- You can set the Seed to whatever you like, or use
-1 to randomize it and go on an adventure!
- To exit, you only need to close the console window with the usual red X button.
Experiment! Most of the settings do something to the output, and others may recommend settings to you, but there is no one best setting.
At this point you have everything you need to get started. If you wish to learn more, please read on.
Here I will give a non-technical explanation as to what most of the fields on txt2img do, along with "sane" values to use that'll keep image generation predictable for you.
If you want the tl;dr version only pay attention to the highlighted text!
- PROMPT: What you want the AI to think about. Whatever you put in here, the AI will attempt to include it in the output.
- Start with "masterpiece, best quality," and keep your tokens under 75. To the right of the text box is a counter, which will read as x/75. This is the number of "tokens" or things the AI is thinking about. The AI will let you go over, but staying under 75 tokens yields more reliable output.
- Protip: Try to group related items together in a short phrase. For example: "a busty android girl" vs "girl, android, busty". With the second prompt you're more likely to end up with a girl with a robot at her side, and who knows who gets the bigger chest!
- NEGATIVE PROMPT: What you want the AI to avoid. Whatever you put in here, the AI will attempt to avoid having it in the output.
- Start with the NAI default above, and be sparing in the number of additional items you add. If you overdo it, you will back the AI into a corner and you may end up with the same image over and over!
- SAMPLING STEPS: How long the AI spends working on the image. The general rule of thumb is the longer the better, but with diminishing returns.
- Start with 20-70. To save time, try to use as few steps as possible while getting an output you're happy with.
- SAMPLING METHOD: How the AI thinks about your image. Different methods use different approaches, and in testing you may discover some yield very similar results.
- Euler and Euler a are popular because they generally produce predictable results.
- Protip: While Euler tends to get sharper with more steps, Euler a varies its output greatly from step to step but reaches steady quality around step 20, giving it the potential to produce good output with fewer steps.
- WIDTH/HEIGHT: How big you want the output to be. Size correlates with the time and amount of VRAM needed per output, so go too large and you will hit a VRAM out-of-memory error.
- Start with 256-1024px in both directions. Potato systems may not even be able to go larger than the 512x512 default, it depends on your system spec.
- CFG SCALE: How "focused" you want the AI to be on your prompt. Lower value = less "focused", higher = more.
- Start with 5-15. Going below this range may yield random content, whereas going too high will limit the variety of outputs.
- SEED: Source number for the beginning of AI processing. Two images with the same parameters and same seeds should yield identical pictures.
- Start at -1 (random). Until you find a composition/arrangement you like, you can keep rolling random seeds. Once you find the image you'd like to refine, you can save it by clicking the "recycle" icon next to the box.
- RESTORE FACES: Extra AI pass to correct errors on faces. You can pick from two engines in Settings.
- Start with it off. Multiple anons claim that more often than not the "fix" will look worse than the original image.
- TILING: Ignore this. It's for generating seamlessly tileable images, such as textures.
- HIGHRES.FIX: Extra AI processing for images of size > 512x512. This increases quality of these larger images in exchange for a significant (min. 2x) increase in processing time.
- Start with it off. I like to generate prompts without it until I start seeing outputs close to what I like, then turn it on once I am generating outputs I'd like to keep.
- Protip: For consistency I'd recommend leaving Firstpass width/height at 0. Start denoising strength from 0.5-0.7 to taste.
- BATCH COUNT/SIZE: Size is how many images to process simultaneously, count is how many sequential batches to run. Larger size requires more time and VRAM, but will yield multiple outputs at a time. Scales pretty evenly.
- Start with 1, don't go above 4 for both. Those with faster systems may be able to run 2-4/batch while tweaking their prompt, but generally don't go too high on these numbers until you're ready to click "Generate" and step away for a break.
- EXTRA: Ignore this. Additional parameters related to random number generation.
For further reading, start with the official WebUI wiki. It describes most of the functionality you will ever need from it.