Welcome to Hassan's Page!
You can also donate crypto to this ETH address:
0x9efd05EdC97155C66C80AB9A7EFE8C1fa13dBC3f
- Latest Setup
- HassanBlend Model Finetune Updates
- Latest Patreon Posts
- Photorealistic Tips
- Settings I use
Apr 22nd, 2023
Mar 15th, 2023
Mar 13th, 2023
Mar 12th, 2023
Feb 14th, 2023
Feb 5th, 2023
Jan 20th, 2023
Jan 18th, 2023
Jan 17th, 2023
Jan 16th, 2023
Jan 15th, 2023
Jan 12th, 2023
Jan 6th, 2023
Jan 4th, 2023
Jan 3rd, 2023
Dec 23rd, 2022
Dec 22nd, 2022
Dec 19th, 2022
Dec 15th, 2022
Dec 14th, 2022
Dec 13th, 2022
Dec 8th, 2022
Latest blog post here regarding my training of SD2.1
Training is largely complete and now in testing; examples are shared in the post. Patrons will gain early access first, then it will be released free to all
Patreon members will now receive the custom embeddings, hypernetworks, and models that I release
- Exclusive content that won't be made public, varying by tier
- Consultation available
If you don't have access to Patreon or don't want to use it, I also post the exact same content to my Ko-fi Shop for members
There will still be public releases, but the public content will be different
Recent Patreon releases
New Hassan Fantasy Style Model released to Patreon early access:
Post here with instructions
Use our discord bot to generate NSFW images using HassanBlend models. This is for Patreon members only right now while it's in test mode and we determine running costs etc.
1: Go to our discord server https://discord.gg/sdmodelers
2: Go to the #sdbot channel
3: Type /draw and choose all your options
Below is a summary of the hypernetworks I've released and how they look on the upcoming HassanBlend1.5
View here separately : https://i.imgur.com/ZzmEJdG.jpg
|How to use ControlNet in Stable Diffusion WebUI||patreon link - KoFi link||SD 1.5|
|Billie Eilish Embedding||patreon link - KoFi link||SD 1.5|
|Kate Upton Embedding (Free)||patreon link - KoFi link - Civitai link||SD 1.5|
|Rihanna Embedding||patreon link - KoFi link||SD 1.5|
|Holly Willoughby Embedding||patreon link - KoFi link||SD 1.5|
|Emilia Clarke Embedding||patreon link - KoFi link||SD 1.5|
|Hassan New Skin Enhancer Hypernetwork - recommend 0.5 strength||patreon link - KoFi link||SD 1.5|
|Chav Girls Hypernetwork - Make a normal girl into a ChavGirl||patreon link - KoFi link||SD 1.5|
|Long Exposure Landscape Hypernetwork||patreon link - KoFi link||SD 1.5|
|Carrie Anne Moss Embedding (Trinity from Matrix)||patreon link - KoFi link||SD 1.5|
|Hassan School - Written Guides and Voiceover Videos||patreon link||N/A - Lower tiers gain access to my written guides covering basic setup, txt2img, img2img, extensions, hypernetworks, etc.; higher tiers gain access to my approach to training, creating a full AI character with consistency, and a video walkthrough for each of the processes I go through|
|Jennifer Aniston Embedding||patreon link - KoFi link||SD 1.5|
|Create Fake AI Consistent Person Guide||An all-in-one guide from beginner to advanced creation of a photoreal persona that can be used on socials. Covers textual inversion, hypernetworks, and Dreambooth + finetune training; 1:1 support provided, with written documents and custom-made videos if needed. Access will be added to a drive location once this tier is purchased; the following documents cover nearly all of it, but we will be sharing videos also||Link here on Patreon||n/a|
|Tom Holland Embedding (Free)||patreon link - KoFi link||SD 1.5|
|Mila Kunis Embedding||patreon link - KoFi link||SD 1.5|
|Megan Fox Embedding||patreon link - KoFi link||SD 1.5|
|Drew Barrymore Embeddings||patreon link - KoFi link||SD 1.5|
|Hayden Panettiere Embeddings||patreon link - KoFi link||SD 1.5|
|Nude Women Poses Hypernetwork (adds more voluptuous figures and more accurate anatomy), based on over 1k images of professional and amateur photos||patreon link - KoFi link||SD 1.5|
|SFW Women's Face Portraits Hypernetwork, based on over 1k images of professional portraits. Changes are very subtle but can be noticed around hair details, eyes, mouth, etc.||patreon link - KoFi link||SD 1.5|
Comparing the new SDE samplers with the Hayden Panettiere embedding vs. no embedding on the standard HassanBlend1.4 I released, combined with the Nude Women Poses hypernetwork
Hassan's Eye Enhancer Hypernetwork
18,500 steps, trained on closeups of eyes, eyebrows, and the skin around the eyes
It's good at making images more photorealistic.
These are Patreon/Ko-fi only and are available at the Supershot Supporter level: Patreon, Ko-fi
View album of samples here: https://imgur.com/a/trsTmlc
Hassan's Face Enhancer Hypernetwork
60k-step and 45k-step versions, trained on 3000 images of closeup portraits, closeup eyes, and closeup skin and hair details
It's good at making images more photorealistic when they contain a female subject.
Check out the samples below, paying attention to the skin and hair details and the lighting/shadows
These are Patreon/Ko-fi only and are available at the Supershot Supporter level: Patreon, Ko-fi
Sample 60k steps:
Sample 45k steps:
These are all txt2img examples: no cherry picking, no post editing, no img2img, etc. Just a plain prompt with our embedding
|SD Base||AUTOMATIC1111 web UI|
|Model||HassansBlend184.108.40.206 - new as primary|
|Hypernetwork||Female Posing hypernetwork - exclusive|
This model is SD1.5 finetuned with a few thousand fantasy/sci-fi style images, with NSFW content included to sweeten the balance. The images are 768px resolution.
It was originally early access for patrons and is now released free
HuggingFace for all versions: https://huggingface.co/hassanblend/Hassan-Fantasy
Civitai link for all versions: https://civitai.com/models/19988/hassan-fantasy-fantasyai
View samples here: https://imgur.com/a/RZSvx3a
This model is HB1.4 finetuned with around 5k additional images across multiple NSFW datasets, then additional content merged together to make it a sweet mix of all. The NSFW content trained into it includes both male and female, various poses, scenes, types of shots and anatomy both up close and in various positions.
Use Clip Skip 1 with this version; you also need this SD VAE
HuggingFace for all versions: https://huggingface.co/hassanblend/HassanBlend1.5/tree/main
Civitai link for all versions: https://civitai.com/models/1173/hassanblend-all-versions
This model isn't perfect and has its own flaws, but the goal was to continue focusing on photorealism while still allowing additional creative outputs via the additional hardcore models. Version 1.2 has been removed due to some issues with unexpected results when generating adults. Version 1.3 is below
Please use clip skip 1 with this model for best results
1.4 has a resulting hash of [4cf12f5d]; it was merged with other models along with being finetuned on my own datasets
Civitai (all versions, even before 1.4; safetensors included for the latest 1.4 release): https://civitai.com/models/1173/hassanblend-all-versions - Rate and comment with your example outputs!
HuggingFace: https://huggingface.co/hassanblend/hassanblend1.4 - Give the model a "Like" if you can!
GDrive = https://drive.google.com/file/d/1tNW-OH3ATGHulBLoBazW0zmAj_kdjjaT/view?usp=drivesdk
View the sample images generated from 1.4 here: https://imgur.com/a/hVZAJl4
Download the sample images with metadata attached: https://mega.nz/file/tDJDGDAY#oxqImbvU5DPj11zQCEUMWrE6wMxwlGLIbiFPGoQVmXA
View sample albums from this 1.3 model
Some keywords to trigger the NSFW portion of this model:
hot female fitness influencer is spreading her legs with her legs spread, ((cock)),spread pussy, anal tentacle fucking
anal tentacle sex
dark blue hair
long black hair
long brown hair
many neon pink tentacles
naked black woman
nude black woman
oral tentacle sex
purple-skinned fit woman
short black hair
short brown hair
tentacle anal sex
tentacle double penetration
vaginal tentacle penetration
vaginal tentacle sex
woman staring at the camera
As IMGUR strips the metadata, feel free to download the zip of all these images so you can inspect the prompts and settings in your webUI
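If you'd rather inspect those embedded settings programmatically instead of through the webUI, the generation parameters the webUI writes are stored in a PNG tEXt chunk (typically under the keyword `parameters`). A minimal stdlib-only sketch of reading them, assuming standard tEXt chunks rather than compressed or international text chunks:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_png_text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks (keyword -> value) from raw PNG bytes."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    chunks = {}
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        # each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 8 + length + 4
        if ctype == b"IEND":
            break
    return chunks
```

Calling `read_png_text_chunks(open("sample.png", "rb").read())` on one of the zipped samples should return a dict whose `parameters` entry holds the prompt and settings.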
Samples and prompts for these are also in the Prompts section below
As some of these samples are using hypernetworks, I've also linked the hypernetworks in a zip file down in the Hypernetworks section
I previously used a blend of merged models, a combination of the following:
Which has a resulting hash of
I used SD1.5 in place of SD1.4, which the other berrymix used
I uploaded the model for this merge I made:
Sample outputs: https://imgur.com/a/7GLGYfe
|Sample||Positive Prompt||Negative Prompt||Model|
Tips for photorealistic images
When it comes to photorealistic prompting, think of how you would direct someone to create a masterpiece, such as a photographer setting up a shot with a client, or a commissioned artist painting a client's portrait. Start by thinking of the first stage: is it a photo? Is it a poster? Is it a flyer? Then go a level deeper, still at the beginning of your prompt: what type of photo is it? Is it a portrait or landscape shot? Is it a RAW photo, or is this for Instagram/VSCO etc.? Without throwing too much in at the beginning, you can start with a simple direction, as if you are telling your photographer's assistant to set up for that photo now.
If you are a photographer setting a scene for a photoshoot or a specific shot, you will think methodically about the process you need to go through:
I usually try to tell it the most important things first. The "requirements" for this job are a professional close shot of PersonX. As the AI doesn't know PersonX, describe them: what ethnicity are they, what stance/pose, what expression is on their face, and what setting/environment are they in? What time of day or night is it? Then start to think a little more granular: what hair style do they have? Describe their skin texture (grainy skin? acne?) and what clothing they have.
Then move on to the surrounding elements of your shot: what lighting is coming through, and with all this in the shot, what type of focus are you using? You can go as granular as a specific lens/aperture/f-stop/shutter speed if you want a specific look. I tend to mix it a bit to get results I can run in batch, as I avoid edits or inpainting afterwards if I can.
Sometimes a negative prompt can do more damage than good, but there can be benefits. You can use anti-makeup/anti-airbrushing negative prompts to reduce the influence of anime models. If you don't want a close-up or portrait, you can put those terms into your negative prompt.
High detail RAW color photo professional nude close photograph of a female __ethnicities__ warrior ((woman standing)), in a ((cyberpunk city)), ((night)), natural breasts, sexy look, __hair_style__, skin pores, sexual, matte, pastel colors, backlighting, depth of field, natural lighting, hard focus, film grain, (3d), ray traced, rendered, photographed with a __camera__, by __photographers__
((morning)), ((day)), high pass filter, airbrush, portrait, zoomed, soft light, smooth skin, closeup, anime, fake, cartoon, deformed, extra limbs, extra fingers, mutated hands, bad anatomy, bad proportions, blind, bad eyes, ugly eyes, dead eyes, blur, vignette, out of shot, out of focus, gaussian, closeup, monochrome, grainy, noisy, text, writing, watermark, logo, oversaturation, over shadow
Environment and Subject details
Often people struggle to get the full head, full body, or torso and head in a shot. What works here goes back to my first tip: describe them all. If you need to see feet, describe the feet: nail polish, socks, footwear, legs, etc. If you need to see the top of their head, put a headband on them, or describe something above them in the scene, such as an air vent or a hanging light. Using the hair styles wildcard helps keep a focus on the hair too. If you need full body, run some negative prompts related to closeups, such as portrait, closeups, out of shot, etc. State what needs to be visible.
Sometimes if your subject is off in the distance you may lose details, especially if it's not SD1.5 but a lower-dataset model such as a personal dreambooth or a multi-merged model. To combat this, I put additional weight on the facial details in my prompt and I also raise the resolution of the initial render. If my first render is at a higher resolution, even 1000x1000, I get a lot more detail in far-off subjects than at 512x512, which is logical. So instead of only generating a low-res version and then upscaling it, going higher initially can help too.
For the environment itself: since those things come after most of the other prompt data, the focus may drift away from them, so adding weights to environment keywords can help keep the focus, e.g. (((night))) in your prompt and (((day))) in your negative prompt, or (in a field:1.2) or (London:1.2).
Use additional tools to help get the result you want. Instead of constantly looking up the best photographers for X, Y, or Z, I pulled a list of the most controversial photographers, the best for NSFW, and the best all-time photographers in general and stuck them in a wildcard. This adds the mystery and additional creativity that may be needed.
In the prompt examples above, I used wildcards to specify the hair style, camera, ethnicity, and photographer for the render. Some cameras are tied to certain types of photos, such as a Sony a7 III being good for subject photography, or some of the older Kodaks for retro washed-out styles from the '80s. Instead of prescribing exactly what you want, let the wildcards help add random flavour.
XYZ plot: a lot of us use this, especially on new models, subjects, or merges, to learn which output settings are best. You may need to adjust the scale/prompt to get something specific, and the XYZ plot helps you do that in a single flow. If I have a new merge, I craft a decent prompt that I know has worked before on similar models, then run the plot with sampler and steps first to determine which I like most. Then I choose the sampler and run the plot again over the scale for that sampler. This is a refining process you can use when crafting a new prompt to get it as good as you can.
Photography term cheatsheet
Some external links that can help with prompting:
|https://prompthero.com/||View popular AI images and the prompts used to create them to help you get ideas|
|https://promptomania.com/stable-diffusion-prompt-builder/||Modular prompt builder, selecting elements such as style, geometry etc with a visual helper|
|https://openart.ai/promptbook||prompt book to help give you guidance on creating the perfect prompt|
Any custom dreambooth models I've put together for specific individuals won't be shown here, but you can go to a Modelers discord to find them and others like them
Any normal models, i.e. style-based and not of a specific real person, can be found here on this rentry page
Any custom embeddings I've made for specific individuals won't be found here either, but can be found at a Modelers discord
The hypernetworks I use are in this zip file in case you want to replicate any samples I've made:
The Korean hypernetwork-sharing forum link is here, but I've scraped all their hypernetwork URLs from the HTML and posted a pastebin of them all here: https://pastebin.com/p0F4k98y
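That kind of scrape needs nothing beyond the standard library. A rough sketch of the approach (the file extensions to keep are an assumption; adjust them for whatever the forum's download links actually end in):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags that look like downloadable files."""

    def __init__(self, extensions=(".pt", ".ckpt", ".zip")):
        super().__init__()
        self.extensions = extensions
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value and value.lower().endswith(self.extensions):
                self.urls.append(value)
```

Feed it saved page HTML with `collector.feed(html_text)` and the matching URLs accumulate in `collector.urls`, ready to be dumped to a pastebin-style text file.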
Wildcards I made
List of professions / jobs
Marvel Characters lists
Photographers List - Pastebin refused to save due to NSFW controversial photographers in the list
You may see other wildcards in my prompts that were not made by me; these can most likely be found in this repo: https://github.com/Lopyter/stable-soup-prompts/tree/main/wildcards
I created a Python script to automatically remove any images from a folder that contain more than one person, using face detection
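The core of such a filter can be sketched like this (a simplified illustration, not the actual script; `count_faces` is a placeholder for a real detector, e.g. a wrapper around OpenCV's Haar cascade face detection):

```python
import os

def remove_multi_person_images(folder, count_faces, dry_run=True):
    """Delete images in `folder` with more than one detected face.

    `count_faces` is any callable mapping a file path to a face count.
    With dry_run=True the function only reports what it would delete.
    """
    removed = []
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg", ".webp")):
            continue  # skip non-image files
        path = os.path.join(folder, name)
        if count_faces(path) > 1:
            removed.append(path)
            if not dry_run:
                os.remove(path)
    return removed
```

Keeping the detector pluggable means you can swap in a stricter model (e.g. a DNN-based detector) without touching the folder-walking logic, and the dry-run default avoids deleting training data by accident.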
|Sampler||I mostly will use Euler A for a fast test, then I switch to DDIM, HEUN or DPM2 Karras|
|Restore Face||I primarily use CodeFormer around 0.8, basically as low as I can get while still restoring a face|
|Aesthetic Gradients||I don't use these for photorealistic images, more for fun styling and use with personal dreambooth models|
|Eta noise seed delta||31337||a number added to the random seed used for noise generation; 31337 is the value commonly used to reproduce NovelAI-style outputs|
|Hypernetwork||I'm using a self-created hypernetwork based on women's poses, available to patrons||I'll usually apply a random hypernetwork at half strength or 0.2|
|Stop at last layers of CLIP model||1||Change to 2 when using NovelAI and hypernetworks, usually for anime/cartoon styles|
|Preload images at startup||✅|
Step 1) Imagine and figure out what style you want. It can be anything.
Step 2) Acquire data: find and download as many images as you can that present said style. For each separate data model I collected around 200-250 images.
Step 3) Prepare your training data. I only include images which clearly display the character style I want. Resize all of your images to 512x512. I also ensure that there is nothing in the picture other than the character I want: for example, if there is a dog or cat in the picture along with the person and I can't cleanly paint the animal out, I won't use that picture. Ensure that your data set has plenty of pictures ranging from closeups to standing shots. For each separate data model, after refining images I usually end up with far fewer than I started with: if I started with 250 images in my Training Data/ExampleModel1/ folder, I would end up with around 165 refined images.
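The resize step above usually means center-cropping each photo to a square before scaling it down to 512x512, so nothing gets distorted. The crop-box arithmetic is simple enough to sketch:

```python
def center_crop_box(width, height):
    """Largest centered square crop box as (left, top, right, bottom)."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# e.g. an 800x600 photo keeps the middle 600x600 region: (100, 0, 700, 600)
```

With Pillow (an assumption about your tooling) that becomes roughly `img.crop(center_crop_box(*img.size)).resize((512, 512))` for each training image.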
Step 4) It's dreambooth time.
File -> Save a Copy in Drive
Click the run icon to execute the commands and follow any and all instructions. For the "Settings and run" portion I use stable-diffusion-v1-5, and that works.
For example, if you want Picasso, the instance prompt would be:
"instance_prompt": "picasso style", "class_prompt": "picasso", "instance_data_dir": "/content/data/picasso", "class_data_dir": "/content/data/picasso style"
In the "Execute the dreambooth" portion, before you start training be sure to change
When it comes to max_train_steps, I go by the rule of taking the number of refined images and multiplying by 100: if I have 165 images, I will train for 16,500 steps. You could always try 20,000; just be careful not to overtrain.
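The step-count rule above is plain arithmetic, but it is handy as a helper. The 20,000 cap is my reading of the "careful not to overtrain" advice, not a hard rule from the guide:

```python
def max_train_steps(num_refined_images, steps_per_image=100, cap=20_000):
    """Rule of thumb: refined image count x 100, capped to limit overtraining."""
    return min(num_refined_images * steps_per_image, cap)

# 165 refined images -> 16,500 training steps
```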
Step 5) Art time. Ensure that you have a good prompt, then add "picasso style" (or whatever you called your data model) to ensure that it works. My usual workflow has me drawing how the pose of the person in the image will look and running it through img2img. I have been testing with Blender 3D, using a mannequin to do this too; the good part is that you can more easily keep a consistent design. No picture usually comes out good on its own, so I will go into Krita and fix the things that annoy me about it. The AI is usually bad at eyes and hands, so I will paint in my own and then run it through inpainting until I get the results I want.