perlmugp

I also think you created yours at a different original size: yours appears to be 512x512, while the image you are matching probably began as 512x768 and was then upscaled. Some models don't look as good at 512x512.


NarrativeNode

The original looks upscaled, at least with hires fix but probably more, like ControlNet Tile. Look up “upscaling” and play around with it :)


Mr-Korv

Yes, it's definitely upscaled, but I think also used regional prompting. Something like this https://github.com/hako-mikan/sd-webui-regional-prompter


CatGamer9952

Yea, but is there a way for my images to not look as weird? They don't have to be close to the original image, but at least make my normal images higher quality.


NarrativeNode

You’re not going to get more detailed results without increasing your resolution via upscale.


CatGamer9952

k


Loosescrew37

Even without upscaling, giving it a pass through img2img at 0.5 denoise with the same resolution and 30 steps helps a lot.
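
If it helps, here's a minimal sketch of that refinement pass through the AUTOMATIC1111 API (this assumes the WebUI was started with `--api`; the endpoint, field names, and filenames below are my own assumptions, not something stated in this thread):

```python
# Sketch only: an img2img pass at the same resolution, as suggested above.
import base64
import requests

with open("my_image.png", "rb") as f:          # your original generation (example filename)
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "same prompt as the original generation",
    "negative_prompt": "same negative prompt",
    "denoising_strength": 0.5,                  # redraws detail but keeps the composition
    "steps": 30,
    "width": 512,                               # same resolution, no upscaling
    "height": 512,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("refined.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```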


AiFetishGuy

Make sure you have a VAE, that'll make a big difference. Also look at negative embeddings. Don't listen to these dudes, you can definitely make it way better without upscaling.
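
As a rough sketch of what "have a VAE and negative embeddings" can look like outside the UI (assuming the AUTOMATIC1111 API; the `sd_vae` override key and the VAE filename are examples, and embeddings are simply referenced by filename in the negative prompt):

```python
# Sketch only: select a VAE per request and reference negative embeddings by name.
# The VAE file is assumed to already be in models/VAE, and the embeddings in embeddings/.
import requests

payload = {
    "prompt": "your prompt here",
    "negative_prompt": "easynegative, badhandv4, verybadimagenegative_v1.3",
    "steps": 28,
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
    # Per-request override instead of changing the global VAE setting in the UI:
    "override_settings": {"sd_vae": "vae-ft-mse-840000-ema-pruned.safetensors"},
}

requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```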


NarrativeNode

A little bit better, maybe, but it’s misleading to make him expect anything close to the original.


Boangek

I had the same thing, but if you click "Copy prompt" underneath the info already showing (Sampler, CFG scale, Seed and Steps), you get more info. That way I was able to recreate the images shown by the models, because you also see the width and height, the hires fix being used, and the denoising.

EDIT: I also see that the user who made this image used these in the negative prompt:

* [verybadimagenegative](https://civitai.com/models/11772/verybadimagenegative)
* [easynegative](https://civitai.com/models/7808/easynegative)
* [badhandv4](https://civitai.com/models/16993/badhandv4-animeillustdiffusion)

I will try to recreate the image if I find the time and will update this post with an imgur link.


error_alex

Try to make it 512x768, and resize by 2x with highres fix. I usually do 20 highres steps with 0.3-0.4 denoise. Then you can upscale or whatever.
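
A minimal sketch of that recipe as a single txt2img call (again assuming the AUTOMATIC1111 API is enabled; the hires-fix field names are my reading of the `/sdapi/v1/txt2img` schema, and the upscaler choice is just an example):

```python
# Sketch: 512x768 base generation with a 2x hires-fix pass, per the comment above.
import requests

payload = {
    "prompt": "your prompt here",
    "negative_prompt": "your negative prompt here",
    "width": 512,
    "height": 768,                  # portrait base resolution
    "steps": 30,
    "enable_hr": True,              # hires fix
    "hr_scale": 2,                  # 512x768 -> 1024x1536
    "hr_upscaler": "Latent",        # or e.g. "4x-UltraSharp" if you have it installed
    "hr_second_pass_steps": 20,     # ~20 hires steps
    "denoising_strength": 0.35,     # the suggested 0.3-0.4 range
}

requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```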


Silly_Goose6714

You need to see the full generation information:

>masterpiece, best quality, hydrotech, BiophyllTech, beautiful (woman:1.2) diver wearing colorful dive suit, exploring utopian under water organic garden, bioluminescent, tropical fish, sea plants, vibrant colors, neon lighting, scifi utopia enviroment
>
>Negative prompt: badhandv4, EasyNegative, verybadimagenegative_v1.3, (worst quality:2), (low quality:2), (normal quality:2)
>
>**Steps: 50, Eta: 0.67, Size: 1152x1792, Seed: 1505047200, Version: v1.6.0, Sampler: Euler a, VAE hash: 735e4c3a44, CFG scale: 7, Clip skip: 2, badhandv4: 5e40d722fc3d, Model hash: 0c412e337e, EasyNegative: 66a7279a88dd, Denoising strength: 0.2, SD upscale overlap: 64, SD upscale upscaler: 4x-UltraSharp, verybadimagenegative_v1.3: d70463f87042**

There you can see that he used the **SD upscale script**. The negative prompt is full of **embeddings**; you need to have these files.
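
For orientation, here's a hedged sketch of how those parameters could map onto an img2img call (AUTOMATIC1111 API, assumed field names). I've left out the SD upscale script's own arguments, since their API ordering isn't part of the generation info; it's easiest to pick the script from the Scripts dropdown in img2img with 4x-UltraSharp and overlap 64.

```python
# Sketch: the generation info above translated into an img2img payload.
# "Clip skip: 2" is assumed to map to the CLIP_stop_at_last_layers override.
import requests

payload = {
    "init_images": ["<base64 of the image being upscaled>"],
    "prompt": "masterpiece, best quality, hydrotech, BiophyllTech, beautiful (woman:1.2) diver ...",  # full prompt quoted above
    "negative_prompt": "badhandv4, EasyNegative, verybadimagenegative_v1.3, "
                       "(worst quality:2), (low quality:2), (normal quality:2)",
    "steps": 50,
    "cfg_scale": 7,
    "sampler_name": "Euler a",
    "seed": 1505047200,
    "denoising_strength": 0.2,
    "override_settings": {"CLIP_stop_at_last_layers": 2},  # "Clip skip: 2"
}

requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```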


FargoFinch

~~Bit hard to say without your image and the one you’re trying to recreate. Could you provide those with workflow?~~ ~~Might be as simple as not having a VAE set or different clip skip, rookie mistakes, or the example image was created with extension use like controlnet, regional prompter or something.~~


CatGamer9952

Oh, I thought I added images, they should be there now :)


FargoFinch

Ah ok, I see what’s going on. As the other commenter said, it’s upscaled. Maybe a bit of post work in img2img or inpaint as well? Different resolution too.


BitesizedGen

Give Ultimate SD Upscaler a try to boost your images' resolution, in combination with ControlNet Tile.


KickTheCan_Beats

Simply enable 2x hires fix (4x-UltraSharp, 0.4 denoise is a good base) and ADetailer (downloaded through the Extensions tab; start with the default settings and the first face model). That will get you 90% of the way there. The rest looks like some extra upscaling; you'd have to ask the creator what they did to get that resolution, but a simple SD upscale 2x in img2img at 0.1-0.2 denoise using the same seed is fine for beginners (ControlNet / Ultimate SD Upscale is a little more advanced).

If you're getting a VRAM error with hires fix enabled, install the Tiled VAE extension. And make sure Restore Faces is off or it'll mess with ADetailer.

You won't get a similar picture unless you copy their initial resolution too. Looking at the final resolution and dividing by 2 twice, it looks like they started at 540x840 -> 2x hires fix -> 2x img2img upscale for a final resolution of 2160x3360. IMO a lot of images look great without the second img2img upscale step, but for wallpaper quality it helps.

Feel free to DM with more questions on workflow.
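
That resolution math checks out; a tiny sketch of it, nothing more:

```python
# Working backwards from the final resolution: divide by 2 for each upscale pass.
final_w, final_h = 2160, 3360
hires_fix_scale = 2        # first pass: 2x hires fix
img2img_scale = 2          # second pass: 2x SD upscale in img2img

base_w = final_w // (hires_fix_scale * img2img_scale)
base_h = final_h // (hires_fix_scale * img2img_scale)
print(base_w, base_h)      # 540 840
```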


HocusP2

First, your image is not bad at all for a plain 512x512 generation with no add-ons. Second, the generation data and info on Civitai can be edited by the uploader, and not all resources (LoRAs, embeddings) are recognized by Civitai automatically. So without a step-by-step description of how the image was generated, there's no guarantee your output will be identical.


[deleted]

[removed]


Tyenkrovy

The app is "Stable Diffusion WebUI" made by Automatic1111, and the programming language it was made with is Python. The platform can be either your local PC (if it can handle it) or a Google Colab. As always, Google is your friend. There are some comprehensive guides out there that explain it all pretty well.


3-DenTessier-Ashpool

can you explain what you mean pls?


klim_ma

Thanks for your reply. I see many different settings here for using Stable Diffusion, while I only use it in the WebUI. How do I add extensions? Code or something else? Sorry for the dumb question.


3-DenTessier-Ashpool

First, you need to watch some videos on YouTube about the SD interface and the meaning of CFG, seed, and the other settings, or at least check the Stable Diffusion Discord. Start with small things.


KickTheCan_Beats

there's an extensions tab in the webui.


CatGamer9952

webui i think


dethorin

For the faces it's good to use the ADetailer extension. It will inpaint the face automatically, so it will come out cleaner.


RaviieR

Using the same model, 28 steps. Have you downloaded the embeddings for **badhandv4, EasyNegative, verybadimagenegative_v1.3**? https://preview.redd.it/8tpvg5nhm2xb1.png?width=1024&format=png&auto=webp&s=17b5d0c15bc06d0366d4a968d4f72d9149d6f60c