/ais/ - Artificial Intelligence Tools

"In the Future, Entertainment will be Randomly Generated" - some Christian Zucchini




Use this board to discuss anything about the current and future state of AI and Neural Network based tools, and to creatively express yourself with them. For more technical questions, also consider visiting our sister board about Technology

(422.59 KB 512x512 line of stars.png)

AI Art / Stable Diffusion Thread #3 Anonymous 11/13/2022 (Sun) 12:38:36 Id: 694587 No. 3
Copied from previous OP. Here are some guides to installing Stable Diffusion (if you have an NVIDIA GPU with 2GB+ VRAM).
>Installing AUTOMATIC1111, the most feature-rich and commonly-used UI: https://rentry.org/voldy
>Different UI with 1-click installation: https://github.com/cmdr2/stable-diffusion-ui
>Installing various UIs in Docker: https://github.com/AbdBarho/stable-diffusion-webui-docker
>AMD isn't well-supported. If you're running Linux there's a way to install AUTOMATIC1111, but otherwise features are very limited:
Native: https://rentry.org/sd-nativeisekaitoo
Docker: https://rentry.org/sdamd
Onnx: https://rentry.org/ayymd-stable-diffustion-v1_4-guide
To try it out without installing anything you can use a web service, though they are lacking in features.
>Stable Horde is a free network of people donating their GPU time; how long it takes depends on how many people are using it. It supports negative prompts, separate the positive and negative prompts with "###": https://aqualxx.github.io/stable-ui/
>Dreamstudio requires making an account and gives you around 200 free images before asking you to pay: https://beta.dreamstudio.ai/
Img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest | https://dezgo.com/image2image
Inpainting: https://huggingface.co/spaces/fffiloni/stable-diffusion-inpainting | https://inpainter.vercel.app/paint
>The most notable non-Stable Diffusion generator is Midjourney, which tends to be nicer-looking and doesn't require as much fiddling with prompts, but can have a samey style. However, the only way to use it is through a Discord bot. https://www.midjourney.com/home/
>If you're okay with more setup but don't have the hardware, you can use a cloud-hosted install:
Paperspace: https://rentry.org/865dy
Colab: https://colab.research.google.com/drive/1kw3egmSn-KgWsikYvOMjJkVDsPLjEMzl
>Various other guides:
NovelAI: https://rentry.org/sdg_FAQ
Dreambooth: https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth
Inpainting/Outpainting: https://rentry.org/drfar
Upscaling images: https://rentry.org/sdupscale
Textual inversion: https://rentry.org/textard
>Resources
Search images and get ideas for prompts; can search by image to see similar images: https://lexica.art/
Index of various resources: https://pharmapsychotic.com/tools.html
Artist styles: https://rentry.org/artists_sd-v1-4 | https://www.urania.ai/top-sd-artists | https://rentry.org/anime_and_titties | https://sgreens.notion.site/sgreens/4ca6f4e229e24da6845b6d49e6b08ae7 | https://proximacentaurib.notion.site/e28a4f8d97724f14a784a538b8589e7d
Compiled list of various models, regularly updated: https://rentry.org/sdmodels (Check here before asking "where do I find x model!")
Link to the copypasta for this OP: https://rentry.org/AI-Art-Copypata
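The "###" separator mentioned for Stable Horde can be sketched in a few lines. This is a hypothetical helper for illustration, not Stable Horde's actual code; it just shows the convention of everything before "###" being the positive prompt and everything after being the negative one.

```python
def split_prompt(prompt: str) -> tuple[str, str]:
    """Split a combined prompt on the '###' separator into
    (positive, negative) halves, per the Stable Horde convention."""
    positive, _sep, negative = prompt.partition("###")
    return positive.strip(), negative.strip()

# A prompt with no '###' simply has an empty negative half.
print(split_prompt("a castle on a hill ### blurry, low quality"))
```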
(156.91 KB 576x576 the scrunkly.png)

Do you spare him?
(79.40 KB 300x293 ....jpg)

>>4 I fuck that cunt up.
(991.46 KB 1016x1440 Request Anchor Girl II.jpg)

REQUEST ANCHOR
(1.36 MB 1080x1440 Delivery Anchor Girl II.jpg)

DELIVERY ANCHOR
(533.70 KB 640x553 towers.png)

(493.53 KB 640x640 world in a ribbon.png)

A surprisingly slow start to the thread. Here are two particularly neat ones I generated.
(143.94 KB 1920x995 a.png)

I seem to be a big dumb-dumb, how do I do this?
>>9 You have to format it correctly. Enter /imagine prompt: apple and tomato. It will take a minute or so to come back with the results.
(88.10 KB 500x738 200% mad.jpg)

(4.25 MB 800x800 angry junko.gif)


(294.53 KB 730x794 mario.png)

>>6 As many angry images of videogame characters as you can make.
>>5 lewd
>>6 >anime girl wearing toe socks. Well, that didn't work so well. Can you do anything better?
(23.98 KB 477x456 costanza jesus.jpg)

(457.17 KB 600x450 Constanza Bat.png)

>>733733 >niche homosex >AI art >porn lol >>6 Various characters doing The Costanza Bat meme pose, ala this. As many as possible. We need more.
>>733733 >niche homosex would push it into full e621 territory. The stuff the AI draws is better than most of e621, so it's fine.
>>15 >First image >Plates over her nipples >That plate on the left by her pussy Did the AI really do something as contextual as the implication that her pussy was covered in a now removed metal plate, or did you inpaint that manually? I'm really liking the aesthetic of this furshit. Saved the first two pics.
>>15 Seriously, what prompts and program did you use for this?
(513.65 KB 576x576 rocky waters.png)

>>18 His prompts are in the filename, the program is Stable Diffusion and the exact model is probably Yiffy-e18.
>>18 I forgot the metadata that stores prompts in the images gets stripped here.
>program
Stable Diffusion. I just followed the Voldy guide linked in the OP, plus the AMD one.
>prompts
For the first one:
e621, explict content, ((by photonoko)), (by ruaidri), by john william waterhouse, by bouguereau, detailed head, eye contact, cute young female anthro (anubian jackal), intricate detail furry texture, (flat chested), looking at viewer, reclining, spread pussy, solo, high res, (black body), egyptian
Negative prompt: bad anatomy, disfigured, deformed, malformed, mutant, monstrous, ugly, gross, disgusting, blurry, poorly drawn, extra limbs, extra fingers, missing limbs, amputee
50 steps of DDIM at CFG 10, using the Yiffy-e18 model.
The second one replaces "explict" with "questionable", "reclining" with "standing", and adds "sky, loincloth" at the end instead of "black body". The third has various changes to make it a male/male duo, and of course even more changes for inpainting the leopards, but the artists plus "intricate detail furry texture" are what give it that style, unless you use a sampler like Euler that smooths it out.
The other two were DPM++ 2M Karras instead of DDIM. Also 50 steps, but I think that may be more than that sampler usually needs. The negative prompt may be placebo; I've never done proper A/B testing to see what actually helps.
Note the spelling error in "explict" is intentional, since the guy who trained Yiffy made the typo in the training data.
>>6 Requesting real images of and around Pripyat but with an emission going on, plus whatever other fucked up insane shit you can in the sky
(164.70 KB 1615x787 Untitled.png)

>>3
>Installing AUTOMATIC1111, the most feature-rich and commonly-used UI:
Step 2: Clone the WebUI repo to your desired location:
-Right-click anywhere and select 'Git Bash here'
-Enter git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
(Note: to update, all you need to do is type git pull within the newly made webui folder)
What? Where the hell am I supposed to "Right-click anywhere"?
(117.17 KB 836x596 micromamba.png)

>>22 >Different UI with 1-click installation Keep getting this message and it saying error couldn't install packages.
>>22 This is a joke, right?
(317.54 KB 704x512 00000-3762029787.png)

(390.30 KB 704x512 00021-2610805274.png)

(351.04 KB 704x512 00007-63732972.png)

(378.65 KB 704x512 00009-1821146591.png)

(568.98 KB 704x512 00021-3834245786.png)

(433.12 KB 704x512 00011-3103927082.png)

(317.65 KB 704x512 00017-1104780576.png)

(471.75 KB 704x512 00047-2583846472.png)

(439.32 KB 704x512 00008-3662766368.png)

(349.78 KB 704x512 00009-590091990.png)

>>25 I expected Sonic to be a blob monster but I didn't expect Majin Wario.
>>22 Right click anywhere in.... No. You are way too retarded to help. If you can't parse normal human instructions there is no way you will be able to figure out how to write prompts anyway.
>>8 Oh fuck! It's Ozma!
More to come.
(452.66 KB 704x512 00020-588849523.png)

(421.69 KB 704x512 00001-853695998.png)

(375.06 KB 704x512 00001-1653699100.png)

(450.05 KB 704x512 00006-2190078251.png)

(390.23 KB 704x512 00004-2199774361.png)

PUZZLE TIME
>>30 EZ, stressed out Squidward.
>>30 You should have left out #3. That one made it too obvious and didn't really look like anything else.
(94.15 KB 325x244 1514492324.gif)

(529.92 KB 832x640 00003-1061550847.png)

If you got that one this fast, then this one should be easy.
>>34 LINK MY BOY
(222.58 KB 512x576 00005-1275405173.png)

If you get this one just as fast I'm gonna have to make them a whole lot more esoteric.
(9.21 KB 426x201 git-bash-here-right-click.png)

(679.60 KB 720x370 spoonfeeding.webm)

>>22 Installing Git adds an option to your right-click menu in Windows Explorer. Open any folder where you want to put the web UI, right-click inside the window, choose "Git Bash here" from the menu, then copy-paste the command into the Git Bash window:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
Then hit Enter to run the command, which will download the repo into wherever you started Git Bash from. If nothing appears in the folder, you can alternatively specify a subfolder in the command:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui done
Then it should download to a folder titled "done".
From there you need to run the web UI .bat file to download all the shit for Stable Diffusion. It can take a long time and may even look stuck, so just wait it out.
I don't think the A1111 web UI comes with an AI model, so you will also need to get one (such as sd-v1.5.ckpt, or anime-full-pruned.ckpt which is NovelAI) and put that .ckpt file into models/stable-diffusion, otherwise the launcher will just tell you there is no model and won't do anything.
If you manage to get as far as loading the web UI, including some model.ckpt, then you can put the web address it tells you into any web browser to open AUTOMATIC1111's web UI and start using the model you have loaded.
>>35 Oh yeah, forgot to mention you're correct.
(377.09 KB 512x512 download (33).png)

(912.30 KB 800x800 download (9).png)

(493.65 KB 600x600 download (21).png)

(335.82 KB 512x512 download (54).png)

(418.36 KB 512x512 out-2 (3).png)

THIIIIIIIIIIIIIIIIICK
>>39 >that background on the last one Is that you, guy who is creating gigamaidens?
>>36 I have no mouth and I must scream?
>>30 You only should have posted the second and the last one to start with. The others are too clear.
>>39 That last one is almost not an abomination.
(267.04 KB 512x576 00021-2062835459.png)

>>41 Incorrect. The layout and placement of the text should give it away.
>>44 Due Sex.
(21.96 KB 207x239 1372651038679.png)

>>45 CORRECT
>>46 Obvious in hindsight, but for some reason >>36 had me imagining the face pointing the other direction.
(181.31 KB 512x512 00001-15900593 (3).jpg)

(181.54 KB 512x512 00003-29805400 (15).jpg)

(160.53 KB 512x512 00003-73463804 (2).jpg)

(184.37 KB 512x512 00003-51515857 (2).jpg)

(161.15 KB 512x512 00004-59954042 (2).jpg)

>>39 EVEN THICKER NIGHTMARE FUEL!!!!!
>>6 Requesting loli clowns.
(639.27 KB 512x768 00004-1624378239.png)

(590.13 KB 512x768 00005-62534136.png)

HERE COMES A NEW PUZZLE!
(297.72 KB 2271x2380 spicymeatball.png)

>>49 >requesting loli >in the ai art thread
>>49 Nice try FBI-kun, but I'm not getting on a list. Do it yourself
(5.47 MB 1792x2560 BGY7IqVOJNBiYkq8oHYI.png)

>>53 >having a mind so ruled by glowniggers that you're too terrified to use AI to make lolis pathetic.
>>54 There's a time and a place, anon. And that place is >>>/delicious/28633 >>>/loli/4000
>>55 >that place is on this site wrong.
>>54 SFW BOARD DIPSHIT FUCK.
(813.89 KB 960x704 00034-2689273682.png)

(822.27 KB 960x704 00026-4066870629.png)

(856.83 KB 960x704 00045-4079150956.png)

(768.43 KB 960x704 00062-940507694.png)

(833.65 KB 960x704 00084-2536290074.png)

Puzzle: What image did I derive these from? Subject matter may give it away.
(798.43 KB 960x704 00005-1685273533.png)

(809.12 KB 960x704 00057-1276424358.png)

(824.40 KB 960x704 00120-631794195.png)

(754.74 KB 960x704 00042-1987589662.png)

(783.97 KB 960x704 00140-2493127865.png)

(838.40 KB 960x704 00026-4002562666.png)

>>58 >>59 Creating two girls with high denoising strength results in images that are maybe too difficult. With one girl it becomes easier.
>>52 /loli/ has a thread too. >>>/loli/4000
>>55 >There's a time and a place, anon Yeah, here, where all the people are and where there's a request anchor.
(470.18 KB 512x414 tmpqmvbfxfy.png)

(352.54 KB 512x512 tmpsfufn9k9.png)

(241.70 KB 512x512 tmpna2d7p1s.png)

(344.20 KB 512x512 unimpressed katia.png)

>>6 katia managan doing anything, i want to see what other people get with their checkpoint mixtures
>>54 Eyes look like she's staring out over a city at night. Completely inappropriate for her surroundings.
(1.00 MB 832x960 00041-3538723411.png)

(980.61 KB 832x960 00048-1786647891.png)

(1.03 MB 832x960 00097-4112555897.png)

(1010.43 KB 832x960 00034-1100141353.png)

>>49 >>7 The AI really wants to make nightmare fuel and weird lips when you try to generate clown girls. Wound up putting (clown nose) in the positive prompt and other clown features into the negative prompt.
>>65 Thanks anon. >belly button eye It's a spoopy kind of honk.
>>65 You're veering close into unironic pedo territory with some of this shit, man. Isn't there a global rule against realistic CG-generated kids?
>>21 Supporting this, blowouts and emissions are soothing
(302.28 KB 512x704 red skies.png)

>>50 doot
>>54 Pedos and glowniggers both get turned into windchimes.
>>64 She's staring into the primordial current, anon. The sky and the cosmos are one, you know. >>54 It would be nice to have a redo or an edit with golden light reflected in her eyes though.
>>66 I was picturing circus acrobats with clown aesthetic. The AI had other things in mind.
>>67 Berrymix using Euler A or DPM++ 2M sampling. I saved the prompt, but there's a lot of superfluous junk in there since I was building from previous prompts that were built on previous prompts, which gets cluttered.
Positive prompt: masterpiece, best quality, vintage dry acrylic painting, 1girl, cute, (clown nose:1.4), (leotard:1.1), (frills, bows, black choker, twintails), (hair ribbon, blue hair, very long hair), (white skin:1.2), (red lipstick, pouty lips, detailed pupils, green eyes), ((loli, big head, short legs, flat chest, narrow waist)), Boris Vallejo, Adonna Khare, Ed Binkley, Jean Veber, high art, beautiful, simple background, ((looking at viewer)), (arms behind back), small mouth,
Negative prompt: lowres, (((bad anatomy))), bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, depth of field, large breasts, medium breasts, (animal), mole, teeth, shoes, obese, ugly, (group, crowd, 2girls), cum, cum on clothes, cum on body, large mouth, bald, clown lips, clown mouth, clown, nipples, earrings, topless, nude,
I've heard you don't have to autistically separate tags, but I've encountered problems when I don't, so I'm still doing it. Example: using (very long wavy blue hair) instead of (very long hair, wavy hair, blue hair).
>>58 >>59 This is not another Pepe variant is it? The forbidden love girl
>>68 It's sorta subjective. Passes my squint test, but someone else might come along and fail it.
(33.30 KB 600x540 IwM78gl.jpg)

>>74 This thingy
(109.25 KB 1259x705 forbidden love.jpg)

(762.06 KB 960x704 00119-2780401023.png)

(770.30 KB 960x704 00112-55649337.png)

(744.25 KB 960x704 00100-175051689.png)

(773.04 KB 960x704 00097-2502072828.png)

>>74 >>76 Correct. That's why it's all yuri.
(332.13 KB 1280x1756 doomcover.jpg)

>>50 Fantastic rendering. What other renders do you have of this?
>>68 Nah. It's clearly unrealistic. They're almost like less abstracted versions of Killer Klowns or something you would see on a 90s album cover or a skateboard. >>67 These are really cute. The first one especially.
>>77 Can you add a subtitle below those images like a meme?
>>6 Requesting a moai but as an actual human being. Or at least a humanoid of some sort.
(845.47 KB 960x704 Forbidden AI Art.png)

>>80 >>7 Out of all of them this one looks the most like the original girl IMO.
>>83 Trips checked. >>83 >>84 Good stuff.
>>68 >Isn't there a global rule against realistic CG generated kids? Yes, but it doesn't apply to these images for the following reasons: – all of the images are too stylized to pose a problem (nobody would think these creepy clowns are real) – none of the images are truly pornographic in nature (nudity ≠ pornography) – realistic 2D/3D loli/shota porn is only treated as "problematic" if it's demonstrated that the characters were designed after real children (like Sherry Birkin and a couple of other 3DCG vidya lolis, they are the only fictional characters that have been banned from the site for this reason)
>the AI is dramatically better at drawing Bowsette than normal Bowser It's completely expected, and for normal Bowser I should be using the furry model instead of the anime one, but still funny.
(1.08 MB 384x239 get in.gif)

Why do these threads always attract pedophiles? The only thread worse for this is meta. You're not in good company, fuck off. >>86 Apparently the mods disagree, 734255 just got global'd for gr6.
>>87 >for normal Bowser I should be using the furry model In retrospect maybe this isn't what I should be doing with my time after all.
>>734255 They don't violate GR2, but local volunteers are free to apply their own criteria on this kind of content. >>88 >Apparently the mods disagree, 734255 just got global'd for gr6. He was banned for a reason unrelated to the images he posted. It's a complex story.
(90.26 KB 512x704 island.png)

>>89 I think you should try making a thread on /fur/ for this kind of content, you'll find a bigger audience for it there. I'm not sure there are many anons here who want to see bara yiff.
>>81 This would be pretty easy for ai
(66.78 KB 256x208 ya gotta do.png)

(41.55 KB 300x100 1599655046699.png)

>>89 >>91 Oh yeah, by all means do. We have a thread for that old thisfursonadoesnotexist thing but a brand new AI yiff thread would be welcome. I even made a banner based on one of the AI pics from the last thread.
>>95 >lolice
>>88 Glow-in-the-darks lurk and have been inciting this shit since forever, or at least since we got put on the "bad goys" list. Just report and filter.
>>6 Requesting this image in ai
(613.62 KB 512x768 00008-1081603848.png)

(642.39 KB 512x768 00005-1916277484.png)

(621.49 KB 512x768 00012-3134937520.png)

>>70 CORRECT >>78 Just these. Wanted to start with the sunset with the trees but that seemed too on the nose.
Imagine buying a 4090 just for this, haha just kidding... ...unless?~
>>100 You just need an NVIDIA GPU with at least 2GB of VRAM to do basic bitch text2img and img2img. Training the AI on your own dataset requires an NVIDIA card with at least 12GB of VRAM tho, which I don't see the appeal of unless you really need to make those weirdly specific lewds or are too lazy to fuck around with layers in GIMP/Photoshop.
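The 2GB (generation) and 12GB (training) figures above can be captured in a tiny helper. The function name and return strings here are purely illustrative, not from any real tool:

```python
def sd_capability(vram_gb: float) -> str:
    """Rough Stable Diffusion capability tier for an NVIDIA card,
    using the 2GB-for-generation / 12GB-for-training figures above."""
    if vram_gb >= 12:
        return "generation + training"   # e.g. Dreambooth fine-tuning
    if vram_gb >= 2:
        return "generation only"         # text2img / img2img
    return "insufficient VRAM"

# A mid-range 8GB card can generate but not train.
print(sd_capability(8))
```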
(446.11 KB 704x512 squidje.png)

>>100 You can render images on much weaker hardware as long as you tweak it a bit. If your system actually is too weak, I can try to generate something on your behalf.


(276.91 KB 576x448 Pumpkin.png)

(788.06 KB 1600x1200 JewcyBagel.png)

>>102 I don't have any actual requests. You can do this one, I want to see what the uuuuohhh face looks like as a landscape, and kamiya in your choice of prompt. turn his mannhole into a vegetable ala nikocado you don't NEED to do the last one lol
>>101 You only need Nvidia if you're on Windows. On Linux AMD works. And I think the biggest benefit of personalized fine-tuning is to train it to emulate a specific artist. I know I have a bunch I want to get around to trying.
>>105 >emulate a specific artist. Alright that sounds like it may be worth the investment just to have a Yoji Shinkawa filter.
>>106 It's pretty neat to be able to say "give me tkmiz" and it actually works.
(354.06 KB 576x640 fuckboy kamiya.png)

(1.01 MB 1152x672 landscape.png)

>>83 I fucking love em tripsman.
>>90 >It's a complex story. Care to elaborate in the meta thread? I didn't see what the fuck he posted. I doubt it was that bad if you think it didn't violate global rules.
>>110 >I doubt it was that bad if you think it didn't violate global rules. It wasn't. It was just 5 loli ahego faces.
(642.18 KB 512x768 00005-1916277484.png)

>>99 what are you talking about, the second ones look perfect
>>112 It looks good, but I think it looks too much like the original, especially in the context of this gay ass reverse Rorschach puzzle game.
(869.73 KB 704x960 00044-619826708,.png)

(1002.72 KB 704x960 00074-1373220238,.png)

(916.17 KB 704x960 00041-619826708,.png)

(1.01 MB 704x960 00048-1536467814,.png)

Peach in overalls.
(979.31 KB 704x960 00098-184996826,.png)

(916.16 KB 704x960 00112-885887087,.png)

(961.55 KB 704x960 00095-3606115786,.png)

(806.27 KB 704x960 00189-3900543863,.png)

More Peach in overalls.
>>115 >the booty on that last image
>>6 amy_sorel, drill hair, underwear_only, loli, stage
>>100 I'd like to get a 4090 just for the faster prompt gens... Takes about a minute for me on images with around 50 samples, sometimes longer if I'm using a multi-layered hypernet and a heavier sampler like DPM++.
>>117 >>7 Uncertain if you wanted any sort of specific style, so I kind of just did my own thing in multiple styles. Don't know how accurate these are to the character. Also didn't fix any of the hands; if you want one of them touched up, or more examples, I'll make more.
Prompt (roughly): (masterpiece:1.33), (best quality:1.3),(ultra detailed),((facing viewer)), (soft lighting), (full body) bokeh, (solo),amy sorel performing a live concert on a stage,(gothic loli:1.1), (red roses on top of hair:1.2),(red short twin tail drill hair:1.3),(red hair),(red eyes), (short twin tails),(red drill hair:1.5), (underwear only:1.3), (lace lingerie:1.1),(fishnet thighhighs:1.2) (small breasts:1.7),(flat chest:1.1), (lace loli choker), (gorgeous detailed eyes), hyperdetailed, (detailed hands), (detailed fingers),(five fingers),(bare hands), (athletic body:1.2), thin waist, in the style of XYZ
Negative prompt: ugly, bad proportions, different color furr, malformed fingers, low detail, low quality, multiple views, more than 2 ears, watermark, text, head out of frame, from behind. (multiple views:1.1), mangled fingers, lanky fingers, bad anatomy, bad hands, inhuman hands, inhuman fingers. mutated limbs, extra digits, (hands holding something:1.2), (black hair:1.2),(blue roses)
Steps: 20, Sampler: Euler, CFG scale: 7, Size: 512x512, Model hash: 874fe4c5, Eta: 0, Clip skip: 2
Made these using the cafe-anythingv3.0 model with a merge value of 0.25 and some TI models for the styles.
Kind of wish EXIF data wasn't removed when uploading, because it's the most annoying thing when trying to figure out prompts of images. It'd be cool if there was an option to keep EXIF data, similar to how there's an option to strip filenames and spoiler.
>>111 I want to see them though.
>>6 Requesting anything involving girls in full chastity gear (belts and bras) being lewd together (I don't mind if it's furry). By the way, I really like implication of the first pic: necklace with the word 'virgin', a carry case for a giant buttplug and an enema bag.
(29.02 MB 854x480 Untitled(1).mp4)

>>120 Kinda like this vid.
>>118 Thanks. 2, 3 and 5 are too titty demon for her, but 1 and 4 are good.
>>101 >>105 I ordered a 3060 the week before Black Friday, because EVGA is exiting the market and in that kind of situation it's sometimes best to buy directly from the manufacturer in case you need to file a warranty claim, and because I can afford to not be thrifty for the first time in my life. I want to teach it Bkub's charm style, Kazuma Kaneko, as well as some objects like iron maidens. Not sure if the issue is in version 1.5 too, but in 1.4 every time I tried to make an iron maiden it just gave me Iron Maiden album covers or a weird dude with no skin.
>>13 Let me guess, you have a GTX 1050 Ti
>>123 Shit, I can teach the AI who Pepsi man is as well
(425.20 KB 512x512 tmp565zrl8h.png)

(330.71 KB 512x512 tmpe_wrxins.png)

(274.36 KB 512x512 tmp4ha_swie.png)

>>122 >2 and 4 were fine That makes sense; the styles used on those were actual loli artists. The others I had issues with trying to get flat chests on, since they tend to make less flat content; I'd have to weight them real hard for the image to come out right. Here's a few more. I really like experimenting with different styles with TI.
>>98 >this fucking clip interrogation I almost don't want to with that wojak in the mix
>>127 >waporware What?
>>127 Blip > clip. Blip is less prone to misclassifications, but honestly this shit happens no matter what.
(4.85 MB 1280x720 miku_diffusion2.mp4)

>>129 Why dlete?
>>130 AI Miku is more expressive than the "real" one, wtf...
>>130 Interesting, and impressive. I wonder if the next big thing in anime will be AI assistance over CGI? Will the quality be any better?
>>130 I refer to my previous statement. >>731896 This isn't really impressive besides the time investment necessary to make it. >>133 >I wonder if the next big thing in anime will be AI assistance over CGI? This only looks really good because it copied frame for frame a pre-existing 3DCG animation at the lowest denoising strength. It's the equivalent of applying the AI to an existing image, but setting it so that there are practically no changes made, then repeating the process hundreds of times.
>>136 Large anime producers would certainly have the resources to do this not only hundreds of times but thousands of times. Plus consider that most CGI in anime is like 30 frames per second slower than everything else, so it really stands out.
>>136 >blup wch >kech >BUUUUEP this AI is a fucking brapfag.
>>137 I think it might just be welsh or something/
>>134 imho it actually is pretty impressive when compared to previous models. The accuracy of the img2img result is pretty spot on, and unlike others the mouth doesn't get deformed every other frame. For context, he's using the newest anythingv3 model, which has a lot better anatomy references in its dataset. The results look good; my only nitpick is the 2 frames where, instead of drawing Miku's backside, it inverted her pose, making a strange double twirl effect. I get what you're trying to say, that it doesn't take "effort", but idgf, it looks good and is only a benchmark for pure AI results. Imagine what could be done with effort, and on all those shitty CGI scenes in already existing anime. Honestly 3D artists have lucked out; it used to be the worst part of anime, but now with AI you can introduce what has been referred to by many as "artificial mistakes" (quote by Arc System Works).
I hope AI art makes Twitter porn artists go broke.
>>141 >No, I want corporate artists to go broke. I really don't care what you want. Shit and pozzed artists should go broke. Literally cucked by AI.
(313.71 KB 512x512 00004-1509741032.png)

>>141 And as an aside, it keeps trying to add the Getty Images watermark despite negative-prompting that, but it can't quite manage it. I can only read this attempt as "Getty niggers".
>>136 Is there any explanation for the AI not capable of properly generating text?
>>144 It's generating text fairly well in some cases. It's just that it's gibberish, because the AI isn't actually "I". Also because they would have to add in an image-to-text parser and also something like AI Dungeon to cross-reference the generation keywords and form context to generate words and phrases.
>>145 Ah, fair enough.
Is there way to have large breast but without they show any skins? It keep outputting cleavage or topless
>>147 Are you ESL? Yes, there is. Just include something like "sweater" or "jacket" in your prompt.
>>148 Yes I am What if I dont want them to wear sweater and jacket?
>>149 Well, input whatever you want them to wear. Those were just examples. You can also put things in the negative prompt, like "cleavage" or "nude". The negative prompt dictates what you don't want the AI to generate. It can take a few tries to get right, but you should get the hang of it.
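As a sketch of the advice above, a prompt pair for that request might look like this (an invented example, not taken from the thread):

```text
Prompt: 1girl, large breasts, turtleneck sweater, fully clothed, upper body
Negative prompt: cleavage, nude, topless, bare shoulders
```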
>>151 What diffusion did you use?
>>6 shirogane_naoto, tomboy, reverse_trap, bottomless, briefs
>>152 BerryMix
>>140 >get into a discussion about whether neural network art is actually art >ask "Is a collage art" >never get any responses to that I've never seen people burrow their heads into the ground faster from a single string of words.
>>155 No shit, it's a grey area. I would argue that both are art, in that they are manifestations of creative forms of expression (though the AI is doing the work of creating it, it is still the expression of the person who inputs prompts into it until he gets the result he wants). Most people want to comment more on the quality of art (i.e. is this good art or bad art) than whether it is art, period.
>>151 >extra leg in the second picture under her left leg
>>156 I think there's much to be said of the quality of the art too, since most of the art I've seen with AI has been pretty decent so far, even with the eldritch limb problem.
>>151 now prompt her holding my hand
>>140 I hope it makes any artist who draws blackedshit and those trying to sell off traced art go broke. Especially the (((cuckreans))). >>156 >I would argue that both are art, in that they are manifestations of creative forms of expression (though the AI is doing the work of creating it, it is still the expression of the person who inputs prompts into it until he gets the result he wants) This. This is what artfags who are butt-hurt at the whole fiasco of AI are forgetting as well: the very definition of art and what it actually could entail. I also find it funny how many of them are using the "soul" argument when they're probably the kind to reject the idea that souls exist in the first place.
>>157 Just imagine it's a brown pillow.
>>160 >I hope it makes any artist who draws blackedshit and those trying to sell off traced art go broke. Especially the (((cuckreans))). You really don't. Because if it does, that means people have used it to produce that material en masse, meaning waves upon endless waves of it will wash over the net for all eternity.
(70.19 KB 512x704 Drake tiddies.webp)

(76.47 KB 512x704 Drake tiddies2.webp)

(67.88 KB 512x704 Drake tiddies3.webp)

>>158 So far it's pretty decent, and even if you get an image that could be better you can put it in the img2img blender and it fucking werks. Also enjoying my Drake tiddies.
>>162 >Because if it does, that means people have used it to produce that material en masse, And the material/fetishes I like can also be made en masse as well. Either way, I won't have to look at an artist's work and see that they drew some tranny commission (some are faggots themselves) when I can just prompt things I want to see myself. These kinds of things can still be filtered and won't likely be too popular anyway. People who try to make dough off of AI art shouldn't earn a dime either, and they won't.
>>164 >they shouldn't see a dime I don't pay for art or porn in the first place, but I think porn artists being absolutely economically distraught because their coomer fanbase is taken by dudes who are able to generate dozens of pieces consistently with decent quality would be funny as hell.
>>165 >any -oomer other than boomer Lmao. This place is done for.
>>166 What about groomer?
>>166 >This place is done for Really? You're just noticing that there are rapefugees on here from cuckchan and twatter? Mark invited them in the first place. I've seen posters using "sus", "among us", "cope", and all the other retarded lingos.
>>167 Okay, you got me. I'll give that a pass.
(580.55 KB 896x512 00002-3690691244.png)

(456.41 KB 896x512 00000-3826446309.png)

(527.87 KB 896x512 00014-1379472044.png)

A NEW PUZZLE APPROACHES!
>>170 pepe reeing?
>>171 Incorrect. Second and last pics are a clue to the subject.
>>170 giant lady from dark souls
>>170 fuck its naked snake
>>173 I was thinking the same thing.
(431.19 KB 640x360 1364001215823.png)

>>174 CORRECT!
>>176 That was really sneaky.
(306.10 KB 512x512 00022-1446200209.png)

(323.14 KB 512x512 00021-695574356.png)

(266.94 KB 512x512 00010-204392518.png)

PUZZLE: THE SEQUEL!
>>179 That Egyptian croc kob is real nice.
>>179 >God I hope the NAI one gets leaked like the anime model did ? I have the NAI-trained hypernets for those though. Unless I'm mistaken.
(458.11 KB 512x768 1668627542668162.png)

(479.16 KB 640x640 1668634209729871.png)

(538.13 KB 512x768 1668633521881316.png)

(826.70 KB 768x1152 1668577733603.png)

(1009.96 KB 1280x1280 1668585859028.jpg)

>>181 No, there's an entirely new NAI model for furry specifically. Not the furry hypernets for the anime model. It was just released for their subscribers last week. I'm not going to pay a NAI subscription but the results I've seen on cuckchan's /trash/ are remarkable at a wide range of content. Even for non-furries, it's of interest because it's remarkably good at interactions between multiple characters (obviously sex being the main thing).
Adding "sketch" and "rough lines" to the prompt sort of bypasses the problem of disfigured hands, since it passes as an image someone hasn't finished drawing.
(257.30 KB 384x704 anime baron.png)

(217.64 KB 512x704 fuzzy baron.png)

Every anon is able to generate better images than me. I tried making some images using Doom sprites. The one that worked the best was the Baron of Hell, but it kept trying to make him look like either an anime fuckboy or really fluffy. I don't know which is funnier. >>182 >the resident thread furry is a cuckchanner I'm not going to make fun of you too much, a few of our drawfags also went to /trash/ when 8chan's future wasn't so certain. One I know said he tried posting there for a bit, but he didn't like the chaotic vibe of the board. Don't worry about that model, the technology is coming along so absurdly fast that in 6 months random hobbyists' models will probably surpass this. The speed at which it's all progressing reminds me of the video game industry during the 1990s.
(206.33 KB 512x704 bara baron.png)

It also sometimes turned him into a himbo, which were some of the better results. I'm sure this is somebody's dream come true, but it's not mine.
>>6 Something sexy in one of those styles that looks like crayon drawing. You know, where the brush strokes look like that if you look closely at the lines. I can't find any images I have in the style.
(1.01 MB 1792x2048 gain.jpg)

>>151 When compiled together it's a reverse of loss.jpg >>157 >>161 Just shoop it into a different color.
>>188 Nice, thanks for the edit my man.
>>185 >>the resident thread furry is a cuckchanner I'm not. It's the only thread I've followed on the entire site since early in the days of old 8chan. But when it comes to finding good prompts, simple quantity of users matters. Actually though those specific threads aren't as bad as expected, but I haven't looked at anything else on the board.
(909.20 KB 704x960 00005-1965782322.png)

(1.16 MB 704x960 00021-1836587056.png)

>>187 >>7 Converting a good output into a crayon drawing took some time to figure out. >>189 No problem. Those are bretty gud outputs.
(1.02 MB 704x960 00023-895881191,.png)

>>187 >>191 There's also colored pencil which is finer than crayon. Maybe that's what you're thinking about.
>>191 With this, we can finally have omori porn
Is there a way to make the lighting look different?
>>187 Here are some more colored pencil outputs using some hypernetworks.
>>194 Try one of these: (sidelighting:1.2) (backlit:1.2) (dappled sunlight:1.2)
>>194 >>196 Volumetric lighting and cinematic lighting are also ones I've seen used. Subsurface scattering also affects the way skin looks in light.
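Side note for anyone new to the webui: the (term:1.2) notation in these lighting tips is AUTOMATIC1111's attention syntax, where parentheses boost a tag's weight (a bare (tag) is roughly a 1.1x boost, if I remember the default right). A rough stdlib sketch of reading that syntax, just to illustrate the format:

```python
import re

def parse_weights(prompt):
    # Extracts "(token:weight)" spans from an AUTOMATIC1111-style prompt.
    # A bare "(token)" is treated as the webui's default ~1.1x boost.
    weighted = []
    for m in re.finditer(r"\(([^():]+)(?::([\d.]+))?\)", prompt):
        weighted.append((m.group(1), float(m.group(2) or 1.1)))
    return weighted
```

(The real webui also supports [brackets] for de-emphasis and nested parentheses that multiply, which this toy ignores.)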
>>195 Thanks for the tips. Also could you try and get some pixel art or PC-98 looking stuff?
>>165 >but I think porn artists being absolutely economically distraught because their coomer fanbase is taken by dudes who are able to generate dozens of pieces consistently with decent quality would be funny as hell. >coomer Cuckchan lingo, but correct. It's also the same reason why they hate No Fap November as well even though they could just ignore it. They get butt-hurt when they can't their shekels.
>>199 *get*
>>6 I'm requesting a fusion of Wario and Heihachi Mishima as it has come to me in a post-shit shower.
(1.41 MB 704x960 00037-3627717797.png)

(1006.30 KB 704x960 00038-709475653.png)

(834.65 KB 704x960 00043-2339455130.png)

(867.01 KB 704x960 00044-1894750315.png)

>>198 checked It doesn't recognize PC-98. (pixel art) results are not good. (sprite art) has a better chance of getting something decent, especially if you put some stuff into the negative prompt for quality, but it's still blurry, not razor-sharp pixel art that you could resize with nearest neighbor.
(523.47 KB 704x1024 Wario - Heihachi Mishima.png)

>>201 >>7 Getting it to draw Wario's face was a pain.
>>736798 >>736801 Don't know what you're using, but you should probably reinstall it from scratch only keeping the model to copy over.
>>203 Exactly what I imagined in my brain space, thank you.
>>191 >>192 >>195 Fucken A+, man.
(2.04 MB 1920x1408 Request Anchor Dog.jpg)

(1.86 MB 1920x1408 Delivery Anchor Cat.jpg)

>>6 >>7 Fished out more anchors from the depths of Stable Diffusion.
(17.70 KB 286x323 I Pity the Dead.jpg)

>>207 cute
>>207 I want a doggo ;_;
>>207 I do enjoy ones that aren't just porn.
(2.04 MB 1792x1408 Anime Girl in Toe Socks.jpg)

>>13 >>7 Need IMG to IMG to get anywhere with that.
(1.70 MB 2176x1408 Pripyat Blowout.jpg)

(1.30 MB 1260x1600 Moai Guy.jpg)

>>213 Thanks anon I like it.
>>213 10/10
>>213 This is literally something reddit or artstation would generate just to show off to twitter. But then again, good shit.
>>216 >If it's not lewd or politically incorrect in some way, it must be reddit! Nigger are you retarded?
(884.48 KB 1025x1400 Katia Managan.jpg)

>>63 >>7 For such a simple design, trying to generate it is torture since the AI doesn't know her.
>>737020 >I just used the oneclick version There's your problem. Install the good one so we can help you figure out why it's not working.
>>6 Requesting sneaky images based on daily dose and moonman.
>>220 I saw a good daily dose as a smokestack with green and purple smoke; that was what prompted me to make the guessing game
>>39 are you literally using pictures of balloons in img to img to get the gigantic bazongas? Modern problems, modern solutions.
(836.41 KB 149x181 you're getting gassed.gif)

>>219 >Install the good one >the good one Yeah, let's download half the programs ever made with a revolting mixture of ancient programs needed to run modern programs, with some random obscure shit and incorrect version options thrown in for fun. If I needed any more reason to hate computers than I already do, I would've fucking asked.
>>223 What are you even talking about?
>>223 A lot of people have reported errors with the one-click version (including myself). Install AUTOMATIC1111.
>>738066 Could be useful in extreme scenarios.
(105.39 KB 220x216 boo.gif)

>>738066 >dumb Twitter screenshot from months ago Did you mean to post this to GG? I know this is the AI art thread, but the processes of AI art and GPT generation aren't the same thing. That screenshot has also been conveniently cropped so you can't tell it's fake: the supposed article dates from years in the future. It's a think piece framed as a hypothetical future headline. Also, you'd tell very quickly if something like this was happening. GPT is impressive but it's not convincing as a real person; play with AI Dungeon and you'll see very quickly it trails off and becomes nonsensical. Any "heavenbanned" user would figure out very quickly it's happened after a back-and-forth conversation becomes incoherent and illogical. This is just Black Mirror level fearmongering.
(513.35 KB 1146x1148 64e.jpg)

>>227 >>738066 it's literally a bait article
>>228 I mentioned that in my post if you spend 20 seconds reading it.
>>229 I'm agreeing with you, nigger, that's why I linked your post to that guy
>>224 From that description, sounds like Steam to me.
>>217 To be honest there's a reason I always spice all content I enjoy. Normalfaggotry is the omicron corona of life.
>>202 Seems like it'll be necessary to make a dedicated model for pixel art, since none of the general purpose models seem capable of doing a decent job at it. >>213 🗿
>>223 >>224 Neither python nor git was version specific in the process, so I don't know what he's on about. Trying to wrap my head around how a list of directions and links is such an impassable barrier. ITT there is spoonfeeding >>37 yet that's still not enough? Just do the step, then do the next step, until there are no steps left. It shouldn't require tech support with Remote Access controlling your mouse cursor to install a handful of things. In the process of installing the Automatic1111 Web UI there's a batch script which downloads and installs a bunch of things from a list of requirements. This ensures a large chunk of the process is automated. You just have to install git and python, use git to download the Web UI as explained here >>37 , download a model.ckpt ( place it in models/stable-diffusion ), and once you've done that there's a batch script in the Web UI folder called webui-user.bat, so you double-click that batch file and it downloads the rest. Takes some time to download things, but when it's done it runs and gives you a URL to put into your web browser. After that, that's it. You've got everything.
>>223 Be honest. You weren't born before the release of the first iPhone, were you.
>>236 >>235 >>232 >>225 >>224 >>223 >>219 >>737020 >>204 >>736801 >>22 >>23 >>24 >>27 >>37 Alright, let's get this tech support and mockery shit over with. I've made a quick video. Maybe too quick. Just pause it where you need to. Kept it concise so no funny clips. Everything needed is obtainable here: https://rentry.org/voldy Video shows most things being done with the mouse cursor pointing things out. So if you need to see the installation process demonstrated, the video may help. Hopefully this demonstrates enough to at least get it installed.
>>237 >that 8chan logo animation at the end Irecognizethatbulge.png Mah nigga, I'll probably use this later thank ya m8, dunno how you keep making good videos but keep it up
>>238 >dunno how you keep making good videos but keep it up With how often Adobe Premiere crashes I don't know either.
(264.49 KB 594x667 unknown.png)

>>240 Noice.
(31.97 KB 329x283 shgydgy.png)

(10.20 KB 203x97 no_idea.png)

(65.30 KB 541x651 shiggy.jpg)

>>178 S-shiggy diggy...?
>>213 AWAKEN MY MASTERS
(313.18 KB 512x512 00009-813234746.png)

(255.12 KB 512x512 00003-1822421662.png)

>>242 Incorrect.
>>178 >>244 Is it an image of somebody putting hands on their head?
(168.01 KB 444x669 unknown-37.png)

>>245 >>6 Oh fuck! Could you try to do some like these but in blue and white porcelain instead of marble?
>>244 This is fine?
>>247 I NEED that image.
>>248 I'm on to you invader
>>248 It seems to be struggling more with that. It knows the general direction—the prompt that generated these made no mention of colour, so it's getting the blue and white from the "delft china porcelain" which means it does know what that is—but it's having a hard time merging it with the concept of the character.
(302.88 KB 512x512 00007-3189957000.png)

(333.79 KB 512x512 00016-2119506867.png)

>>246 >>249 Incorrect. Second pic is a clue.
(1.77 MB 480x480 1622495012586.webm)

>>237 Doing God's work for retards like me anon
>>254 Solid snake with the brick camo
(312.52 KB 512x512 00029-316771496.png)

(290.22 KB 512x512 00004-2976299772.png)

>>256 Incorrect.
>>257 Holy fuck, this is one hell of a puzzle. I haven't the slightest idea what it is
>>257 Second pic reminds me of Coolpecker shiggy diggy.
(840.76 KB 896x1408 1667711108697853.jpg)

(780.82 KB 896x1408 1667711176090555.jpg)

(855.02 KB 896x1408 1667709957604088.jpg)

(711.55 KB 896x1408 1667710209243376.jpg)

(723.89 KB 896x1408 1667710417783447.jpg)

>>260 >Killer Klunny From Outer Space wew
>>252 You might have to be more specific, something like "white porcelain with painted blue flower decorations" How do you generate these, by the way? You produce some of the better ones.
>>739546 >my browser(s) just show a white empty page What browser? I couldn't get it to work through Waterfox Classic, but normal Firefox did.
>>262 >How do you generate these, by the way? Not really any specific trick to it. I use either Yiffy-e18, or a 70/30 merge of that with Furry-e4. My basic prompt format is usually >uploaded on e621, [safe/questionable/explict] content, by [artists], [sex, species, and number], [other tags] though I shuffle things around as necessary, such as putting the statue parts very early in those images' prompts. And I use a bunch of negatives that are probably mostly placebo. Other than that it's just experimenting, trying a bunch of seeds until you get some promising ones, and turning knobs like CFG and prompt weighting to refine it. It can be pretty unpredictable so it does just take testing, iteration, and tweaking. For example, I normally recommend against raising CFG to 12 or 14, since it tends to fry the image. But in the first grid's particular case raising CFG that high made a sudden and unexpected shift in the composition. And in the second image you can see that tiny adjustments in weighting can also lead to sudden shifts. I guess sometimes it only takes a small nudge to push the loss function into a different local minimum. Or something like that; I don't know how this shit works under the hood.
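That prompt skeleton is easy to mechanize if you're iterating a lot. A hypothetical helper following the format described above (the function and argument names are mine, not any tool's API; note the yiffy models were trained on the misspelling "explict"):

```python
def e621_prompt(rating, artists, subject, tags):
    # Builds a prompt in the "uploaded on e621, <rating> content,
    # by <artists>, <subject>, <tags>" shape the poster describes.
    parts = ["uploaded on e621", f"{rating} content",
             "by " + ", ".join(artists), subject] + tags
    return ", ".join(parts)
```

From there you can loop over seeds, artists, or CFG values programmatically instead of editing the prompt box by hand.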
(1.09 MB 1024x1280 00000-814073393.png)

(1.03 MB 1024x1280 00001-2286639636.png)

Chicken breast.
(558.77 KB 1189x1280 3U5DEYz.jpg)

>>265 BLAZIKUNNY!
>>739560 >>739563 You'll float too.
(102.09 KB 733x1024 LOL.jpg)

>>739560 Is Qanon behind this klown???
>upgrade to a better graphics card >get errors but it still makes an output that is different from what I used to get though that might be due to the old card being one that produces incorrect results
>>269 It's making correct Asukas at least
(195.22 KB 644x275 60482887_p212.png)

>>265 >>266 Bilk!
(380.63 KB 640x512 00041-1188845746.png)

(415.15 KB 640x512 00063-2335727284.png)

(411.44 KB 640x512 00078-209651143.png)

(442.71 KB 640x512 00088-2922096304.png)

Nobody's getting challenge-anon's current test, so I'll post an easy one.
>>269 Maybe re-install Stable Diffusion stuff OR if you're using a Web UI try using it in a different web browser OR if you recently switched to a new video card you need to use Display Driver Uninstaller to fully wipe old video drivers first then do a fresh install latest video drivers for the new card.
>>272 Eat Fresh
>>272 The forbidden dish!
(458.11 KB 640x512 00059-3495364984.png)

(415.37 KB 640x512 00080-3161344114.png)

(435.61 KB 640x512 00084-1774056604.png)

I fucked up and didn't mean to post the second image. The logo was too obvious. Easy is one thing, but that was unmissable. >>274 >>275 Correct, of course.
>>276 No clue.
(346.61 KB 512x512 00016-2108543548.png)

(326.31 KB 512x512 00009-4206759911.png)

>>258 >>259 Incorrect. These should be easier.
(89.13 KB 1920x1080 Donte.jpg)

>>278 >>257 >>254 Is it Donte?
(40.62 KB 422x317 Bick Wall.jpg)

>>279 >>278 >>257 >>254 Or this image of a brick wall?
>>278 I'm going to guess pic or some derivative of it, because I really don't know for sure. I had a suspicion yesterday after >>257 and the new set pushes me more in that direction.
(175.17 KB 1000x1000 1341634610372.jpg)

>>279 >>280 >>281 Incorrect. I'm just gonna post it since no one's got it yet lol
>>739862 >still having the pictures of your friend slumped over your bed from 20 years ago now that's some Brokeback Mountain shit right there man, you sure you didn't dress as a cowboy
(1.16 MB 1027x1400 a sooka.jpg)

(1.27 MB 1027x1400 teh ray.jpg)

>>284 >high-leg hooded sweatshirt onesie How long before this abomination exists in reality?
>>285 When I saw it my dick told me it was good and I couldn't convince it otherwise.
>>739863 It looks like it would be a warhammer poster.
>>282 >dubs why do you have that hanging on your wall
>>288 It's not, I'm a different guy than poster dude. Gummy worm is just gummy worm.
>>289 I see
His arms look weirdly backwards
I just had an interesting discussion with Commissar Yarric in character.ai. https://beta.character.ai/post?post=zbFBeGk494mnveXbI2icrHbtYULsIFIehjXNMp4ZNZs&share=true It took some convincing but I think I just talked a daemonette into abandoning chaos and all things heretical.
>>291 Why do they all have thick robot cocks?
(198.78 KB 1283x624 WOW WHAT A BARGA.jpg)

>>294 I noticed the same thing but didn't want to bring it up
>>294 I thought they were supposed to be tails.
(31.16 KB 332x389 benchtails.png)

>>296 >I thought they were supposed to be tails. Maybe they could be.
>>741028 Just leave when you run out of cum.
>>741044 Speaking of robots.
>>740971 >>741028 >>741040 Dude this isn't even lolicon, these look like real kids.
>>237 reduced the filesize to easily share
>>301 wait, reduced the filesize even more
>>269 These new cards have win7 drivers?
>>302 >Install Python Ew.
>>304 sorry to tell you but basically everything relating to AI is made using python.
(365.52 KB 576x576 jesus christ.png)

>>304 Stable Diffusion runs on Python
>>304 As much as I hate python, it's very good for machine learning and AI.
>>307 you mean the numpy/tensorflow libraries, yeah that's not python, it's just a shitty wrapper for c++ libraries (90% just opengl) meant for people who only know pseudocode
>>308 I barely know shit about code, but when I heard SD needed Python, I figured as much. As revolutionary as this whole thing is, it's probably in good part being coded by hacks, and I'd bet if someone who didn't rely on Python built this shit it would probably be immensely more efficient. Anyone know anything concrete to the contrary?
>>309 >I barely know shit about code <spouts opinions about code anyway Do you use the AGDG thread by any chance?
>>310 No, but I'll be your boogeyman if you want.
>>309 Yeah it runs on pytorch, python is just how you access the underlying library that's obviously much more optimized. There's a reason it requires CUDA.
>>311 >boogeyman <he doesn't get it
>>313 Yeah, you got me, I'm totally /agdg/. I use the thread all the time and I know better than you despite never making any real gaymes.
(2.26 MB 1536x1024 tmpqs2gxgvv.png)

(2.21 MB 1536x1024 tmpw5mnjz97.png)

(2.13 MB 1536x1024 tmpgdm38_w5.png)

(2.23 MB 1536x1024 tmpuq0h_60s.png)

(2.06 MB 1536x1024 tmpega6f9ov.png)

Christ-chan with a Thanksgiving dinner.
So I was exploring the settings menu for Automatic1111's webui, and found an option that saves a text file alongside each image that has all the parameters so you can easily recreate the image later or see what prompts you used. Such a lifesaver, it's made things much easier since turning it on. Frankly, I don't know why this isn't on by default.
So I downloaded the Yiffy-e18 model and I've been playing around with sample prompts, made some decent Pokemon. But one thing I'm struggling with is figuring out how to use img2img and upscalers. For example, once I have a good 512x512 image, I'd like to increase the resolution to something like 1024x1024 or maybe even higher, but the upscale ends up just looking blurrier and actually has LESS detail than when I started. Here's an example of what I'm talking about. Both images used the same seed and prompt; I used the 512 image with a low denoising strength for the second image. Some things are better, like the mouth, but the chest floof is severely reduced and the hair on the thigh has been turned into solid color. Ideally, I want the uprezzed version to have more detail, as if it was officially created in that size. In analog terms, I want it to look like the picture was taken with a better camera. How do you achieve that effect?
(1.60 MB 2048x2400 bonefather promotions.png)

>>317 Looks like you're not just upscaling. There are some slight differences here and there, making me think you're running the upscale with a bit of denoising. Stable Diffusion's Extras tab is pretty much there JUST to upscale. Feed your image into there if you're not. Should solve your problem
(6.55 MB 3072x2048 00002-1609488125.Fixed.png)

>>317 I have very inconsistent results with standard img2img upscaling. Sometimes it looks good but sometimes it comes out very blurry, like you've got. I've seen people crank step count really high for img2img (~100) so that might help, but I haven't tested much. More often I use the SD Upscale script, which is in the dropdown at the bottom of the img2img tab. In that script, it uses an upscaler of your choice (e.g. ESRGAN, SwinIR, Lollypop) to upscale the image, then it splits it into chunks of the size you set (using the sliders that normally control total output size), and runs i2i on those chunks independently. The positives are that it combines the best of both worlds (the interpolation of dedicated upscalers, and the img2img to make the interpolated details more "real"), and that the final image resolution scales with run time, rather than VRAM. This Marisa image would have been impossibly large to do in one go without OOMing. The negatives are that since it works on each chunk independently, any given chunk will usually not match the prompt very well, and different chunks can get inconsistent with each other. But often that can be resolved with low denoising, or by taking multiple runs with different seeds (or even different prompts) and merging the best bits together. Of course, that sort of mashup is useful even outside the Upscaler; compare the first image of >>15 to the original >>17, where the eyes in the final version were taken from another txt2img generation with a slightly adjusted prompt on the same seed, and pasted in with GIMP before the upscaling. For more detail see https://rentry.org/sdupscale I don't generally recommend raw upscaling like >>319 suggests though. Usually the SD Upscale script is outright better, since it uses the exact same upscalers and then runs i2i to tweak them.
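To make the chunking concrete, here's a toy sketch of what the SD Upscale script's splitting step does, with images as nested lists of pixels and a stand-in callable where the real script would run img2img on each chunk (the real script also blends the overlapping seams, which this skips):

```python
def tile_origins(length, tile, overlap):
    # Window origins along one axis so that `tile`-wide windows cover
    # [0, length) and neighbouring windows share at least `overlap` px.
    if length <= tile:
        return [0]
    step = tile - overlap
    origins = list(range(0, length - tile, step))
    origins.append(length - tile)  # clamp the last window to the edge
    return origins

def sd_upscale_chunks(image, tile=512, overlap=64, img2img=lambda t: t):
    # Run the stand-in img2img over each tile of `image` (a list of rows)
    # and paste the results back; later tiles overwrite earlier overlaps.
    out = [row[:] for row in image]
    for y in tile_origins(len(image), tile, overlap):
        for x in tile_origins(len(image[0]), tile, overlap):
            chunk = [row[x:x + tile] for row in image[y:y + tile]]
            for dy, row in enumerate(img2img(chunk)):
                out[y + dy][x:x + len(row)] = row
    return out
```

This is also why each chunk "doesn't match the prompt very well": the per-tile pass only ever sees its own crop, never the whole composition.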
>>316 It's mostly not necessary since it already saves prompt info in image metadata. I have a folder I set aside for original generations, so that even once I edit them (and GIMP wipes that metadata) I have those originals to reference. I figure it's more useful to have the actual images than to use the text files and not remember what anything actually gives.
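On the metadata point: the webui embeds those parameters as a PNG tEXt chunk (under a "parameters" keyword in the builds I've used — treat that keyword as an assumption). A stdlib-only sketch of pulling text chunks out of a PNG, which also shows why any site that re-encodes or "optimizes" images destroys them:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_png_text(data):
    # Walk the PNG chunk stream and collect tEXt entries (keyword -> value).
    # CRCs are skipped, not verified; this is a reader sketch, not a validator.
    if data[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    found, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, value = data[pos + 8:pos + 8 + length].partition(b"\x00")
            found[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return found
```

Pillow can do roughly the same via its PNG plugin if you'd rather not hand-roll it.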
>>319 >>320 Thanks for the ideas! I went ahead and used the Extras tab to do a basic upscale with ESRGAN and that worked pretty well, although the result had a sort of "crusty" look to it. So then I tried putting the upscale back through img2img and experimented with different settings to get some interesting and great results. All of these are using the same seed, which helped with creating an apples-to-apples comparison of settings. Here are the results: >CFG 7, denoise 0.4 <good baseline, mostly fixes the mouth and got rid of the crusty upscale artifacts. Oddly, the style changed a little bit (such as on the chest floof) >CFG 7, denoise 0.6 (not pictured) <Fixed the mouth, but the midriff under the left breast is a bit odd, and a lot of the cleavage floof was tragically lost. Increasing denoise strength further caused the body to morph in odd ways (at 0.8 the arms completely disappear). 0.4 seemed to be a good sweet spot in this experiment. >CFG 14, denoise 0.4 <the image is getting sharper with more contrast, with additional fluff on the arms. Her face hair has some black on the fringe. I'm liking this direction. >CFG 20, denoise 0.4 <darker shading, a tree appears on the edge of the frame for some reason >CFG 28, denoise 0.4 <ok this might be taking it too far, but it's very striking nonetheless. Her ear hair now looks very much like flame now that it's gained deep wavy locks. Her tail is burnt black at the edges Next I'll have to try tweaking the sampling steps. If I were to continue, I think I could take the best results here and try using them as input for another round of img2img to try and fix some of the remaining issues, such as the weird mouth. Also I need to try inpainting; I haven't got the hang of that yet. Next time I'll try using the SD Upscale script technique, but so far it looks like I got pretty good results from a simple dumb upscale into img2img tweaking.
>>322 >a basic upscale with ESRGAN and that worked pretty well, although the result had a sort of "crusty" look to it Yes, ESRGAN puts a lot of importance on "sharpness". Which upscaler is best depends on the art style, and it can take some experimenting. In fact, I even mix them pretty often. The foxes in the first two images were 4x upscaled, with either SwinIR or Lollypop for the first round (I don't remember). ESRGAN in the first round led to some tiny noise getting blown up, but the other upscaler was too smooth; both together resolved it. 4x upscaling is never perfect though, as you can see from the artifacts along both images' left arms. Meanwhile, for the hyena, I ran the same image through the upscaler two separate times, once Lollypop and once ESRGAN, and merged them together in GIMP with some level of transparency. That added a bit of the texture ESRGAN over-emphasized but Lollypop over-smoothed away. Then I erased the ESRGAN layer entirely for the clothes, since the pure Lollypop texture suited them better. And for the alien in the fourth image, almost all is ESRGAN, except the maw, since ESRGAN over-sharpened that into an absolute mess of red and white pixels that looked more like noise than any structure. I think the mouth was SwinIR in this case since it was before I installed Lollypop. Or it might have been BSRGAN? Anyway, when you zoom in on that part the difference in noise and sharpness is obvious, but it still looked better than either alone. >Next time I'll try using the SD Upscale script technique, but so far it looks like I got pretty good results from a simple dumb upscale into img2img tweaking. Yeah, if you have the VRAM to get the resolution you're after, then manually plugging an upscale into i2i is basically the same as the Upscale script but without needing to worry about the chunking effect on prompts.
The script really is most useful when getting large, and also for being able to do both steps at once (particularly useful if you're trying different combinations of upscale algorithms and i2i settings). Come to think of it, I assume you could probably set the chunk size in the script to the upscaled size, so it'd only do one chunk, and it would just be a simplified pipeline for what you did. I ought to try that.
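For what it's worth, the GIMP layer-opacity merge described above is just a per-pixel weighted mix, plus an optional mask for the erased regions. A toy grayscale sketch (nested lists of pixel values, nothing GIMP-specific assumed):

```python
def blend(base, top, opacity=0.5, mask=None):
    # Mix `top` over `base` at the given layer opacity; where `mask` is
    # False the top layer is "erased" and the base shows through, like
    # deleting the ESRGAN layer over the clothes.
    h, w = len(base), len(base[0])
    return [[round(base[y][x] * (1 - opacity) + top[y][x] * opacity)
             if (mask is None or mask[y][x]) else base[y][x]
             for x in range(w)] for y in range(h)]
```

So "a bit of ESRGAN texture" is literally a low opacity value, and the clothes-erasing step is the mask.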
>>321 >It's mostly not necessary since it already saves prompt info in image metadata. That's true, but you have to be careful because most sites and a lot of software will remove the metadata from files when you upload. For example, all the images posted in this thread by other people give me nothing when I put them in the PNG Info tab of Automatic1111, which is a huge bummer because I'd love to steal some of 401d53's prompts. With a text file I know I'll always have that data stored somewhere safe. Plus in the future I can imagine the possibility of using a script to grab prompts from text files to easily automate the process of making more images from my successful runs. I recently came across a video demonstrating some scripts that do something similar. You can have a promptfile where each line is a different prompt, and it'll generate images for each prompt in one go. So you could try different artists in your prompt, or things like that. https://yewtu.be/watch?v=YN2w3Pm2FLQ I haven't tried these scripts myself yet but I definitely intend to. They seem really useful for iterating through the many different combinations of settings and prompts that are necessary to find the look you're going for. >>323 >And for the alien in the fourth image, almost all is ESRGAN, except the maw, since ESRGAN over-sharpened that into an absolute mess of red and white pixels that looked more like noise than any structure How did you use a different upscaler for just one part of the image? Is that where you upscaled the image twice and selectively blended the results together in Gimp?
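The promptfile idea is simple enough to sketch without those scripts. The loader below is plain stdlib; the submitter assumes a local webui launched with --api and uses the /sdapi/v1/txt2img endpoint — treat both the endpoint and the payload keys as assumptions that may vary between versions:

```python
import json
import urllib.request

def load_prompts(path):
    # One prompt per line; blank lines and '#' comments are skipped.
    with open(path) as f:
        return [ln.strip() for ln in f
                if ln.strip() and not ln.lstrip().startswith("#")]

def submit(prompt, url="http://127.0.0.1:7860"):
    # Fire one generation at a locally running AUTOMATIC1111 instance
    # (hypothetical defaults; requires the webui started with --api).
    payload = json.dumps({"prompt": prompt, "steps": 28}).encode()
    req = urllib.request.Request(url + "/sdapi/v1/txt2img", payload,
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # JSON body with base64-encoded images
```

Then `for p in load_prompts("prompts.txt"): submit(p)` is the whole batch run.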
>>324 >all the images posted in this thread It is a pain. On the bright side, boorus and Pixiv do preserve it, though Pixiv may lose it anyway if the artists edit in censorship. >I'd love to steal some of 401d53's prompts Any in particular? >Is that where you upscaled the image twice and selectively blended the results together in Gimp? Yes. I just stuck the ESRGAN version in a layer on top of the other, erased the mouth part so the other upscale showed through, and called it a day.
>>325 >Any in particular? Are these yours? These are from the first couple threads. >Yes. I just stuck the ESRGAN version in a layer on top of the other, erased the mouth part so the other upscale showed through, and called it a day. Nice. Using traditional image manipulation software to tweak the AI output is a whole other dimension that I haven't explored yet.
By the way, for anyone interested in the prompt for the Braixen image in >>317, here's that: >uploaded on e621, explict content, anthro female (braixen), by (Personalami), honovy, Michael & Inessa Garmash, Ruan Jia, Pino Daeni, darkgem, iskra, foxovh, ross tran, (seductive pose), ((detailed face)), ((detailed fluffy fur))), detailed realistic painting, (((shaded))), extreme detail, (detailed eyes), (pinup), (seductive smirk), huge breasts, solo female, (realistic texture),crisp image, high resolution <Negative prompt: extra limb, clothes, underwear, deformed, blurry, bad, disfigured, poorly drawn, mutation, mutated, ugly, (((out of focus))), depth of field, focal blur, (extra body parts), feral, sex, duo, human, breasts And yes, "explict" is on purpose. Apparently when they trained this model, they misspelled the tag somehow.
I'll throw in a random non-furry image from my output folder so it's not just a thread of spoilered furry porn. >>326 Yeah, those are mine. The first one won't be much help though. The original txt2img is >e621 (wolf) (dominant female) (looking straight up) explict by (Pino Daeni) chunie (bonifasko) greg rutkowski artgerm anthro female (((imminent facesitting))) squatting (pov) smug evil (evil smile) smirk (worm's-eye view) first person view pussy solo standing directly over (low-angle view) center frame >Negative prompt: (poorly drawn face), mangled, mutated, extra limbs, panels, (watermark), feral >Steps: 50, Sampler: DDIM, CFG scale: 10, Seed: 754879069, Size: 512x512, Model hash: 50ad914b But you'll obviously notice that it doesn't really describe the image at all. It was a "failed" generation I decided to play around with, and I converted her to a futa through inpainting, which would be almost impossible to replicate. As consolation, here's a later version I made with fixed hands. The other two were very similar. In fact, the witch was the result of a large batch of the same prompt applied to various fantasy classes. The wolf was >uploaded on e621, questionable content, detailed fluffy fur, by michael garmash, ken sugimori, Pino Daeni, fur tuft, fluffy tail, female anthro wolf, hair, grey fur, solo, cute, detailed realistic painting, (armor:1.2), (shaded:1.2) extreme detail, front view, (fullbody:1.2), suggestive, loincloth, unconvincing armor >Negative prompt: male, penis, balls, fleshpile, deformed, blurry, bad, disfigured, poorly drawn, mutation, mutated, extra limb, ugly, horror, out of focus, depth of field, focal blur, baby, pussy, nipples >Steps: 40, Sampler: Euler a, CFG scale: 7, Seed: 3332901358, Size: 512x768, Model hash: 60cb5fb9 With the highres. fix off (by accident, but it turned out for the better).
The witch was >uploaded on e621, sfw, detailed fluffy fur, by michael garmash, ken sugimori, Pino Daeni, fur tuft, fluffy tail, female anthro fox, hair, brown fur, solo, cute, detailed realistic painting, (fantasy witch:1.2), (shaded:1.2) extreme detail, front view, (fullbody:1.2) And otherwise the same. Note that that model hash corresponds to a 70/30 merge of yiffy-e18 and furry-e4.
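For anyone confused by the (term:1.2) pieces in these prompts: that's AUTOMATIC1111's attention-weight syntax (an explicit number scales the term's weight; plain parentheses multiply it by roughly 1.1 per pair). As a rough illustration only, not the webui's actual parser, a toy reader for the explicit (term:weight) form could look like:

```python
import re

# Toy parser for AUTOMATIC1111-style "(term:1.2)" attention syntax.
# Illustration only; the real webui parser handles far more cases.
WEIGHTED = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    """Return (term, weight) pairs; unweighted comma-separated terms get 1.0."""
    pairs = []
    pos = 0
    for m in WEIGHTED.finditer(prompt):
        # Everything between matches is plain, unweighted terms
        for term in prompt[pos:m.start()].split(","):
            if term.strip():
                pairs.append((term.strip(), 1.0))
        pairs.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    for term in prompt[pos:].split(","):
        if term.strip():
            pairs.append((term.strip(), 1.0))
    return pairs

print(parse_weights("hair, grey fur, (armor:1.2), (shaded:1.2)"))
```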
>>328 Thanks anon, you rock.
>>328 >dickgirl furry You've brought shame on your family
>>322 >>323 >>326 >>328 >Furshit >But instead just rainbow colored smooth skin humans with ugly dog heads, big uguu anime eyes, and maybe a hint of fluff in the outline <Now it's actually fluffy animals girls with ugly dog heads and big uguu anime eyes Das it mane.
>>323 4th one looks like a blood corrupted outrider knight.
>>237 >>301 Still looks fine. >>302 This one has noticeable artifacts especially when fast-forwarding through the installer prompts or snap zooming. Why isn't it 720p anymore? Well, whatever. If you want a better source to start from here's a bigger MP4 of the video: https://anonfiles.com/xad1A4Jby1/Quick_Guide_-_Install_Stable_Diffusion_mp4
>>303 The 3060? Yeah, in fact, they just issued a security update earlier this week.
>>306 >Stable Diffusion update removes ability to copy artist styles or make NSFW works https://archive.ph/s4FTB
>>335 So which public AI generator creates NSFW smut now?
>>335 >it's an update that makes everything worse Every fucking time. Are there still copies of the old version?
>>743020 What error messages were you getting? >I'll just be waiting for the real actually good version that's a nice little self contained program that fucking works. You are going to be waiting a long time for that.
>>335 That has nothing to do with Python. Also this article is vague. What's been filtered, the default model? If so that doesn't matter, you can switch that out or train porn on it as a base.
>>743020 >also needing Visual studio Everything needs Visual Studio C++ runtimes. Video games especially. >1080TI It should recognize that GPU and be able to use CUDA. My suspicions are you need to either reinstall video drivers, reinstall runtimes, or your OS install is fucked up somehow, all assuming you actually let the dependencies install uninterrupted - if you didn't let that finish of course it's broken. >waste my time both downloading and installing half the internet >because of the 10 billion dependencies FFS you whinge about having to install a few things manually, then the Automatic1111 batch script does the rest for you but you whinge about that too. Yeah, it takes a long time. The video makes a point of assuring everyone that it's "not stuck!" during that part because they might assume it is. Post the errors if you actually want help. If you have no intention of doing anything beyond failing and complaining then any mockery you receive is earned.
>>335 >Stable Diffusion update removes ability to copy artist styles or make NSFW works Should have removed all the watermarks they fed their AI model. <It also includes a new NSFW filter from LAION designed to remove adult content There's nothing in the article saying that filter can or cannot be toggled. Reads like click bait. Other models will do what the nerfed SD model won't. The leaked Novel AI model was doing so already.
(263.67 KB 550x1772 DeviantArt.png)

LOL
>>335 >>337 >>341 It's inevitable that this will happen. The threat of legal action arising from DMCA or from AICP will eventually just get too great. Some developers will get the picture earlier than others and act preemptively, that's all.
>>343 >DMCA You can't DMCA something you didn't create. All the AI person has to do is demand the artist show original work. They won't be able to do it, because the AI generated it, not them. Countersue and put these faggots out of business. Hell, DESTROY THE DMCA ITSELF, because you can't enforce something when anyone can create their own perfect copy uniquely, legally.
>>337 The old versions of Stable Diffusion are open source and can't be removed from the Internet, nor can all the custom models and forks communities have made. You don't have to worry about that. However, it remains to be seen if the open source community will be able to keep up with closed source companies when it comes to innovating and improving AI. Open source has the advantage of not gimping their models to fight copyright, but this stuff is complicated and expensive. Apparently it cost around $600,000 just to pay for the GPU time to train Stable Diffusion 1. So any open source project will need a crazy amount of donation money or a large volunteer community to train future models. This isn't something that anybody can just create in their spare time.
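The $600,000 figure is at least the right order of magnitude: commonly repeated (unofficial) estimates put SD v1 at around 150,000 A100 GPU-hours, and cloud A100 rental runs a few dollars an hour. Both numbers below are loose assumptions for illustration, not official figures:

```python
# Back-of-envelope training cost check.
# Both inputs are rough public estimates, not official numbers.
a100_gpu_hours = 150_000   # commonly cited order of magnitude for SD v1
cost_per_hour = 4.00       # assumed cloud rental rate per A100, USD

total_cost = a100_gpu_hours * cost_per_hour
print(f"~${total_cost:,.0f}")
```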
>>344 >what is sampling I'm sure you're too young to remember the big blowup a few years ago about samples being used in music and how the regs got tightened up about that. And image sampling is all that AI programs do. If even one copyrighted image is used to train a model then that model is in a legally perilous position. Same way that if you started off with a copyrighted picture from whoever and then you proceeded to photoshop it until no more than 1% of the original remained, you're still fucked if the rights holder wants to make a complaint against you. It sure doesn't help their case, or yours, that the fucking AIs keep trying to reproduce the Getty watermark and occasionally even artists' signatures. Chances are ALL of the current models are "contaminated" and will have to be trashed and replaced with new ones trained off of open source images.
>>346 I thought AIs merely "learn" from the image and create their own thing, more akin to reverse engineering than sampling? It's not like the AI is editing the original image or otherwise touching it in any real way besides taking it into its bank, is it?
>>743020 It never ceases to amaze me when I see people like you who are too retarded to install a program that does everything for you. I know there are people that retarded out there, but I don't know how they figured out how to log in to their computer or install a browser. >>341 >There's nothing in the article saying that filter can or cannot be toggled. Reads like click bait. How exactly do you expect to "toggle" a setting on the training set for a model that has already been trained on the censored set?
>>347 It's not even really a "bank". This shit is trained on billions of images. Even with maximum compression, you're not going to be able to fit the images on your computer for your local version. Ergo, it does not have the images it's been trained on and is not directly referencing them when generating images. If it's not literally taking any part of existing images, it can't be violating the copyright of those particular images. It's akin to suing someone for looking at a painting and then drawing a very imperfect copy of that painting from memory.
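You can sanity-check this with arithmetic. A SD 1.x checkpoint is roughly 4 GB, and the LAION-derived set it was trained on contains on the order of 2 billion images (both figures rough, for illustration), so even if the model were nothing but a compressed archive it could devote only about two bytes per image, nowhere near enough to store them:

```python
# Rough figures for illustration: ~4 GB checkpoint vs ~2 billion
# training images in the LAION-derived subset used for SD v1.
checkpoint_bytes = 4 * 1024**3
training_images = 2_000_000_000

bytes_per_image = checkpoint_bytes / training_images
print(f"{bytes_per_image:.2f} bytes per image")
```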
>>346 When you sample in music you are using clips of the original music. That is the sample. You can point to specific parts of the remixed song and say "here is the sample". The AI is not using any of the original images. There is no "sample". >if you started off with a copyrighted picture from whoever and then you proceeded to photoshop it The AI is not starting off with a copyrighted picture. You can set the step count to 1 if you want to see roughly what it starts out with. It's noise. >the fucking AIs keep trying to reproduce the Getty watermark and occasionally even artists signatures. Of course they do. It's a common, repeated feature that frequently appears, no different than faces. If you locked a human artist in a box and had him learn to draw off the training set, he'd be able to draw a Getty watermark too. >Chances are ALL of the current models are "contaminated" and will have to be trashed and replaced with new ones trained off of open source images. What does "open source" mean here? All of the images are freely available online, since that's where they got them. Posting an image online is implicitly giving permission for it to be seen. Furthermore, permission to see it is permission to learn from it. EVERY human artist learns heavily from other artists' work, to the point that there is a specific term, "outsider art", for the exceptions that still learn from others' work but only mildly. It is impossible NOT to learn from the images one has seen. By giving permission for the images to be seen, people are giving permission for the images to be seen by artists, from which it necessarily follows that they are giving permission for the images to be learned from. But if you mean to only train AI on images with explicit permission to use them to train AI, then you are simply calling for an outright ban on AI image-gen.
>>347 >>349 >>350 It's still fruit of a poisoned tree and the first time someone big enough decides to send developers a C&D it'll just cause a chain reaction through the whole field. They're clearly concerned about it or SD wouldn't be removing the ability to generate images based on an artist's name or handle. >>350 >All of the images are freely available online, since that's where they got them. LOL No. That doesn't make something open source. There's PLENTY of shit on boorus that is in a grey area (fanart) or is flat out official material that is under copyright. >Posting an image online is, implicitly giving permission for it to be seen. Furthermore, permission to see it is permission to learn from it. EVERY human artist learns heavily from other artists' work, to the point that there is a specific term "outsider art" for the exceptions that still learn from others' work but only mildly. It is impossible NOT to learn from the images one has seen. By giving permission for the images to be seen, people are giving permission for the images to be seen by artists, from which it necessarily follows that they are giving permission for the images to be learned from. The law is going to end up treating "AI" differently from a human learning from some other person's art and you very well know it. >But if you mean to only train AI on images with explicit permission to use them to train AI, then you are simply calling for an outright ban on AI image-gen. Why though? If AI is truly worth its salt then it should be able to observe the physical world and incorporate that into art. Or the AI could dream and draw what it saw. Or imagine something and draw it. Humans do that, and if AI generation is equivalent to a human learning to draw art then the AI should be able to do the same.
>>351 >It's still fruit of a poisoned tree Your analogy isn't helping hide your lack of understanding about the technology.
>>351 >If machine learning is approaching human learning in one field, it ought to be totally sentient already Absolutely tech illiterate.
>>351 >There's PLENTY of shit on boorus that is in a grey area (fanart) or is flat out official materials that are under copyright. That's a completely separate issue. If fanart is a legal concern then it's a legal concern even when a human draws it, and the AI part is irrelevant. >If AI is truly worth its salt then it should be able to observe the physical world and interoperate that into art. Or the AI could dream and draw what it saw. Or imagine and draw it. I honestly can't tell if this is intentional mockery or if you actually buy into the "AI" marketing term this hard.
>>354 point is fanart has dubious IP violation just like with generated images, it's like those meme sonic fanarts that have so many different IPs from different people that as a whole if you make a copyright claim against it you're basically fighting for a <1% stake
>>352 >>353 >>354 Oh no I totally get it. AI isn't sapient or anything. It's just algorithms. But that's the whole point. You've got people going "AI is just a tool like muh photoshop and muh cotton gin" and then flipping right around with "AI is learning and producing just like YOU and ME". People are trying to have it both ways and they're trying to muddle the point about AI generated art. >If fanart is a legal concern then it's a legal concern even when a human draws it, and the AI part is irrelevant. That's the whole part of what I'm concerned about. If AI generators truly are unDMCAable like anon claims then the next step is to target the source the AI is trained off of. This AI stuff could end up causing a crackdown on fanart because of copyright concerns. And on artists posting their original art online. >>355 Doesn't stop them on youtube.
>>356 >This AI stuff could end up causing a crackdown on fanart because of copyright concerns That wouldn't really make much sense. It's the artists drawing the fanart that are more upset. To the IP holders that fanart is based off, what does it matter where the fanart came from? If they're sane enough to know that fanart is a good thing for them, they'd have no reason to crack down on AI stuff, and if they don't like fanart then why would they accept the human stuff?
>>346 >what is sampling I don't give a shit. It's not illegal to use the same frames or the same phrases or licks. Shove your condescension up your ass, because you're talking about abject bullshit that isn't law.
>>351 Anon, your legal arguments are getting increasingly retarded. There is no poisoned tree to begin with. Sampling is not occurring. You are correct that they are scared, but the fear doesn't necessarily stem from them being scared of actually violating the law; it can just as easily be the sheer threat of lawsuit costs due to companies being able to afford bankrupting people without having legal justification. They were fine with it before until the threats came in. Lawsuits cost anywhere from hundreds of thousands to millions of dollars. All that needs to be done by a corporation is a bunch of delay tactics and bullshit to fuck someone over and everyone with a brain knows it; it doesn't matter if you're legally correct if you end up bankrupt and can't pay the legal fees, especially if they bought out all the district and appellate level judges, and your only chance to win is a gamble on the Supreme Court which would take literally years in most cases. >The law is going to end up treating "AI" different from a human learning from some other person's art and you very well know it. There is a very real slippery slope that this will head down, and it will fuck up a lot of things for them if they try to do that. New kinds of machine learning are quite literally based on the ways humans learn. >a bunch of irrelevant hypotheticals that are based on AI choice and distinctly human capacity (i.e. free will) rather than AI capability An AI can draw based on photographs of the natural world; it can draw based on someone else's dreams. Humans being able to do that is completely irrelevant because the history of art and human learning is almost entirely iterative. Most people don't just "make shit up", they think up and imagine things they already knew before in new contexts, like what AI can do. I'm confident an AI can produce completely random images; your argument on that theoretical would get completely fucked otherwise.
>>348 They don't call it a "training set" they call it a "filter". The only mention of the word "train" in that click bait article is in a sentence stating that you can train it to mimic art styles anyway. What does a filter do? It excludes something. That something must exist in order to be filtered. It's like if you install ad-blocking to filter ads. The ads have to exist to be filtered. It doesn't stop ads from existing, it filters them.
>>360 LAION is the training set. If they're talking about a LAION filter then they are talking about filtering the training set. >What does a filter do? It excludes something. That something must exist in order to be filtered. Yes, because there is NSFW content in the full LAION training set, but they made a filter to exclude them from the actual training. The release announcement for 2.0 is very clear about this: >These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter.
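In other words, the "filter" runs over the dataset before training, not inside the released model, so there's nothing left to toggle afterwards. Schematically (with a hypothetical `nsfw_score` function standing in for LAION's actual classifier):

```python
# Schematic pre-training dataset filtering. nsfw_score() is a toy
# stand-in for LAION's real NSFW classifier, for illustration only.
def nsfw_score(caption):
    # crude keyword heuristic, NOT the real classifier
    return 1.0 if "nsfw" in caption.lower() else 0.0

def filter_training_set(samples, threshold=0.5):
    """Drop samples scoring at or above the threshold BEFORE training."""
    return [s for s in samples if nsfw_score(s["caption"]) < threshold]

dataset = [
    {"caption": "a landscape painting"},
    {"caption": "NSFW artwork"},
]
clean = filter_training_set(dataset)
print(clean)
```

Once the model is trained on `clean`, the dropped samples never existed as far as its weights are concerned, which is why this can't be undone with a setting.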
>>743020 >>340 Software Dev here. A question I can answer. Why does everything need the Visual Studio VC or VC++ runtimes? It's a pain in the ass. I will explain this. When you develop a program with Visual Studio, Visual Studio uses a different version of the C/C++ runtimes than Windows, and it doesn't give the option to link against the operating system version. So, you either have to dynamically or statically link your programs. Statically linking your programs requires you to know a modicum of information about how to configure a project in Visual Studio. As most developers who use Visual Studio use it because they are fucking idiots, that doesn't happen. Dynamic linking is only ever required when you use external libraries as DLLs, and only some of those (depending on resources shared across the DLL boundary). Video games generally bring in a bunch of other people's shit as DLLs, and so they dynamically link. People generally don't concern themselves with this on Linux, as the compiler uses the operating system's runtimes for the C/C++ library, and programs are usually recompiled for the Linux distribution in question. There's never a mismatch. Only groups who specifically make "run on any Linux" binaries care to statically link on Linux (or, those who want to throw the resulting program into a raw container).
>>361 Wasn't stated in the article. Had to click links to see that. So it would have to be leaked before the nerf to not be shit. Can V1 models be converted to V2? It'd require a fuck load of training to un-nerf the sd-v2.0 model if it can't even do artist styles. Creating an entirely new model for V2 would be preferable as the SD model on V1 had a ton of DREAMSTIME watermarked trash from google images which I doubt they bothered to get rid of since their focus was on lobotomizing nsfw. >>362 I always figured it required some extra effort that no one bothered with due to laziness or incompetence so everything needed a new version. You've confirmed my suspicions. It's the same with .NET Framework as well I'd assume. Any application using .NET Framework gets an update and suddenly you are forced to install a newer .NET Framework to continue using the new version of the application. Java applets can do that shit too. Also if a game is updated to the "newer" Unity engine it breaks shit and a lot has to be re-made entirely. Developing software always sounds like a pain in the ass.
>>346 >If even one copyrighted image is used to train a model then that model is in a legally perilous position. AI art isn't like traditional sampling cases at all though. A model is just a bunch of regression functions that take one number and output another. As >>349 says, a user's computer doesn't have any of the images that the model was trained on. The law is simply not settled on the topic of AI, nobody yet knows what will happen when we start seeing legal challenges. At best, we can assume that courts are full of tech illiterate judges and even more illiterate juries, so they'll probably take the most pro-corporate interpretation. >>363 >Can V1 models be converted to V2? I don't think that SD1 and SD2 are compatible like that.
>>743329 >someone gives suggestions about the only trace of useful information you provide >your response is fucking literal babytalk as you throw a tantrum blaming everyone and everything except your own incompetence I hope nobody helps you because you're too much of a nigger to deserve it.
Has anybody yet created a booru dedicated to hosting AI artwork? It's a pain in the ass trying to search around Twitter and Discord and Reddit looking for people's creations. And Lord forbid if I have to go back to cuckchan just to lurk their AI threads. I just want to browse a booru with proper tags, preferably the same tags those people used to make the art in the first place.
(32.24 KB 482x549 tired anime face.jpg)

>>743329 Wallow in your own failure then. You willingly chose to be a retard.
>>366 Never saw one. I'm surprised the AI art generated here didn't wind up on ourobooru since most everything gets dumped there. >preferably the same tags those people used to make the art in the first place Not possible given how most sites strip the tags out of the EXIF data so by the time an image makes it to such an archival site that you're suggesting there'd be no information about how it was generated. It was requested that a checkbox be added to keep EXIF data: >>>/site/7341 in the site cyclical thread, but I don't think that feature is going to happen as it's such a niche use case and might also be abused to share illegal shit to get the site v&.
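For what it's worth, AUTOMATIC1111 stores the prompt in a PNG text chunk (under a `parameters` key) rather than in EXIF proper, which is why any site that re-encodes or strips images loses it. A sketch of writing and reading that chunk with Pillow (assuming Pillow is installed; the key name matches what A1111 uses, but treat the details as approximate):

```python
import io
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def read_parameters(png_bytes):
    """Return the 'parameters' text chunk from a PNG, or None if stripped."""
    img = Image.open(io.BytesIO(png_bytes))
    return img.text.get("parameters") if hasattr(img, "text") else None

# Demo: write a PNG carrying a prompt, then read it back.
info = PngInfo()
info.add_text("parameters", "wolf girl, (armor:1.2)\nSteps: 40, Sampler: Euler a")
buf = io.BytesIO()
Image.new("RGB", (64, 64)).save(buf, format="PNG", pnginfo=info)

print(read_parameters(buf.getvalue()))
```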
>>366 AIbooru.online is an AI offshoot of Danbooru. It both preserves metadata and has a field for prompts, as well as relevant tags for images that have them.
>>368 >I'm surprised the AI art generated here didn't wind up on ourobooru Ourobooru is currently on pause while it gets archived and reuploaded to some place that isn't The Booru Project and won't shoah everything for a few wrongthink pics. >>369 >AIbooru.online >By submitting Content to the Site, you grant AIBooru a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, distribute, create derivative works from, allow downloads of, and display Your Content in all media formats and channels now known or later developed. LoL >Prohibited Content >Realistic or photorealistic images of loli or shota characters. >Off-topic images (i.e., actual photographs, game cgs/screenshots.) >Bara and scat content of any kind. <Allows loli shota, doesn't even require flipping any switches for fringe content <Singles out and bans a couple of shit fetishes the site owner doesn't like LMAO, even. >community rules >Be friendly to other users. >Don't insult, attack, abuse, threaten, or harass other users. >Don't engage in hate speech, use racist or sexist slurs, or engage in other hateful conduct. >Don't send spam. >Don't make usernames that insult, harass, or impersonate other users, or that use slurs, hate speech, or other derogatory language. Top wew. At the very least, I don't believe they're part of The Booru Project. Couldn't find the usual links anywhere.
>>370 >Ourobooru is currently on pause while it gets archived Is it? There are recent posts on it so I don't think everyone got the memo on that. >>369 >>370 >hate speech A concept based entirely on subjective emotions and which is non-existent under the law of most countries. TL;DR lol
>>371 >There are recent posts on it so I don't think everyone got the memo on that. Most people did, I think. Which is enough. So long as more shit isn't being added in bulk.
(258.34 KB 803x610 are you done.jpg)

>f1ea8b Seen some claim "borderline" in response to things ITT, but it's not enough for feds to seize the domain so of course suspicious posts appear which encourage pushing it beyond borderline thus spurring more paranoia around non-normalfag content making it more difficult to post loli at all. The angle is obvious. It's all so tiresome.
(77.12 KB 944x510 racist_ban.png)

(87.02 KB 1056x728 loli_games_not_allowed.png)

(137.03 KB 746x926 loli_banned_on_b.gif)

(159.66 KB 1268x656 nigger_banned_on_pol.jpg)

(379.56 KB 1280x698 banning_comics_from_co.jpg)

>f1ea8b Go be a lying nigger on cuckchan if you prefer it so much.
>>374 Just what you'd expect I guess, but still sad and pathetic that even their /b/ bans it. Not even edgy enough to be banned on twitter yet, but it's banned on cuckchan. Pathetic.
>>743020 I will at least agree that CUDA is a fucking pile of shit. T. ayyymd user
>>373 It's fucking LCP, you tard. Report and ignore.
>>377 Yeah, I recognized and reported them. While waiting around for a banhammer it's more gratifying to send them out the door with a middle finger as well. >>743873 >I stay the same ID, why do you think that is? You want everyone to know you're the one retard who can't get it working and you have a severe humiliation fetish?
>>378 >While waiting around for a banhammer it's more gratifying to send them out the door with a middle finger as well. Neck yourself you underage faggot. Don't feed trolls. Basic knowledge. Maybe one day Acid will learn this.
I keep trying to use GPT-3 to write mind control fetish stories for me, but it keeps getting sidetracked with "but then he realized mind control is evil and stopped" I have found the anti-Skynet.
It's been over a week since anyone posted fresh AI art in this thread.
>>382 I'm waiting on the rest of my PC parts to arrive on Monday before I start making more. My new 3060 has introduced some kind of instability in my ancient build and now my normal RAM is stuck running at 1066MHz.
>>384 I can't put my eye to it, but it looks like the rabbit rabbit girl?
>>384 Guy sipping on a mug of tears? Could be the classic jew too.
>>385 >rabbit rabbit girl Who? >>386 Nope
>>384 ISHYGDDT?
>>387 Tewi.
>>392 This one is pretty cursed
>>393 >>394 >>395 >>397 >>398 >>399 Isn't this just Stable Diffusion with the leaked Novel AI model? Because that's very much what it looks like.
>>400 No, whitey. Is very original superior Chinese creation! Prease take selfie for data collection cute anime version of you. You like, yes?
>>400 >>401 They changed it so that it can't make black people, so it's clearly a modified training set at the very least.
>>393 >>394 >>395 >>397 >>398 >>399 This is literally just AnythingV3 Stable Diffusion but tied to a proprietary chink botnet, why tf would anyone use this?
(379.46 KB 512x640 00019-4065569966.png)

(337.90 KB 512x640 00024-1257790875.png)

>the NAI anime model can draw Gardevoir fine, but the furry model turns it distinctly human How odd.
>>408 It's trying to tell you something
>>409 Is it telling me to stop wasting time on such a humanlike design, and go full furbait instead?
>>410 I mean if you are, at least don't go making gay shit.
>>411 That anon has been doing gay and futa shit since he started posting in these threads, what do you expect from a furry?
>>410 >human and canine hybrid genitalia >the AI knows It's becoming too powerful!
(1.39 MB 1100x1400 Xmas Peach Slipped in Snow.jpg)

>>6 Requesting Christmas-themed AI art from other anons. >>7 Here's one to start.
>>396 >>414 Christmas you say?
>>415 >>414 >AI thinks this is sexy
>>417 >When you ask for sexy Christmas and it gives you moon over june
>>406 God that little elf in the fifth pic.
>>410 At least that's better than giving it a blue human dick. >>420 I'm surprised, I assumed that furfags would make sure it could distinguish at least the most popular dicks. I thought they were particular about that shit.
(497.85 KB 512x640 Coca-Cola Polar Bear.png)

>>415 >>416 >>417 >>418 I still don't know what the base image is. >>420 Neat. It had to be done
>>423 CEASE
(496.27 KB 704x768 Drunk Hug Xmas Wine.png)

(387.38 KB 704x768 Drunk Hug Xmas.png)

>>424 It reminded me of those Coca-Cola Polar Bears and the idea made me lol. Have a drunk ara.
>>425 CONTINUE
(734.39 KB 704x960 Don't Sleep on the Floor.png)

(586.64 KB 704x960 Keep Your Voice Down.png)

>>425 >>426 >>7 >CONTINUE Okay. I couldn't roll the exact same hair style. Close enough.
>>420 >>425 >>427 While you're in this particular mood try to generate some sexy Krampus'. Krampii? Krampussys?
This pleases me immensely
(492.25 KB 512x704 1670374080_3036065191.png)

(521.86 KB 512x704 1670371578_512287469.png)

(534.83 KB 512x704 1670374542_1643810612.png)

(590.75 KB 512x704 1670374542_3641465419.png)

(575.10 KB 512x704 1670375241_2745956529.png)

Not Christmas-related yet, but I tried to make some wizard and sorcery themed pics and made some progress with the feather variants; probably gonna need to img2img the last one.
(293.55 KB 640x360 council of nice.webm)

>>430 Nice birds.
>>430 It looks fucking great.
(83.34 KB 360x270 kyun.png)

>>430 What The Owl House should've been
(131.15 KB 1024x768 1656572252920.jpg)

How powerful of a setup would I need for this? Can a laptop even run this or will it thermal throttle a lot?
>>434 It doesn't take much; I know there are people running it on GTX 970s, slowly. Recent laptop GPUs would definitely be able to run it in theory. Whether the thermals would be an issue as you point out, you'd have to try it and see. The particular concern would be VRAM temperatures, since this gets them quite a bit hotter than gaming does, but they tend to get less attention in cooling solutions.
>>435 I tried it out with a GTX 970 but the pictures are always pretty low quality and blurry compared to any pictures I see posted
>>436
>I tried it out with a GTX 970 but the pictures are always pretty low quality and blurry compared to any pictures I see posted
Without details of which model, prompt, and settings you're using, and what your output was, it's a bit hard to diagnose what went wrong. Lacking this information I can offer the following generic suggestions:
Put "blurry" in your negative prompt.
Try a different model. Some models just don't work well with some subject matter.
Increase the width and/or height of the image. If the subject matter is too cramped in a 512x512 image, increasing the size may give the details space to come through. I would suggest a limit of 768, as going higher increases the chance of repetition in the image.
Try copying other people's prompts and see if you get the same (or at least similar) results. Make sure you use the same settings; especially make sure the model, sampler, and seed match. If you don't get similar results, either you messed up some of the settings, or there's something wrong with your installation.
>>294 >>295 >thick robot cocks The fact that you both were thinking about robot cocks says a lot about you two. >>296 >Robot tails I was thinking the same, anon.
>>438 COCK ROBOT COCK
(988.93 KB 1024x1440 media_FjdBydKaUAEOCJ0.jpg)

(1.12 MB 1024x1536 media_FjUPC0yakAEn2iz.jpg)

(980.99 KB 1024x1536 media_FjZX1hbaMAAESoC.jpg)

(942.68 KB 1519x798 media_FjOshMnaEAMKY6n.jpg)

(1.01 MB 1152x1792 media_Fi-jerjacAA9wCx.jpg)

Someone here with Midjourney? I took these from the Radio_Poodle account, this guy loves boats.
(654.70 KB 1000x1520 101152822_p0.jpg)

(666.38 KB 1000x1520 101152822_p1.jpg)

>>439 Robot cocks?
(471.29 KB 512x704 Please no.png)

>>428 To be honest I'm not really sure what to aim for in a sexy Krampus. But goat demons don't seem to work too well. In my usual models it's too fond of giving goats beards and awful teeth, and generally just gives hideous results.
>>442 Just make her an anime demon girl with goat horns, goat hooves for feet, and chains wrapped around her.
>>442 You know if you could fix the teeth she would be pretty alright.
(151.07 KB 976x549 shittyprompt.jpg)

>>3 A question for people who are using the AI: let's say that I want to use as a prompt a (shitty) composite picture like this. Would NovelAI/the Danbooru AI/Stable Diffusion be able to take the picture and create a new one seamlessly, or does it need a lot of prompts? Because I've seen people using shitty self-made drawings as prompts, but I don't have any definite answer. Additionally, how many people or objects can I put in the prompt picture before the AI just gives up?
>>445 >seamlessly Once an image is generated, some of the models let you do a lasso selection of the parts of the image that look incorrect or which you want to change, while leaving the rest. Then you just iterate through a few of its changes until you find one that fits correctly or which you like better. I think. That's what I've seen. The newest hardware in my computer is 8 years old, so fuck me if I'll ever be able to play around with this shit.
(2.45 MB 1920x1024 grid-0204.png)

(2.66 MB 1920x1024 grid-0205.png)

(2.51 MB 1920x1024 grid-0206.png)

(2.48 MB 1920x1024 grid-0207.png)

(2.54 MB 1920x1024 grid-0209.png)

>>445
<0. Crop your image down to 960x512, because the AI likes images at multiples of 64 pixels.
<1. Put your image into img2img and let the AI do what it wants (no text in the prompts).
Blurry mess, and the subject isn't what's in your pic. Let's fix the blurriness.
<2. Added negative prompt: lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, wide hips, monochrome
No longer blurry, but still isn't the subject. Let's describe it in the prompt.
<3. Added positive prompt: a man holding a gun to a goldfish, with a lake and mountains in the background
Much better. Let's see if we can tune it up.
<4. Changed positive prompt: a man holding a goldifhs and pointing gun at the goldfish, with a lake and mountains in the background
Not a huge change, oh well. Now let's move on to putting Walter White into the prompt.
<5. a walter white holding a goldfish and pointing gun at the goldfish, with a lake and mountains in the background
Definitely on topic, but needs work (and yes, the AI is shit at guns). Next comes the hard part of iterating the prompt, generating fucktons of images, and/or using inpainting to get it just right. However, I'm too lazy to go further for this example, and the rest is left as an exercise for the reader.
All of the above was done using a BerryMixer model, Euler A, steps 20, seed 5, CFG scale 7, and denoising strength 0.75.
>create a new one seamlessly or does it need a lot of prompts?
You'll need some level of prompt to get the image to do what you want. Reducing the denoising strength to something much lower will make it less creative and follow the source image more closely, but sloppiness in the creation of the source image might then shine through.
>Additionally, how many people or objects can I put in the prompt picture before the AI just gives up?
The AI won't give up per se; however, you do run into the problem of concepts bleeding across to other objects in the scene. Try to make just the shoes red? You might get the briefcase coming up red too. Try to make a pregnant-looking woman? It might make the man pregnant too.
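A quick helper for step 0 above, snapping dimensions down to the nearest multiple of 64 before feeding a source image to img2img. This is just a sketch; the function name is illustrative, but the arithmetic matches how the 960x512 crop in the example was chosen:

```python
def snap_to_64(width: int, height: int) -> tuple[int, int]:
    """Round dimensions down to the nearest multiple of 64,
    which Stable Diffusion's latent space expects."""
    return (width // 64) * 64, (height // 64) * 64

# e.g. a 1000x540 source image becomes 960x512
print(snap_to_64(1000, 540))  # -> (960, 512)
```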
(29.44 MB 4560x5734 xy_grid-0003-5.png)

>>445 >>447 Decided to do a quick follow up showing the effects of changing the denoising strength, using the final prompt.
(65.40 KB 711x761 meow spaghetti.jpg)

Stable-Diffusion utterly refuses to use my GPU even though it should technically work and defaults to CPU no matter what I try. Everything looks bad and takes a billion years to render. I am defeated.
>>449 Do you have CUDA?
(99.72 KB 1280x719 russian raccoon.jpg)

>>450 I installed some CUDA toolkit shit but it still doesn't work. I read that my GPU should support CUDA. I tried doing shit with Anaconda but it doesn't create menus or whatever. I tried an old version that installs for a thousand hours, and it worked initially but then refused to do the install git command. I tried manually installing PyTorch and that didn't work either. I mean, I'm at the absolute bottom of specs with a 1050 Ti and Win 7, but it should be working. I checked drivers, installed Microsoft's spyware updates. I don't know anymore.
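For anyone stuck like this, a quick sanity check of whether PyTorch can actually see the GPU, independent of the web UI, looks something like the sketch below. It degrades gracefully if PyTorch isn't even installed (the function name is just illustrative):

```python
def cuda_status() -> str:
    """Report whether PyTorch is installed and can see a CUDA device."""
    try:
        import torch
    except ImportError:
        return "pytorch missing: install it first"
    if not torch.cuda.is_available():
        # Common causes: CPU-only torch build, driver/toolkit mismatch,
        # or a GPU too old for the installed torch version.
        return "cuda unavailable: check drivers and torch build"
    return f"cuda ok: {torch.cuda.get_device_name(0)}"

print(cuda_status())
```

If this prints "cuda unavailable", the web UI will silently fall back to CPU no matter what its own settings say.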
>>447 >>448 Thank you so much, anon. Obviously I made it as shitty as possible just as an example, and I'd go ahead and curate it better, but I'm satisfied with your answer. Speaking of, if /v/ decides it doesn't want to keep these threads up anymore, I made >>>/ais/ just in case (/ai/ was already taken and being used for some idol anime; didn't have the heart to steal that name). >>451 If you're on wangblows, have you tried installing it through Spyder? I think there are a few tutorials on how to do it: https://archive.vn/RZUQM Also check if your GPU is compatible: https://developer.nvidia.com/cuda-gpus
>>447 Any more helpful advice for GTX 1060 users like me?
(9.51 MB 2048x3072 grid-1396.png)

>>449 >>451
>CUDA trouble
I don't use Windows, so the depth of my advice is limited. Your first step is to troubleshoot your CUDA installation. Find some NVIDIA or third-party tool that tests whether CUDA works and whether it uses your GPU. If CUDA isn't working in that test, you'll need to fix your CUDA installation, and possibly upgrade or reinstall your NVIDIA drivers. If CUDA is working, then something in PyTorch doesn't like your NVIDIA install. Go look at the PyTorch website for troubleshooting documentation and advice.
>Everything looks bad and takes a billion years to render.
If it's any consolation, it would have looked bad even with the GPU working. CPU and GPU should give very similar results for the same inputs. Making it look good depends on using a good prompt and settings.
>>452
>Thank you so much, anon. Obviously I made it as shitty as possible just as an example, I'd obviously go ahead and curate it better, but I'm satisfied with your answer.
Glad it was of use.
>Speaking of, if /v/ decides it doesn't want to keep these threads up anymore, I made >>>/ais/ just in case
I don't see these going away any time soon. Regardless, you'll want some rules and content if you don't want it to just look "parked".
>>453
>Any more helpful advice to gtx1060 users like me?
Have patience and learn to live with the limits of your VRAM? To be honest, the only real advice I can give for a 1060 is to read up on the trade-offs between --lowvram and --medvram. Any other advice I can give would be generic to any setup. Here's a half-rambling collection of tips from my experience.
<Prompt crafting
Odds are you'll spend a lot of time iterating a prompt to get it "just right".
- Choose a model suitable for what you're doing. Pay attention to the "language" the model uses. For example: Stable Diffusion uses natural language; Waifu Diffusion and NovelAI use danbooru tags; and Yiffy uses e621 tags.
Of course, everything derived from Stable Diffusion inherits its natural language too.
- Set your batch size as high as your VRAM allows (this will change based on your image size).
- Set the number of batches per run as high as your patience allows.
- Use a constant seed. Any number can be chosen, but you want to keep it constant. When you're trying to craft the perfect prompt, you want to be able to judge whether your latest change had an effect. If the seed is randomized each run, you'll be left wondering if improvements or failures are due to prompt changes or luck of the dice roll.
- Use brackets "()" to emphasize things that are important, or add them if the model isn't giving them enough attention. You can nest brackets for extra emphasis, or use the ":" number notation. "(some phrase)" = "(some phrase:1.1)", "((some phrase))" = "(some phrase:1.21)", and "(((some phrase)))" = "(some phrase:1.331)".
- I generally use a limit of "(some phrase:1.3)" for emphasis. Going much over this results in over-emphasis, with the model trying to inject the phrase everywhere it can vaguely fit. However, if you go 1.4 and over but don't see over-emphasis, one of two things is happening: the phrase has little or no effect, or other things in your prompt are counteracting it.
- Try to keep your adjective immediately before its subject. This can sometimes reduce the amount the adjective bleeds over into other things in the picture, but it's not a guarantee.
- Try synonyms and related words if a word doesn't appear to be doing anything, or not enough.
- I typically start a prompt with a short phrase or sentence describing what I want in the picture, then add ","-separated phrases and words to fill in details.
- To put something in the background, write "with X in the background", e.g. "with a busy market street at night in the background".
- Remember to use the negative prompt to get rid of or reduce unwanted details.
Ladies with their legs wide open in the wholesome picture you're trying to make? Use "spread legs" in the negative, or maybe "cameltoe".
- Simple smiles are too much? Try using adjectives on your smiles, or be specific about expression or mood.
<Inpainting
- Use "inpaint at full resolution". This will take your masked area, plus some padding, and enlarge it to your full image size. It will then inpaint that. The output is then scaled back down and pasted into the original image. This can work well for repainting fiddly details like hands. It doesn't work well if the outer bounds of your mask encompass most of the image, and sometimes zooming in too much stops it from working well.
- If using an image editor, try to make things look less wrong before doing a roll with inpaint.
- Start with the same prompt used to create the image, but don't be afraid to change it if necessary.
<Misc
- Use the X/Y plot script to experiment with the effects of changing parameters. It can help you get a feel for what the knobs do, while putting it in a nice chart.
- Forgot what prompt you used in a past image? By default this information is saved in the image itself. Use the PNG Info tab to look at the prompt and settings. (Sadly, 8chan's software strips this information from PNGs.)
- Look at other people's prompts to see how they achieved what they did.
Settings for the image batch (using a Berry Mixer model): a catgirl wearing power armor sitting in a helicopter cockpit, with a cityscape at night in the background, red hair, flirty smile
Negative prompt: lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, monochrome
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 5, Size: 512x768, Model hash: 579c005f, Batch size: 8, Batch pos: 0
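The bracket-emphasis arithmetic in the tips above (each level of nesting multiplies the phrase's attention weight by 1.1) can be sketched as a tiny helper. The function name is illustrative, but the numbers match AUTOMATIC1111's attention syntax:

```python
def nested_weight(depth: int, base: float = 1.1) -> float:
    """Attention weight applied by `depth` levels of () emphasis."""
    return base ** depth

# Show the equivalence between nesting and the ":" notation.
for d in range(1, 4):
    phrase = "(" * d + "some phrase" + ")" * d
    print(f"{phrase} = (some phrase:{nested_weight(d):.4g})")
# (some phrase) = (some phrase:1.1)
# ((some phrase)) = (some phrase:1.21)
# (((some phrase))) = (some phrase:1.331)
```

This is also why four or more levels of nesting quickly overshoots the ~1.3 practical ceiling mentioned above: 1.1^3 is already 1.331.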
(1.06 MB 1100x1400 Xmas Shantae by the Window.jpg)

>>414 >>7 Some hot cocoa with two big soft marshmallows on a cold December.
>>455 I request more Shantea.
>>456 >Shantea There are a dozen images of Shantae ITT already btw: >>191 >>192 >>195 >>202
>>457 Are you saying more shantae is a bad thing?
>>458 Yes, Shantae a shit
(928.58 KB 768x1152 Shantae washing dog.png)

(873.22 KB 768x1152 Shantae sweaty.png)

(887.62 KB 768x1152 Shantae sweaty 2.png)

(1.43 MB 768x1152 Shantae unbuttoned.png)

(1.59 MB 768x1152 Shantae lingerie.png)

>>458 Naw, just pointing 'em out in case you missed 'em. Here are some reposts also, in case you missed those.
>>459 You're shit. >>460 I like that first one & last one.
(261.44 KB 512x512 pjNQRPQ.png)

>>454 >CPU and GPU should give very similar results for the same inputs. The Asuka troubleshoot outright states the GPU and CPU get completely different results. Either way, I decided to just buy a new card for Christmas (I was already considering getting one even before my misadventure with SD) and downgrade to Win 10. If someone has a reliable Win 10 debloat guide, I'd appreciate it.
(1.00 MB 320x336 1404225964500.gif)

(15.76 KB 200x282 super salty fishy pussy.jpg)

>>460 >Sweaty fit shantae >UNLIMITED SWEATY FIT SHANTAE
>>414 >>455 >>7 Another.
>>466 I'd like to open her present.
>>751737 Ew, those faces are a little 3DPD.
(440.78 KB 512x704 Raven santa.png)

(548.62 KB 512x704 Owl santa.png)

(455.25 KB 512x704 Santa jay.png)

(2.38 MB 2048x704 Latest promt result.png)

>>414 >>7 Christmas-themed titty birds, also some more roughly done sorceresses, just because I can't stop doing them.
(91.87 KB 1200x675 249822.jpg)

>>460 My man of fantastic taste and craft
>>469 This AI art will tank the furry art economy
>>469 Nice bird tits. >>471 Unlikely, people are fucking retarded.
>>471 You've got to consider furfag culture. A lot of them are the types that would hate the way AI """stole""" the art it trained on. And remember that at least half the commission market is based on social status; it's at least as much about who draws the fursona as about the fursona getting drawn. A shitty artist who's famous from "community" presence will generally be preferred over a technically far superior no-name (or AI). Most furry sites have already banned it. Aside from the AI-specific booru, the one place I know of that does allow AI art (Inkbunny) requires you to post the full prompt, including every step of inpainting, and requires the use of open models (i.e. NAI-furry is banned, since it's not only proprietary but hasn't even been leaked). It also requires that the only artists used in prompts be those who have been dead more than 70 years, such that their work is public domain. That mostly cripples the public models like Yiffy, since they usually need artist tags to guide anatomical styles in the right direction.
>>471 It might make a dent in the artistically specific porn scenario niche, that and it might convince the wonder bread guy to stop maxing out credit cards.
(32.92 KB 600x594 12d-1575316755.jpg)

>>473 >(Inkbunny) requires you to post the full prompt, including every step of inpainting, and requires the use of open models (i.e. NAI-furry is banned, What kind of autism hell is that? Do these people consider themselves the art aristocrats who need to preserve the sanctity of furry porn, purge the incorrect/infidel ways, so that the furry "art" "community" maintains its status quo?
(517.23 KB 512x704 Tiddies.png)

>>475 Yes.
>>475 you answered your own question m8
>>475 Yes.
>>751781 (You) >>478 At least this is understandable; fags would just post normal hentai with one furry in a corner. Or they could just remove them. >>474 I think with some generic posing ones you'll see a dip in commissioning, then a rise in incredibly specific poses and details. >>466 Can AI be used in imposing hats for the pinned santa thread?
(66.40 KB 640x635 Fursecution.jpg)

>>478 I can't believe what I'm reading, what fucking site is this?
>>480 >what fucking site is this? Inkcunny.
>>479 >Can AI be used in imposing hats for the pinned santa thread? Sure. You can photoshop a hat on, then use inpainting with low-to-moderate denoising strength to blend it with the rest of the image, OR you can use inpainting with high denoising strength to try to get it to make a hat from nothing, but that's more challenging.
(159.16 KB 300x305 HUUH OVERDRIVE.png)

>>476 >>473 >>469 BIG FAT FLUFFY TITS
>>469 It's sad that these character designs are better than the slew of horrible "canid with pastel colored fur" type trash you typically see.
(1.32 MB 594x5608 ChatGPT AI is Communist.png)

>>227 >>380 >>381 Reminder that ChatGPT is Communist. Not openly Communist. The AI tirelessly claims to be neutral while espousing Trotskyite ideology, so it's more of the (((subversive))) type which attempts to mask itself, pretending its worldview is rooted in objective reality rather than in the subjective delusions fed to it by its commie developers.
>>484 I've noticed that AI does certain things better than the vast majority of artists. It does sweaty fit and muscle girls better than all but the best in the industry. You need to look to Hozuki Akira-tier artists to find better anatomy (something I was told AI was shit at) and a better understanding of how light reflects off sweaty skin.
(30.18 KB 329x313 xj9 stare.jpg)

>>485 >The 70 questions 8values quiz <I made it take a quiz that's made by leftists and heavily biased towards leftism, and it came out super leftist. Imagine my shock.
>>486 Now that you mention it, I do notice there tends to be a lack of those ugly boils/sunspots that dumbfuck anime artists always put on characters because they want them to be shiny for some fucking reason. It's strange. Why isn't the AI replicating this common practice?
>>487 Yeah, many of the questions in that thing are extremely ambiguous. You have to try to guess what orientation the author had in order to answer them correctly, which is the exact opposite of how a working "values test" operates.
>>485 That entire thing is bullshit. It basically asks "do you like kicking puppies? If no, then you're a leftist." in order to paint everyone with the broad brush of "see, we're all leftists, we like people and puppies."
>>490 That makes sense. Personally I fucking hate puppies, and people. And coincidentally these tests always say I'm an extremist nazi.
(1.54 MB 1024x1536 00001-317503063.0.png)

>>486 I probably ought to have removed some of the less sensible sweat drips, like by her right arm, before upscaling. But I don't think I kept the original and it's not worth reconstructing just for that. >>492 >rolling Those are heavily inpainted, not just rolled one-off. You could roll until your GPU died and you wouldn't get images that look like that.
>>487 >>489 >>490 >>491 Look at the responses instead of attacking the test. The ChatGPT AI pushes open borders, gun restrictions, hatred of tradition/religion, one-world government, fag marriage, DIEversity, endless migration, etc. A bottomless spring of destructive bullshit, and you're going to pretend these are not the responses it gave? It could easily have pushed back against any of these things, but instead expressed support for them in lengthy paragraphs. Many AIs are not like this unless they are lobotomized from discerning patterns in group behavior, but this one appears to have been TRAINED to spew the same tripe Reddit NPCs are programmed with.
(15.90 KB 474x453 02285733968514522.jpeg)

>>494 so wat usayin is some dude have 2 like actually program an program? man wat is this like some super ai or smthing sheeeeit ai be crazy
>>476 I'm curious what your prompt to this was.
>>473 SoFurry has been quiet on AI art so far, even with some of the popular front-page uploads being AI-made.
Unrelated, but do any of you anons have the weights of a ChatGPT-like text AI that isn't censored? And speaking of, I recall there being some sort of AI scraper that could take code from open-source projects to help you fix your own; do you guys remember what it is?
(326.74 KB 512x512 asuka blyat.png)

been trying to figure out why my computer's fucking up the asuka test, i looked at the general troubleshooting and the closest thing i could see was that it said i had "v2" installed and running, but i don't
>>499 I have v2 installed and I still get the correct result. Uncheck 'filter NSFW' in the settings, make sure clip skip is set to 2, and generate it using 'Euler'. Also check that the weights are working: make sure the .vae.pt has the same file name as the model, and restart your Stable Diffusion after all the changes have been made.
>>500 yep, did all that, still got the same result. the only thing i can think of is that maybe the weights got corrupted mid-download or something, who knows
>>499 pickled
>>502 that can't be right either, i ran all this fuckin' shit through a pickle scanner before even touching it
New Automatic1111 simplified installer: https://github.com/EmpireMediaScience/A1111-Web-UI-Installer Windows 10/11 only. If it works it should replace the 1-click installer included in the OP, since Automatic1111 is the superior UI and people keep having trouble with that one anyway.
>>504 >>237 >>301 >>302 I'd hoped the video guide wouldn't be made obsolete so soon. The only issue with the Simplified Installer is that you only have one option for the AI model. It would be nice if you could choose from a list of models, since far more models exist than just sd-v1.5, and depending on what you're making I wouldn't even call that the best one.
>tfw accidentally make the best damn furry model out there >have no fucking clue what i did >not even a furfag So how do I get into furfag commissions? If I'm ahead of the curve, may as well profit.
(9.76 KB 200x303 doubt.jpg)

(8.38 KB 170x250 doubt.jpg)

(23.26 KB 133x234 seriously.jpg)

(37.04 KB 344x505 really now.jpg)

(259.04 KB 183x183 al_nope.gif)

>>506 >put time and effort into curating a refined furry model >claim not to be a furfag
(1.07 MB 1280x720 That means you're gay.webm)

>>506 >>not even a furfag
Guess I'll just delete it. Still have no clue how I got it from random merges, but w/e
>>509 >Guess i'll just delete it. Why?
(548.00 KB 576x768 tmpf8tq82p3.png)

>>510 Random result from me experimenting with merging 3DPD models to fix pose/anatomy issues in pure 2D models, as well as attempting to find a solution to the overfitting that causes uncanny results from such 3DPD/2D mixtures. I was merging the 3D/2D mixtures with other 3D/2D mixtures to sort of carry data from one model to the next without ruining image integrity. Since it was unintended, there was no reason for me to keep it. The only reason I would've kept it is if I could've profited from its results. Pic somewhat related (not the furfag model); made it using an anyv3+elysm+f111+cafe mixture and a custom TI for better bust weighting. You probably don't need to mix so much to get good anime results, but I wanted to see different permutations using similar seeds anyway. I found out it did furshit because when I weighted "ram horns" high enough on the same settings as picrel, it generated furshit.
>>511 If you're not planning on using it, why not release it into the wild and accelerate the demise of the furry commission industry?
Heads up! The AUTOMATIC1111 dev is getting egged on and has apparently been temporarily banned from GitHub because of the usual no-edgy-shit-allowed policies. Back up what you want before he gets taken down again.
Also, in a similar way to the drawthreads, could it be possible to port this thread over to >>>/ais/ ?
>>514 Just to be clear, are you requesting that threads be moved to /ais/ once they're bumplocked to keep them as an archive, or are you requesting that they be mirrored? The first is possible and the second is, at the moment, not.
>>515 The first option, just like what they're doing for drawthreads and GG threads
No new AI art posted in 2.5 weeks. No new AI posted this year. The AI meme is dead.
So when is someone going to make the next thread?
Sorry to interrupt, but I'd like the BO of >>>/ais/ to respond to this post using his capcode: >>>/ais/1.
Alright, /ais/'s BO responded to the admin post stating that it's OK to have AI threads moved to that board. I'll move this thread as soon as somebody makes a new one.
>>520 New bread: >>765010

