/hdg/ - Stable Diffusion

Anime girls generated with AI


It's a little shy of the other two when it comes to nailing lewd concepts. Based65 adds a mole on her tiddies, kinda lewd, meanwhile based66 adds an extra blue tint and based64 just makes zurisussy, which is sexo. I feel based66 is reaching for the anything domain instead, which isn't really all that bad.
>>16259 Yeah, realized Counterfeit is a really dicey model when it comes to mixing, and all versions of DefMix were using it too. 3.0 is going to make a lot of model mixes uploaded to normie sites annoying to use with LoRAs if the mixers don't know how to do the right block merges to avoid the LoRA troubles it has.
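The "right block merges" mentioned above usually mean giving each UNet block its own interpolation weight when merging checkpoints, instead of one global ratio. A minimal sketch of that idea in plain Python, using toy state dicts; the key prefixes and weights here are illustrative, not the exact naming any particular merge UI uses:

```python
# Block-weighted merge sketch: pick the interpolation ratio per tensor
# based on which UNet block its key belongs to. Prefixes are hypothetical.
BLOCK_WEIGHTS = {
    "input_blocks": 0.3,   # stay closer to model A in the encoder
    "middle_block": 0.5,
    "output_blocks": 0.7,  # lean toward model B in the decoder
}

def block_weight(key, default=0.5):
    """Return the merge ratio for a tensor, chosen by its block prefix."""
    for prefix, w in BLOCK_WEIGHTS.items():
        if key.startswith(prefix):
            return w
    return default

def merge_state_dicts(sd_a, sd_b):
    """Per-key linear interpolation: (1 - w) * A + w * B."""
    merged = {}
    for key in sd_a:
        w = block_weight(key)
        merged[key] = (1.0 - w) * sd_a[key] + w * sd_b[key]
    return merged
```

With real checkpoints the values would be tensors rather than floats, but the per-block weighting logic is the same.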
(2.94 MB 1280x1920 catbox_e8k9z3.png)

(3.09 MB 1280x1920 catbox_zgrzti.png)

(2.68 MB 1280x1920 catbox_r6vvpn.png)

(2.81 MB 1280x1920 catbox_h2mbpj.png)

Finally got around to releasing the Arona + Plana model I was working on for a while now (tagging was hell because I put them both in the same model, and opted to do a "combo" folder), but it seems to work pretty well. It has its quirks, but it's flexible for the most part and can do both characters perfectly most of the time. I included a readme in the mega drive, on civitai, and in the model metadata itself. I know some people were asking me to make Plana back when I made Shun + Shunny, but I only really had time every so often to work on it between the UI stuff and other projects. Hopefully they are still around. Anyways, links for people to download: https://mega.nz/folder/CEwWDADZ#qzQPU8zj7Bf_j3sp_UeiqQ/folder/qUpFmABI civit: https://civitai.com/models/60885/arona-and-plana-or-blue-archive-or-character-locon Hope you enjoy using it, I know I enjoyed making it (despite the tagging nightmare).

Anime Screenshot Pipeline for Building Datasets Anonymous 01/29/2023 (Sun) 00:44:25 No. 1209 [Reply]
This was mentioned a couple of times in the main general, and while it's pretty messy, some of the pieces do work on their own; there's just nothing yet that makes it a straight automated pipeline, or even something hassle-free and turnkey to run. I figured that since everyone here is more or less determined and committed to the craft, maybe we could get some of the best minds, and a little push from ChatGPT, to get this working and help streamline the process of turning anime episodes into datasets. https://github.com/cyber-meow/anime_screenshot_pipeline

Let me share some notes and observations from what I've done so far.

Frame extraction: as stated in the GitHub readme, you are taking a 24-minute episode of about 34k frames and condensing it to an average of 4k/6k/9k non-frozen/dead frames, depending on the show, episode, studio, or era of the source. The work is done by ffmpeg's `mpdecimate` filter, whose purpose is to "drop frames that do not differ greatly from the previous frame in order to reduce frame rate." The ffmpeg command provided in the repo works fine; the issue is that the author's bulk script, `extract_frames.py`, doesn't play nice and only produces the folders while the ffmpeg step fails to execute. I did consider that the video files' naming could be the culprit, based on some previous errors I ran into, but it isn't an issue when running ffmpeg directly, so I sidestepped the bulk script. Since I had already compiled my current datasets by running the command manually, I haven't needed to go back and retry the script with modifications. ChatGPT did offer some suggestions, but it needed a copy of the error output to review, which I no longer had and didn't have time to reproduce.
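Since the bulk script was flaky, the manual per-episode step can be sketched as a small Python wrapper around ffmpeg. The paths and exact filter options here are placeholders (check the repo's README for the flags it actually uses); `-vsync vfr` keeps ffmpeg from re-duplicating the frames mpdecimate drops:

```python
# Sketch of the frame-extraction step: ffmpeg's mpdecimate filter drops
# near-duplicate frames before they ever hit disk. Paths are placeholders.
import subprocess

def mpdecimate_cmd(video_path, out_dir):
    """Build the ffmpeg argv for duplicate-dropping frame extraction."""
    return [
        "ffmpeg", "-i", video_path,
        "-filter:v", "mpdecimate",
        "-vsync", "vfr",
        f"{out_dir}/frame_%06d.png",
    ]

def extract_frames(video_path, out_dir):
    """Run the extraction; raises if ffmpeg exits non-zero."""
    subprocess.run(mpdecimate_cmd(video_path, out_dir), check=True)
```

Looping this over a folder of episodes effectively replaces the broken `extract_frames.py` step.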
Similar-image removal: the application running the filter is `FiftyOne`, a computer-vision dataset toolset, with one recent use being the building of clean visual databases for vehicle-autopilot AI. Running `remove_similar.ipynb` in Jupyter Notebook gives a second round of filtering that removes duplicate or very similar frames above a certain threshold across the entire dataset, instead of just the sequential frames mpdecimate handles. This covers cases where the animation is stretched out: talking scenes where only the mouth moves, standing shots where the camera isn't panning, etc. The script's default threshold for what counts as a duplicate is `0.985`, but I've noticed that even at this value some frames that shouldn't have been considered duplicates were purged; that's what manual review is for if you need higher accuracy in a dataset. The main issue I ran into was that with my dataset (could be a personal issue), the duplicate-detection step in the Notebook was painfully slow, reading at 1 sample/s. That's an hour and a half to sort through a 24-minute episode's worth of already-filtered frames. Through some trial and error and ChatGPT Q&A, I found that switching the model used in the script gave much faster results. If you want to test your luck, swap in the following in Cell 2: `model = foz.load_zoo_model("mobilenet-v2-imagenet-torch")`
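Stripped of FiftyOne, the core of the similarity filter is simple: embed every frame (the notebook uses a zoo model such as mobilenet-v2-imagenet-torch for this), then drop any frame whose cosine similarity to an already-kept frame exceeds the threshold. A self-contained sketch of that logic, with plain lists standing in for model embeddings:

```python
# Greedy near-duplicate filter over frame embeddings. In the real pipeline
# the embeddings come from a vision model; here they are plain vectors.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_duplicates(embeddings, threshold=0.985):
    """Keep a frame only if it is not too similar to any already-kept frame.

    Returns the indices of kept frames.
    """
    kept = []
    for i, emb in enumerate(embeddings):
        if all(cosine(emb, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept
```

This is O(n²) in the worst case, which is one reason the embedding model's speed dominates the runtime the post complains about.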


5 posts omitted.
>>10869 Alright, got it, thanks. I thought that Python script was working.
>>10874 That's the only one to my knowledge that doesn't work as intended that didn't have a fix addressed in the OP or the second post.
>>1209 By the way, catbox plz?

/hdg/ #12 Anonymous 04/04/2023 (Tue) 17:34:49 No. 13776 [Reply] [Last]
Previous thread >>12556
1197 posts and 696 images omitted.
>>14985 >[medium breasts:large breasts] Correct syntax for what you're trying to do here would use | instead of :
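For reference, assuming the AUTOMATIC1111 webui's prompt syntax (which the reply above appears to be using), the two constructs do different things:

```
[medium breasts|large breasts]        alternates between the two terms every step
[medium breasts:large breasts:0.5]    prompt editing: switches terms halfway through sampling
```

So `|` blends the concepts across the whole generation, while `:` with a number performs a hard switch at that fraction of the steps.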
>>13851 there's a good lora for it here, number 182 https://seesaawiki.jp/nai_ch/d/Lora%b3%d8%bd%ac%c0%ae%b2%cc#content_1_7_121 works best using the readme's tags and combined with a good artist lora for it, like nora higuma v2
>>14997 holy shit thanks, looking through all of these now.

/hdg/ #11 Anonymous 03/24/2023 (Fri) 14:07:09 No. 12556 [Reply] [Last]
Previous thread >>11276
1197 posts and 741 images omitted.
>>13772 I wonder what KohakuBlueleaf's method was even doing differently compared to the traditional block method, then; what did their convdim/convalpha influence, compared with the method Kohya used?
>>13773 I honestly have no clue, I didn't exactly look too far into it.
>>13649 No good free option yet, and the paid ones watermark their files to possibly hunt you down later.
