Catalog of /hdg/





R: 0 / I: 0 / P: 1

Rules

Make sure to familiarize yourself with the global rules! Board rules can be decided and established if people want them. Also check the rulings on edge cases, specifically these two: https://8chan.moe/site/res/4481.html#q4482 https://8chan.moe/site/res/4481.html#q4483

Loli is fine as long as you stick to the rules of the site and keep it strictly anime-only, since the site has zero tolerance for anything realistic. It doesn't have to be only loli; other anime girls are fine too. Keep the content of your posts about Stable Diffusion, characters generated with it, or discussion related to SD. So please stay away from realistic tags, or any tags or models that might add realism to your pictures. Any picture that is too realistic, is based on real living people, or is a picture of someone is subject to immediate deletion, and the user posting it can be banned on the spot.

Remember, furry and western art styles are not allowed here. "Western" refers to cartoony art styles, realistic art styles, or art styles used or often seen in the western part of the world, regardless of where the artist or franchise is originally from. An artist can be from Japan and draw in a western or cartoony art style, and an artist can be from the United States and draw in an eastern or anime art style, so please keep in mind this board was made to be focused on anime-related things.

Technology and code-related discussions are okay and allowed. These discussions further our understanding of Stable Diffusion and help others improve this technology or make better pictures, so feel free to discuss this too.

R: 105 / I: 117 / P: 1

/hdg/ #24 vpred edition

R: 20 / I: 22 / P: 1

funny gens

let's give it up for the computer, folks

R: 295 / I: 351 / P: 1

h/d/g thread

Some anon mentioned starting one thread for soft /d/ stuff but since he didn't do it guess i'll do it myself.

R: 1145 / I: 1282 / P: 1

/hdg/ #23 We're so back edition

R: 1101 / I: 1027 / P: 1

/hdg/ #22 Wife Edition

R: 1161 / I: 1066 / P: 1

/hdg/ #21 Cute Maid Feet Edition

R: 1022 / I: 1191 / P: 1

/hdg/ #20 Wedding Night Edition

R: 1098 / I: 938 / P: 1

/hdg/ #19

R: 1095 / I: 783 / P: 1

/hdg/ #18

R: 1478 / I: 667 / P: 1

/hdg/ #17

Previous thread >>18739

R: 1200 / I: 411 / P: 1

/hdg/ #16

Previous thread >>17518

R: 300 / I: 27 / P: 1

/hdg/ pixiv\twitter share thread

Share your pixiv or twitter here if you're an /hdg/ anon who made a pixiv/twitter to upload their generated work. Share techniques or give feedback on other anons' works if you want to help them out.
>Support our based /hdg/ brothers, cunny or no cunny

R: 1200 / I: 941 / P: 1

/hdg/ #15

Previous thread >>16264

R: 1200 / I: 580 / P: 1

/hdg/ #14

Previous thread >>14999

R: 21 / I: 9 / P: 1

Is anyone using the latest master? What do you guys think? On the fence about updating

R: 1199 / I: 806 / P: 2

/hdg/ #13

Previous thread >>13776

R: 8 / I: 0 / P: 2

Anime Screenshot Pipeline for Building Datasets

This was mentioned a couple of times in the main general, and while it's pretty messy, some of the pieces do work on their own; there's just nothing yet that ties them into a straight automated pipeline, or even something hassle-free and turn-key to run. I figured that since everyone here is more or less committed to the craft, maybe we could get some of the best minds, plus a little push from ChatGPT, to get this working and streamline the process of turning anime episodes into datasets. https://github.com/cyber-meow/anime_screenshot_pipeline

Here are my notes and observations from what I've done so far.

Frame extraction: as stated in the GitHub, you take a 24-minute episode of about 34k frames and condense it to an average of 4k/6k/9k non-frozen/dead frames, depending on the show, episode, studio, or era of the source. The work is done by ffmpeg's `mpdecimate` filter, whose purpose is to "drop frames that do not differ greatly from the previous frame in order to reduce frame rate." The ffmpeg command provided in the GitHub works fine; the issue is the repo author's bulk script, `extract_frames.py`, which doesn't play nice: it only produces the folders while the ffmpeg call fails to execute. I considered that video filename syntax could be the culprit, based on some previous errors I ran into, but it isn't an issue when running ffmpeg directly, so I side-stepped the bulk script. Since I'd already compiled my current datasets by running the command manually, I haven't needed to go back and retry the script with modifications. ChatGPT did offer some suggestions, but required a copy of the error output to review, which I no longer had and didn't have time to reproduce.
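Since the bulk script is the broken part, one workaround is a small wrapper that loops the working ffmpeg command over a folder yourself. A minimal sketch (the exact flags in the repo's script may differ; this is the standard `mpdecimate` invocation, and the paths are illustrative):

```python
import subprocess
from pathlib import Path

def build_extract_cmd(video: Path, out_dir: Path) -> list[str]:
    """Build the ffmpeg call: mpdecimate drops frames that barely differ
    from the previous one; -vsync vfr keeps output in step with the drops."""
    out_dir.mkdir(parents=True, exist_ok=True)
    return [
        "ffmpeg", "-i", str(video),
        "-filter:v", "mpdecimate",
        "-vsync", "vfr",
        str(out_dir / "frame_%05d.png"),
    ]

def extract_all(video_dir: Path, dest_root: Path) -> None:
    # Stand-in for extract_frames.py: one output folder per episode
    for video in sorted(video_dir.glob("*.mkv")):
        subprocess.run(build_extract_cmd(video, dest_root / video.stem),
                       check=True)
```

Passing the command as a list avoids the shell-quoting problems that odd video filenames can cause, which may be exactly what tripped up the original script.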
Similar image removal: the base application running the filter is `FiftyOne`, a computer-vision dataset tool, one recent use of which is building clean visual datasets for vehicle-autopilot AI. Running `remove_similar.ipynb` in Jupyter Notebook gives a second round of filtering that removes duplicate or very similar frames above a certain threshold across the entire dataset, instead of just the sequential frames mpdecimate handles. This covers cases where the animation is stretched out: talking scenes where only the mouth moves, standing shots where the camera isn't panning, and so on. The script's default threshold for what counts as a duplicate is `0.985`; even at this value I noticed some frames being flagged and purged that shouldn't have been, but that's what manual review is for if you need higher accuracy in a dataset. The main issue I ran into (could be a personal one) was that the duplicate-detection step was painfully slow with my dataset, reading at 1 sample/s in the Notebook. That's an hour and a half to sort through one 24-minute episode's worth of already-filtered frames. Through trial and error and some ChatGPT Q&A, I found that switching the model used in the script gave much faster results. If you want to test your luck, replace the following in Cell 2: `model = foz.load_zoo_model("mobilenet-v2-imagenet-torch")` with `model = foz.load_zoo_model("alexnet-imagenet-torch")`. I was getting 4.9–5.1 samples/s after the adjustment, or roughly 15 minutes per episode: about a 5x speedup. Other models can be found at https://docs.voxel51.com/user_guide/model_zoo/models.html; the recommendation was to stick to "imagenet" models, but feel free to explore. The GitHub recommends two other alternatives for this task (linked below), but I have not checked them out myself.
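To illustrate what the notebook's filter is conceptually doing (this is not the notebook's actual code, just a sketch of the idea): embed each frame with the zoo model, then greedily drop any frame whose cosine similarity to an already-kept frame exceeds the 0.985 threshold. On plain embedding vectors that looks like:

```python
import numpy as np

def dedup_by_cosine(embeddings: np.ndarray, threshold: float = 0.985) -> list[int]:
    """Greedy duplicate removal: keep a frame only if its cosine similarity
    to every previously kept frame stays below `threshold`."""
    # Normalise rows so plain dot products become cosine similarities
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    kept: list[int] = []
    for i in range(len(unit)):
        if all(unit[i] @ unit[j] < threshold for j in kept):
            kept.append(i)
    return kept
```

The pairwise comparison is O(n²) in the number of frames, which is also why the choice of embedding model dominates the runtime: a cheaper model like AlexNet produces the embeddings far faster than MobileNet here, at some cost in how "similar" is judged.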
https://github.com/ryanfwy/image-similarity https://github.com/ChsHub/SSIM-PIL

Face detection: I haven't proceeded further than this because, until just recently, I had a bit of an issue installing the face detector. The GitHub links additional documentation on setting up face detection, as well as other commands by kohya_ss (the same kohya_ss behind the SD scripts), which would just need to be run through DeepL for us English-onlys. https://github.com/hysts/anime-face-detector https://note.com/kohya_ss/n/nad3bce9a3622 The face-detection step also includes regularization instructions, including rotating the face images into the proper orientation for training. Tagging is done with wd-1-4-vit. Face detection can also be trained on the subjects, which I assume is for an automated filtering process and for later DreamBooth weight calculations.

From there, the rest is a bit of a blur. I'm admittedly not sharp enough to go through this on my own, so I'm kind of asking for help, but I felt it would be rude not to provide some sort of primer with fixes before doing so. Hopefully this helps everyone trying to build up LoRA or even model datasets.
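On the rotate-to-upright regularization step, the geometry at least is simple to sketch: given the two eye keypoints a detector returns (hypothetical input here; I haven't confirmed anime-face-detector's exact output format), the roll angle is the arctangent of the eye line, and the crop plus its keypoints get rotated by that angle to level it. Pure-stdlib sketch, image y-axis pointing down:

```python
import math

def eye_roll_degrees(left_eye, right_eye):
    """Angle (degrees) of the eye line vs. horizontal; positive means the
    right eye sits lower, i.e. the face is rolled clockwise on screen."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(p, center, degrees):
    """Rotate keypoint p about center by `degrees` counter-clockwise on
    screen, so landmarks stay aligned with the levelled crop."""
    rad = math.radians(-degrees)  # y-down image coords flip the sign
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(rad) - y * math.sin(rad),
            center[1] + x * math.sin(rad) + y * math.cos(rad))
```

The crop itself would then be rotated by the same angle, e.g. with Pillow's `Image.rotate(angle, expand=True)`, which also rotates counter-clockwise on screen and so matches this convention.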

R: 1200 / I: 696 / P: 2

/hdg/ #12

Previous thread >>12556

R: 1200 / I: 741 / P: 2

/hdg/ #11

Previous thread >>11276