/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.


(4.66 MB 4000x2715 shutterstock_89245327.jpg)

Parsing scripts Anonymous 11/14/2016 (Mon) 18:14:13 Id: f047d8 No. 4475
How about a thread for discussing/creating/sharing parsing scripts? I made one for md5 lookup on e621.net (actually I just modified Hydrus_dev's danbooru script). Let me know if I did anything wrong with it; I'm pretty clueless… but it seems to work fine.
[32, "e621 md5", 1, ["http://e621.net/post/show", 0, 1, 1, "md5", {}, [[30, 1, ["we got sent back to main gallery page -- title test", 8, [27, 1, [[["head", {}, 0], ["title", {}, 0]], null]], [true, true, "Image List"]]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-general"}, null], ["a", {}, 1]], null]], ""]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-copyright"}, null], ["a", {}, 1]], null]], "series"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-artist"}, null], ["a", {}, 1]], null]], "creator"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-character"}, null], ["a", {}, 1]], null]], "character"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-species"}, null], ["a", {}, 1]], null]], "species"]], [30, 1, ["we got sent back to main gallery page -- page links exist", 8, [27, 1, [[["div", {}, null]], "class"]], [true, true, "pagination"]]]]]]
I installed and tried using iqdb_tagger but it complains that the 'hydra-python-core' distribution was not found and is required by hydrus. What gives?
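In case it helps anyone with the same error, here is a quick check of whether the environment iqdb_tagger actually runs in has that distribution — a sketch assuming Python 3.8+:

    # PackageNotFoundError here means the interpreter iqdb_tagger uses simply
    # doesn't have the distribution the error names.
    from importlib.metadata import version, PackageNotFoundError

    for dist in ("hydra-python-core", "iqdb_tagger"):
        try:
            print(dist, version(dist))
        except PackageNotFoundError:
            print(dist, "not found; try: python -m pip install " + dist)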
(60.41 KB 1151x616 concern.jpg)

Has Pixiv parsing stopped working for anyone else recently?
>>13042 What do you mean? There was no parser for pixiv. If you mean those extensions that let you direct-load the images, those have been broken for over a year, since pixiv keeps editing its site.
(22.17 KB 889x277 W2bsGlw.png)

>>13044 I might have found a custom set from CuddleBear92's GitHub repo (I sure as fuck didn't write them), but I had been reliably importing pixiv urls just days ago and now they error out; they can't find anything. I haven't looked into it too hard yet, but I was wondering if I'm alone.
>>13045 I think it's just you (I'm using the default Hydrus pixiv parser). I made my 32 artist subs check just now and they went through with no errors, though they had already checked recently, so there weren't any files to snag. No idea why it would work yesterday but not today, unless that parser was made before they revised their site and they just happened to leave the old code running as a fallback till now.
The built-in script for using iqdb to look up tags from danbooru works for me. There are many more like it on cuddlebear92's website, but they are two years old and don't seem to work at all anymore. I just want something that works the same as the built-in function for other sites like sancom, gelbooru, etc., but it seems I'm left high and dry. I don't understand why they stopped working, either: I went through the logic of the iqdb gelbooru script, for instance, and compared it with the HTML the website actually sends back, and the logic still seems sound.
>>13042 Not just you; the same happened to me on both the default hydrus parsers and the custom pixiv all-in-one set. Everything gets ignored and has been for a couple of days. All the pages come in as if I'm not logged in, for whatever reason.
>>13052 Hmm, I've found that the gelbooru one actually works off and on, though sometimes it oddly just returns a list of 4 crosses instead of a list of actual tags. Now then, what I'd really like to do is automate running file lookup scripts on more than one file and automatically apply all the tags to each file. There doesn't seem to be a way to do this through the interface when more than one file is selected, but there has to be a way, right?
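As far as I know there's no batch option in the file lookup dialog itself, but the Client API can do the plumbing around whatever lookup you use: search for files, pull each one down, and write the tags back. A rough sketch with requests — the access key, the search tag, the tag service name, and the lookup function are all placeholders:

    import json
    import requests

    API = "http://127.0.0.1:45869"  # default Client API address
    HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_KEY"}  # needs search + add tags permissions

    def lookup_tags(image_bytes):
        # Placeholder: send the bytes to iqdb/saucenao/whatever and parse tags out.
        return []

    # 1. Find the files to process ("my search tag" is a placeholder).
    r = requests.get(API + "/get_files/search_files",
                     params={"tags": json.dumps(["my search tag"])}, headers=HEADERS)
    file_ids = r.json()["file_ids"]

    # 2. Resolve file ids to hashes, since add_tags wants hashes.
    r = requests.get(API + "/get_files/file_metadata",
                     params={"file_ids": json.dumps(file_ids)}, headers=HEADERS)

    # 3. Pull each file, run the lookup, and push the tags back.
    for meta in r.json()["metadata"]:
        image = requests.get(API + "/get_files/file",
                             params={"hash": meta["hash"]}, headers=HEADERS)
        tags = lookup_tags(image.content)
        requests.post(API + "/add_tags/add_tags", headers=HEADERS,
                      json={"hash": meta["hash"],
                            "service_names_to_tags": {"my tags": tags}})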
Hit the same pixiv issue just now. The login itself doesn't seem to be the problem: I reset and redid the login within Hydrus, but that changed nothing.
Pixiv changed their API, so the parser had to be redone. You can replace the old one with this one, or wait until Wednesday, as it should be in the next release. Also, pixiv added a captcha to login, so you have to import cookies manually now; the login in hydrus won't work.
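If you'd rather script the cookie import than click through the UI, recent clients expose it on the Client API. A sketch, with the access key and the cookie value/expiry as placeholders:

    import requests

    API = "http://127.0.0.1:45869"
    HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_KEY"}  # needs the manage cookies permission

    # Each cookie is [name, value, domain, path, expires-as-unix-time]; grab
    # PHPSESSID from a logged-in browser session (the value here is fake).
    cookies = [["PHPSESSID", "12345678_deadbeefdeadbeef", ".pixiv.net", "/", 1700000000]]

    requests.post(API + "/manage_cookies/set_cookies", headers=HEADERS,
                  json={"cookies": cookies})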
(3.58 KB 512x111 newsankakuparser.png)

The sankaku parser someone posted on this board that was supposed to remove the 2000-file limit didn't work properly for me, I think due to the naive way it fetched the next gallery page's data, so a while ago I made a fix that works on my machine (TM). Please let me know if it works on yours, too.
>>13138 Working a treat right now. I understand a bit of HTML, but these parsers make no sense to me. Maybe I'll sit down and spend some time figuring out how to do this myself sometime.
(3.63 KB 512x113 8kun downloader.png)

I'm not sure if this has been fixed yet, but I modified the default 8ch parsers to allow hydrus to download 8kun threads with filenames.
The JSON API for boorus like gelbooru returns all the tags, as well as the path to the files, hash, source, updated time, etc. Example: https://gelbooru.com/index.php?page=dapi&json=1&s=post&q=index&limit=50&tags=cat%20rating:safe&pid=2 (the tags are HTML-escaped, but I don't know about the other entries). So why do the gallery downloaders scrape HTML for each post's page instead of using the information already obtained from the search request? As it stands, if I do a search for a set of tags, the downloader has to download the HTML for every single post's page just to check for duplicates and tags. That's a lot of wasted resources and effort for both client and server. If I already have all 50 files that turn up in the linked search, one request instead of 51 would verify that; and if I had to download all the images, it would be 51 requests instead of 101, with the bonus that no HTML scraping had to be performed.
I noticed gelbooru's JSON API returns tags as a single string with each tag delimited by spaces. Is there a way to split a JSON string match into multiple entries?
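Both of the above in one rough sketch — hitting the JSON endpoint from the example URL and splitting/unescaping the space-delimited tags field (field names are as they appear in that example response):

    import html
    import requests

    # The same query as the example URL above, just built from params.
    r = requests.get("https://gelbooru.com/index.php",
                     params={"page": "dapi", "json": "1", "s": "post", "q": "index",
                             "limit": "50", "tags": "cat rating:safe", "pid": "2"})

    for post in r.json():
        # "tags" comes back as one space-delimited, HTML-escaped string; split it
        # into individual tags and unescape entities like &amp; in one pass.
        tags = [html.unescape(t) for t in post["tags"].split()]
        print(post["hash"], post["file_url"], len(tags))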
So I'm a newcomer to making downloaders. I made a bunch of URL classes, such as one for the HTML page of an album that contains many images; it redirects to an API call, which also has its own class. I made parsers for the API response, selected which API query parameter corresponds to the next page (such as offset), and even added a next-page URL in the parser. But no matter what I do, when I drag and drop an album's URL into Hydrus, it only downloads the first page's worth of images and never goes further. Is it supposed to work like that? Do I have to make something like a GUG to make the continuous downloading work?
(2.71 KB 512x112 e621_updated_parser.png)

Friendly neighborhood anon here - e621 seems to have added 'lore' and 'meta' tag types, which the default parser can't catch; this updated parser can.
I previously used a modified version of saucenao's generic script to automatically(-ish) reverse image search untagged images that show up, but now that e621 has their own reverse search, I whipped up my own python script. e621's reverse search also doesn't cap how many searches you can do per 30 seconds or per 24 hours (it does require an account, though). https://gist.github.com/corposim/b7ccb6a2c8814032ddd65db91b371dc2
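For anyone who doesn't want to read the gist, the general shape is just a multipart upload with e621's usual auth; the endpoint and form field names below are my assumptions, so check the gist for the working values:

    import requests

    # e621 wants a descriptive User-Agent, and authenticated calls use HTTP
    # basic auth with your username and API key.
    with open("untagged.jpg", "rb") as f:
        r = requests.post("https://e621.net/iqdb_queries.json",  # assumed endpoint
                          files={"file": f},                     # assumed field name
                          headers={"User-Agent": "my-tagger/1.0 (by your_username)"},
                          auth=("your_username", "your_api_key"))
    print(r.json())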
(3.17 KB 512x113 babeswp_docl.png)

wasted my time on this
This might have been asked before, but is there a downloader for NicoSeiga? If not, does anybody know other tools for that?
I'm trying to get Hydrus to download from smugloli.net. I have made URL classes that match the URL and created an API URL for the JSON, but when I try to watch a thread it instantly says "DEAD", with the log message saying there was no parser. It should work if the "4chan-style API parser" is used, but I have no clue how to make it use that.
Anyone know what the situation is with gfycat redirecting NSFW content to some sort of sister site? I guess they intend for you to browse their new site "redgifs", but following old NSFW gfycat links takes me to "gifdeliverynetwork". Anyway, in short, I got some sort of gfycat/redgifs downloader bundle from cuddlebear's hydrus scripts git repo, but I'm not really sure what to do with them, and I can't download videos straight from redgifs like I used to with gfycat. Anyone else in a similar spot?
With the number of artists attempting to migrate to pillowfort from twitter, I tried my hand at building something to parse pillowfort posts. It could probably still use some cleanup and correction, but I figured it was worth putting out there since it's worked pretty well for me so far.
(3.63 KB 512x112 realbooru.png)

Here's an updated realbooru downloader; it includes a GUG, post and gallery URL classes, and a parser. Tags work well.
Can the nijie parser download video and manga? It doesn't look like it from what I saw, but I may have missed a step. While I'm asking, how would I automatically fetch the nijie work:# ?
(3.85 KB 512x108 agnph_all_in_one.png)

Friendly neighborhood anon here. Someone once asked for an agn.ph downloader. This is an all-in-one that should work for the site.
is there a parser for the FA Onion Archive?
anything for rule34hentai?
(14.14 KB 512x133 docl_instagram.png)

^Wrong one; here is the one that works. Tagging kindaaa works, but location tags are busted (instagram).
>>14437 I think this has changed again, I'll give it a look but I am not good at it at all.
>>14710 Anything on this anon?
(8.35 KB 512x166 realbooru.png)

realbooru parser that functions at least
Sankaku is now hiding lolis. Is there some way to get around this?
I'm not sure if GUGs can handle these, but does anyone have a module for setting up YouTube subscriptions?
>>14782 They're not hiding lolis. I don't understand why I keep hearing this. Did you check the mature content option in settings and clear your account blacklist? Do you have an account in the first place?
Can someone help me understand what parsing scripts are for, and how to use them? Are they to improve the amount of tags that are found for images? Like a reverse search?

