/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.


(4.66 MB 4000x2715 shutterstock_89245327.jpg)

Parsing scripts Anonymous 11/14/2016 (Mon) 18:14:13 Id: f047d8 No. 4475
How about a thread for discussing/creating/sharing parsing scripts? I made one for md5 lookup on e621.net (actually I just modified Hydrus_dev's danbooru script). Let me know if I did anything wrong with it; I'm pretty clueless… but it seems to work fine.
[32, "e621 md5", 1, ["http://e621.net/post/show", 0, 1, 1, "md5", {}, [[30, 1, ["we got sent back to main gallery page -- title test", 8, [27, 1, [[["head", {}, 0], ["title", {}, 0]], null]], [true, true, "Image List"]]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-general"}, null], ["a", {}, 1]], null]], ""]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-copyright"}, null], ["a", {}, 1]], null]], "series"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-artist"}, null], ["a", {}, 1]], null]], "creator"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-character"}, null], ["a", {}, 1]], null]], "character"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-species"}, null], ["a", {}, 1]], null]], "species"]], [30, 1, ["we got sent back to main gallery page -- page links exist", 8, [27, 1, [[["div", {}, null]], "class"]], [true, true, "pagination"]]]]]]
Oops, looks like I spoke too soon. My e621 script only works if the file actually exists on the site; if it doesn't, the e621 API reports the failure via the HTTP status code, which makes Hydrus think the script itself failed and produces an error pop-up. I don't think you can set Hydrus to ignore error responses at the moment, so my script is useless. Anyone know how to fix this?
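(For illustration, here is roughly the distinction being asked for, sketched in Python with the requests library. The endpoint and the md5 argument name are taken from the script above; the request shape is an assumption, not hydrus's actual internals.)

    import requests

    md5_hex = "0123456789abcdef0123456789abcdef"  # placeholder hash

    resp = requests.get("http://e621.net/post/show", params={"md5": md5_hex})
    if resp.status_code == 404:
        result = None              # file not on the site: a normal "no match"
    elif resp.ok:
        result = resp.text         # parse tags out of the page as usual
    else:
        resp.raise_for_status()    # a genuine network/server error worth reporting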
Here's a script for md5 lookup on rule34.xxx
[32, "rule34.xxx md5", 1, ["http://rule34.xxx/index.php", 0, 1, 1, "md5", {"s": "list", "page": "post"}, [[30, 1, ["pagination found", 8, [27, 1, [[["body", {}, 0], ["div", {"id": "content"}, 0], ["div", {}, 0]], "id"]], [true, true, "post-list"]]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-general"}, null], ["a", {}, 0]], null]], ""]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-copyright"}, null], ["a", {}, 0]], null]], "series"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-artist"}, null], ["a", {}, 0]], null]], "creator"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-character"}, null], ["a", {}, 0]], null]], "character"]]]]]
>>4475 >>4476 I'll check this out tomorrow and see if I can sensibly catch the error code!
I've set it to catch and mention 404s without a fuss and more loudly report and record other network errors at the script or link node levels. Let me know if it doesn't work for you!
(2.50 KB 512x84 dbmd5.png)

>>4492 Thank you for this update, my e621 script works well now. However, I have a Danbooru script which is still throwing network errors when the md5 isn't found on Danbooru. I am not sure how to check which error code it is returning, or whether something else is wrong, so I'll include it here and perhaps you can check.
>>4526 Thank you for this new example. Danbooru is giving 500 (Server Error) when it fails the md5 lookup. I suspect this is an unintentional generic 'something went wrong' error on their side, as 404 seems more appropriate, but I guess the json->html forward makes it more complicated. I will create a job to extend scripts to allow interpreting certain http status errors as standard veto conditions.
>>4539 That sounds good, thank you. Some other sites also have the same error, not sure if they also return 500, so the ability to catch specific http errors as veto conditions would be nice.
(2.23 KB 512x84 e621 md5.png)

(2.73 KB 512x84 gelbooru md5.png)

(2.71 KB 512x84 iqdb danbooru.png)

(2.69 KB 512x84 iqdb gelbooru.png)

(2.53 KB 512x84 rule.xxx md5.png)

Ignore the ones in >>4475 and >>4484 . Here are some updated scripts; they all grab the rating tag as well as the normal tags.
>>4594 God bless! Any chance you can make one that works with sankaku?
>>4595 Unfortunately Sankaku doesn't work right now because, unless I'm reading their API documentation wrong, it is not possible to do a straightforward search for an md5. You'd have to do something like this: http://chan.sankakucomplex.com/post/index?tags=md5:eea8d884f3127c7a4024c531e4c1f23e I don't think the current parser system is able to generate a URL like that. Perhaps Hydrus_dev can look into this?
I just noticed there's a problem with the e621 script (possibly others): the part that extracts the rating isn't always correct on some images, because the rating isn't always in the usual location. The formula I'm currently using to extract the rating is:

1st div tag with id = stats
1st ul tag
3rd li tag
1st span tag

The problem lies in the "3rd li tag" step: the rating is not always the third thing displayed under the "Statistics" header; it depends on whether "source" is displayed or not, etc. The tag itself looks like this:

<li>Rating: <span class='redtext '>Explicit</span></li>

This could be solved if there were a way to check for the "Rating:" text within the <li> tag. I don't see a way to solve this currently, as the only thing you can check for is attributes on the <li> tag itself. Hydrus_dev, please add :)
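(A sketch of the string-content test being requested, done in Python with BeautifulSoup rather than hydrus's parser, using the HTML fragment above: match the <li> by its "Rating:" text instead of by position.)

    from bs4 import BeautifulSoup

    html = "<div id='stats'><ul><li>Rating: <span class='redtext '>Explicit</span></li></ul></div>"
    soup = BeautifulSoup(html, "html.parser")
    stats = soup.find("div", id="stats")
    # Find the <li> whose text starts with "Rating:", wherever it sits in the list,
    # instead of blindly taking the 3rd <li>.
    rating_li = next(li for li in stats.find_all("li")
                     if li.get_text(strip=True).startswith("Rating:"))
    rating = rating_li.find("span").get_text(strip=True).lower()  # "explicit"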
>>4598 I've been fiddling with different configurations in the script but didn't get anywhere. I wrote the script to match that url format, but Hydrus keeps adding an equals sign after the colon. Maybe someone could write a script for the visually-similar page that sankaku offers; that might work instead. https://chan.sankakucomplex.com/post/similar?
(2.60 KB 512x84 iqdb sankaku.png)

>>4603 Yeah, the problem is Hydrus wants to use the tag=value format for everything, but here it needs to be tag:value instead. Meanwhile, here's an iqdb script for sankaku, similar to what you suggested.
>>4612 Thank you for your contribution. This was way over my head.
>>4600
>here's a script
>no, no there isnt
>but it has replies, wtf
>Services > Parsing Scripts > Import
>import from image
what the fuck is this program
>>4615 The best image manager in existence.
(2.53 KB 512x84 sankaku md5.png)

>>4598 >>4595 Here's a working sankaku script.
>>4618 Thank you! "/post/show" isn't listed in their API documentation for some reason…
>>4598 What a pain! I'll make a note of that and figure out a solution. Even if another way was found in >>4618 , I'm sure something like this will come up again. An ugly workaround, btw, would be to set the 'file identifier type' to 'custom input' and then paste the md5:abcd… in manually for each request. That would obviously be a pain, but it would work well for a one-shot. >>4600 I think I just replied to you in the main release thread. Thank you for this report, I will add a string contents test to the tag search. >>4615 >>4617 :^)
>>4615 Never played Artificial Academy or 3D Custom Girl? You can do amazing things with computers.
is there no imgur parsing script?
>>4887 For what? This thread is for parsing tags from boorus; you're thinking of scripts for the downloader engine, which isn't available yet, as the dev hasn't started working on it.
>>4894 Is there some way to automatically use these scripts to try tagging all the images in my DB?
>>5091 Not at the moment. But you can use HTAs in this thread to import a bunch of tag mappings from various boorus: https://8ch.net/hydrus/res/2651.html
>>5092 >Why is there no tag archive for sankaku?
Not strictly related, but for anyone looking for porn on Yahoo's new, somehow even shittier, soft-censored Tumblr: Boodigo has a dedicated Tumblr search (also blogspot and clips4sale). This is rather better than my former plan for exhaustively searching NSFW tumblrs, which was "find a new blog that hasn't been flagged yet, manually open a billion tabs in Vivaldi for reblog/source blogs that look related from the username or that pop up often, and then go through and import them". The backup backup plan was "wait for Hydrus to have custom login and download engines, and write an addon that downloads all reblog lists, searches through lots and lots of link lines, and rough-maps the interconnectivity of blogs to each other and to the original entry point(s)". I still might try to write a connectivity-mapping script for the hydrus db for all tags under a given namespace just for general usefulness, but for now I'm glad there's actually a way to search NSFW tumblrs.
I'm hoping a future parsing script update will let the parser download the looked-up file, so you can preview it and compare it against your own.
>>5096 I think it's because sankaku has some kind of limit on how many searches you can do, so it would take months to rip all the tags from the site.
Bump; hoping Hydrus_dev will get back to this soon, as there are currently several boorus that just won't work with Hydrus' current system. Some expect a url like this, for example: https://booru.com/post/index?tags=md5:36bd7e49bb64b91b731d3d6e2b3a807a Can't do it with the current system! To be honest, the best and most flexible way to handle this would probably be to let you enter a URL with placeholder tags in it that hydrus then replaces with the relevant information. Something like this: https://booru.com/post/index?tags=md5:<md5> Hydrus would then take that and replace <md5> with the actual md5 of the current image to generate the finished URL.
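(The placeholder idea above, sketched in Python. The template string and the <md5> token are the poster's proposal, not an existing hydrus feature at the time.)

    import hashlib

    def build_lookup_url(template: str, file_path: str) -> str:
        # Hash the file and substitute the digest into the template.
        with open(file_path, "rb") as f:
            md5 = hashlib.md5(f.read()).hexdigest()
        return template.replace("<md5>", md5)

    url = build_lookup_url("https://booru.com/post/index?tags=md5:<md5>", "some_image.jpg")
    # e.g. https://booru.com/post/index?tags=md5:36bd7e49bb64b91b731d3d6e2b3a807a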
>>5346 Pretty sure the custom download engine will be able to do all of that and more once it's made; he had a lot of feature ideas for it.
How do I use one of these scripts?
>>6832 tamper or greasemonkey
Anyone know of a script to rip the :orig files from twitter media posts?
>>6994 Well, I found a script; it works in tampermonkey even though it's for greasemonkey. It only redirects to the :orig files, but at least it saves a step. https://greasyfork.org/en/scripts/9510-twitter-image-orig-promoter/code
Here's the booru tag parser script for grease/tampermonkey with derpibooru included, since somebody asked in another thread: https://pastebin.com/XtpZAp5D
>>7354 Hm, looks like my previous post didn't work. Anyway, I updated the script to work with greasemonkey 4.0 (and broke the copy sound in the process). It's only meant as a temporary amateur fix until the author updates the original. https://github.com/leonpfeil/boorutagparser/blob/master/boorutagparser.user.js
Anyone have any updates on the parsing scripts for danbooru, sankaku and yandere? (I'm using the ones on cuddlebear92's page.) The danbooru one doesn't copy the medium category (like official art, high filesize, etc.), and none of them copy the rating in the right way: it's either nothing, or it gets copied as "rating:rating:". On another note, I saw the md5 script for sankaku isn't listed on the sharing page anymore. I thought that was because it no longer works, but I tried it and I still get tags normally.
>>7790 The whole parsing system will be updated in the coming month or so, with the existing file lookup scripts automatically converted along with it. I expect the ability to parse and test all this stuff to improve breddy soon. I imagine the existing share spaces will grow with more and better parsers as well.
The booru tag parsing script isn't grabbing the full-res image from Danbooru. These are all variations of the same image, and they parsed correctly:

http://danbooru.donmai.us/posts/2813183
http://danbooru.donmai.us/posts/2824474
https://gelbooru.com/index.php?page=post&s=view&id=3820627
https://gelbooru.com/index.php?page=post&s=view&id=3820897
https://gelbooru.com/index.php?page=post&s=view&id=3836020
https://yande.re/post/show/405601

This one did not parse correctly; it somehow downloaded a sample size of it. It's worth noting that Hydrus itself is unable to parse and download it, but the parsing script at least gets the sample res: http://danbooru.donmai.us/posts/2812948
Is there a way to scrape files and tags from Zerochan?
>>4475 Hitomi, Tsumino, Hentai2Read, HentaiCafe, NHentai, HBrowse and Goddess are what /a/ recommends when avoiding SadPanda
(5.80 KB 512x125 e621 pool lookup.png)

Here's an e621 pool lookup. It seems to work for me; images appear in the correct order in the browser pane. I just need to find a better way of tagging page:* and title:*. At the moment I drag the files onto Krename, which outputs to /tmp/hydrus/<title>/<page>.<ext>, and use the tag-based-on-filename import option.
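(The Krename step could equally be a few lines of Python; a sketch of laying files out as /tmp/hydrus/<title>/<page>.<ext> so the tag-based-on-filename import can pick up title: and page:. The title and page values here are assumed inputs from wherever you got the pool.)

    import shutil
    from pathlib import Path

    def stage_for_import(src: Path, title: str, page: int) -> None:
        # Mirror the Krename layout: /tmp/hydrus/<title>/<page>.<ext>
        dest = Path("/tmp/hydrus") / title / f"{page}{src.suffix}"
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)

    stage_for_import(Path("pool_image_03.jpg"), "some pool title", 3)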
(4.69 KB 512x111 a.png)

Just made a realbooru one. Fucking parsers are a pain in the ass.
>>11590 Wait, I fucked up. Here's the fixed version.
>>4475 Can any custom parsers handle logins? The twitter gallery situation is still out of the picture, and has been for a few months now. Fur Affinity and InkBunny parsers, if made without login support, will barely scrape any content either. I know Hdev said an FA gallery parser is coming, but without login support it's hardly worth the work to make one, imo.
>>11616 You can make your own login scripts, but IMO it's not worth it, especially when the site makes heavy use of javascript or captchas. Instead, just copy the cookies from your browser session to get logged in: network > data > review session cookies. Inkbunny needs "PHPSESSID". For other sites, just copy anything that looks login-related (username, base64 or hex string values) until it works.
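(The same trick outside hydrus, sketched with Python's requests: a session carrying a cookie copied from a logged-in browser is treated as logged in. The PHPSESSID name is from the post above; the value is whatever your own browser holds.)

    import requests

    session = requests.Session()
    # Paste the value from your browser's logged-in session here.
    session.cookies.set("PHPSESSID", "value-from-your-browser", domain="inkbunny.net")
    page = session.get("https://inkbunny.net/")  # now served as the logged-in user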
What do I need to learn about HTML or JSON so I can make downloaders?
I'm trying to use the iqdb-tagger python script, but there is a PermissionError when it tries to write to the windows temp folder. Anyone know how to fix this? I tried setting iqdb-tagger-server.exe, iqdb-tagger.exe and python.exe to run as administrator, but it doesn't help. I'm on Windows 10. https://github.com/rachmadaniHaryono/iqdb_tagger
>>7394 I've been using the tag parser and server (https://github.com/JetBoom/boorutagparser) fine until recently: the random place he decided to host the sound went down, breaking a lot of shit. Thought I'd leave a note for anyone having problems: just right-click on the script to edit it, then comment out (//) anything to do with the sound or the variable it's stored in. That should get it working again.
>>12763 I only use the parser, and just deleted the link to the audio file itself. Everything in the parser still works even with the dead link left in, but you get that stupid login prompt. And here I thought the boorus got hit with some new malware or something.
What's the deal with it not working on derpibooru anymore?
I installed and tried using iqdb_tagger, but it complains that the 'hydra-python-core' distribution was not found and is required by hydrus. What gives?
(60.41 KB 1151x616 concern.jpg)

Has Pixiv parsing stopped working for anyone else recently?
>>13042 What do you mean? There was no parser for pixiv. If you mean those extensions that let you direct-load the images, those have been broken for a year or more, since pixiv keeps editing its site.
(22.17 KB 889x277 W2bsGlw.png)

>>13044 I might have found a custom set from CuddleBear92's GitHub repo (I sure as fuck didn't write them), but I had been reliably importing pixiv urls just days ago and now they error out; I can't find anything. I haven't looked into it too hard yet, but was wondering if I'm alone.
>>13045 I think it's just you (I'm using the Hydrus default pixiv parser). I made my 32 artist subs check just now, and they went through with no errors. But they had already checked recently, so there weren't any files to snag. No idea why it would work yesterday but not today, unless your set was made before they revised their site and they just happened to leave the old code running as a fallback till now.
The built-in script for using iqdb to look up tags from danbooru works for me. There are many more like it on cuddlebear92's website, but they are two years old and don't seem to work at all anymore. I just want something that works the same as the built-in function for other sites like sancom, gelbooru, etc., but it seems I'm left high and dry. I don't understand why they don't work anymore, either. I went through the logic of the iqdb gelbooru script, for instance, and compared it with the HTML actually sent back by the website; the logic still seems sound.
>>13042 Not just you; the same happened to me on both the default hydrus parsers and the custom pixiv all-in-one set. Everything gets ignored, and has been for a couple of days. All the pages come in as if I'm not logged in, for whatever reason.
>>13052 Hmm, I've found that the gelbooru one actually works off and on. Sometimes it oddly just returns a list with 4 crosses instead of a list of actual tags, though. Now then, what I'd really like to do is automate running file lookup scripts on more than one file and automatically apply all the tags to each file. There doesn't seem to be a way to do this through the interface when more than one file is selected, but there has to be a way, right?
Hit the same pixiv issue just now. The login itself doesn't seem to be the problem; I reset and redid the login within Hydrus, but that seems to have changed nothing.
Pixiv changed their API, so the parser had to be redone. You can replace the old one with this one, or wait until Wednesday, as it should be in the next release. Also, pixiv added a captcha to login, so you have to import cookies manually now; the login in hydrus won't work.
(3.58 KB 512x111 newsankakuparser.png)

The sankaku parser someone posted on this board that was supposed to remove the 2000-file limit didn't work properly for me, I think due to the naive way the parser fetched the next gallery page's data, so I made a fix a while ago that works on my machine (TM). Please let me know if it works on yours, too.
>>13138 Working a treat right now. I understand a bit of html, but these parsers make no sense to me. Maybe I'll sit down and spend some time figuring out how to do this myself sometime.
(3.63 KB 512x113 8kun downloader.png)

I'm not sure if this has been fixed yet, but I modified the default 8ch parsers to allow hydrus to download 8kun threads with filenames.
The JSON API for boards like gelbooru returns all the tags, as well as the path to the files, hash, source, updated time, etc. Example: https://gelbooru.com/index.php?page=dapi&json=1&s=post&q=index&limit=50&tags=cat%20rating:safe&pid=2 (The tags are HTML-escaped, but I don't know about other entries.) So why do the gallery downloaders scrape HTML for each post instead of using all the information from the search request? If I do a search for a set of tags, the downloader has to download the HTML for every single post's page just to check for duplicates and tags. That's a lot of wasted resources and effort for both client and server. If I already have all 50 files that turn up in the linked search, I did 1 request instead of 51 in total to verify that. Similarly, if I had to download all the images, it's 51 requests instead of 101 in total, with the bonus that no HTML scraping had to be performed.
I noticed gelbooru's JSON API returns the tags as a single string, with each tag delimited by spaces. Is there a way to split a JSON string match into multiple entries?
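(Both points sketched in Python: one JSON request instead of fifty HTML pages, and splitting the space-delimited tag string. This assumes the response is a flat list of post objects with tags/file_url keys, as described in the posts above; the exact key names are assumptions about the API of the time.)

    import html
    import requests

    params = {
        "page": "dapi", "s": "post", "q": "index", "json": 1,
        "limit": 50, "tags": "cat rating:safe", "pid": 2,
    }
    posts = requests.get("https://gelbooru.com/index.php", params=params).json()
    for post in posts:
        tags = [html.unescape(t) for t in post["tags"].split()]  # one string -> many tags
        file_url = post["file_url"]
        md5 = post.get("md5") or post.get("hash")  # key name varied between gelbooru versions
        # one request covers tags, hash and file location for up to 50 posts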
So I'm a newcomer to making downloaders. I made a bunch of url classes, such as one for an HTML page of an album that contains many images; it redirects to an API call, which also has its own class. I made parsers for the API response, selected which API query parameter corresponds to the next page (such as offset), and even added a next-page URL in the parser. But no matter what I do, when I drag and drop an album's URL into Hydrus, it only downloads the first page worth of images and never goes further. Is it supposed to work like that? Do I have to make something like a GUG to make the continuous downloading work?
(2.71 KB 512x112 e621_updated_parser.png)

Friendly neighborhood anon here - e621 seems to have added 'lore' and 'meta' tag types which the default parser can't catch; this updated parser catches them.
I previously used a modified version of saucenao's generic script to automatically(-ish) reverse image search untagged images as they show up, but now that e621 has its own reverse search, I whipped up my own python script. e621's reverse search also doesn't have a cap on searches per 30s/24hr (it does require an account, though). https://gist.github.com/corposim/b7ccb6a2c8814032ddd65db91b371dc2
(3.17 KB 512x113 babeswp_docl.png)

wasted my time on this
This might have been asked before, but is there a downloader for NicoSeiga? If not, does anybody know other tools for that?
I'm trying to get Hydrus to download from smugloli.net. I have made url classes that match the URL and created an API URL for the json, but when I try to watch a thread it instantly says "DEAD" with the log message saying there was no parser. It should work if the "4chan-style API parser" is used, but I have no clue how to make it use that.
Anyone know what the situation is with gfycat redirecting NSFW content to some sort of sister site? I guess they intend for you to browse their new site "redgifs", but following old nsfw gfycat links takes me to "gifdeliverynetwork". Anyway, in short, I got some sort of gfycat/redgifs downloader bundle from cuddlebear's hydrus scripts git repo, but I'm not really sure what to do with them, and I can't download videos straight from redgifs like I used to with gfy. Anyone else in a similar spot?
With the number of artists attempting to migrate to pillowfort from twitter, I tried my hand at building something to parse pillowfort posts. It could probably still use some cleanup and correction, but I figured it was worth putting out there, since I've gotten it to work pretty well for me so far.
(3.63 KB 512x112 realbooru.png)

here's an updated realbooru downloader; it includes a gug, post and gallery urls, and a parser. tags work well.
Can the nijie parser download video and manga? It doesn't look like it from what I saw, but I may have missed a step. While I'm asking, how would I automatically fetch the nijie work:# ?
(3.85 KB 512x108 agnph_all_in_one.png)

Friendly neighborhood anon here. Someone once asked for an agn.ph downloader. This is an all-in-one that should work for the site.
is there a parser for the FA Onion Archive?
anything for rule34hentai?
(14.14 KB 512x133 docl_instagram.png)

^wrong one, here is the one that works. tagging kinda works, but location tags are busted (instagram)
>>14437 I think this has changed again. I'll give it a look, but I am not good at this at all.
>>14710 Anything on this anon?
(8.35 KB 512x166 realbooru.png)

realbooru parser that functions at least
Sankaku is now hiding lolis. Is there some way to get around this?
I'm not sure if GUGs can make these, but anyone have a module for setting up Youtube subscriptions?
>>14782 They're not hiding lolis. I don't understand why I keep hearing this. Did you check the mature content option in settings and clear your account blacklist? Do you have an account in the first place?
Can someone help me understand what parsing scripts are for, and how to use them? Are they to improve the number of tags that are found for images? Like a reverse search?

