https://www.youtube.com/watch?v=oWl-xtan44o
windows
zip:
https://github.com/hydrusnetwork/hydrus/releases/download/v320/Hydrus.Network.320.-.Windows.-.Extract.only.zip
exe:
https://github.com/hydrusnetwork/hydrus/releases/download/v320/Hydrus.Network.320.-.Windows.-.Installer.exe
os x
app:
https://github.com/hydrusnetwork/hydrus/releases/download/v320/Hydrus.Network.320.-.OS.X.-.App.dmg
tar.gz:
https://github.com/hydrusnetwork/hydrus/releases/download/v320/Hydrus.Network.320.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz:
https://github.com/hydrusnetwork/hydrus/releases/download/v320/Hydrus.Network.320.-.Linux.-.Executable.tar.gz
source
tar.gz:
https://github.com/hydrusnetwork/hydrus/archive/v320.tar.gz
I had a great week. The downloader overhaul is in its last act, and I've fixed and added some other neat stuff. There's also a neat hydrus-related project for advanced users to try out.
Late breaking edit: Looks like I have broken e621 queries that include the '/' character this week, like 'male/female'! Hold off on updating if you have these, or pause them and wait a week for me to fix it!
misc
I fixed an issue introduced in last week's new pipeline with new subs sometimes not parsing the first page of results properly. If you missed files you wanted in the first sync, please reset the affected subs' caches.
Due to an oversight, a mappings cache that I now take advantage of to speed up tag searches was missing an index that would have sped it up even further. I've now added these indices (your clients will spend a minute generating them on update), and most tag searches are now superfast! My IRL client was taking 1.6s to do the first step of finding 5000-file tag results, and now it does it in under 5ms! Indices!
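For the curious, hydrus stores all of this in SQLite, so the fix boils down to adding a second index to the cache table. Here is a rough sketch of the idea in Python, with made-up table and column names rather than the client's real schema:

import sqlite3

db = sqlite3.connect( 'example_mappings.db' )

# hypothetical cache table: the primary key makes one lookup direction fast...
db.execute( 'CREATE TABLE IF NOT EXISTS example_mappings_cache ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) );' )

# ...and the extra index covers the other direction, letting those lookups seek
# through a b-tree instead of scanning the whole table
db.execute( 'CREATE INDEX IF NOT EXISTS example_mappings_cache_hash_id_index ON example_mappings_cache ( hash_id );' )

db.commit()
db.close()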
The hyperlinks on the media viewer now use any custom browser launch path in options->files and trash.
downloader overhaul (easy)
I have now added gallery parsers for all the default sites hydrus supports out of the box. Every regular download now parses entirely in the new system. With luck, you won't notice any difference, but let me know if you get any searches that terminate early or any other problems.
I have also written the new Gallery URL Generator (GUG) objects for everything, but I have not yet plugged these in. I am now on the precipice of switching this final legacy step over to the new system. This will be a big shift that will finally allow us to have new gallery 'searchers' for all kinds of new sites. I expect to do this next week.
When I do the GUG switch, anything that is supported by default in the client should switch over silently and automatically, but if you have added any new custom boorus, a small amount of additional work will be required on your end to get them working again. I will work with the other parser-creators in the community to make this as painless as possible, and there will be instructions in next week's release post. In any case, I expect to roll out nicer downloaders for the popular desired boorus (derpibooru, FA, etc…) as part of the normal upcoming update process, along with some other new additions like artstation and hopefully twitter username lookup.
In any case, watch this space! It's almost happening!
downloader overhaul (advanced)
So, all the GUGs are in place, and the dialog now saves. If you are interested in making some of your own, check out what I've done. I'm going to swap out the legacy 'gallery identifier' object for GUGs this coming week, and fingers crossed, it will mostly all just swap over no problem. I can update existing gallery identifiers to my new GUGs, which will automatically inherit the url classes and parsers I've already got in place, but custom boorus are too complicated for me to update completely automatically. I will try to auto-generate gallery and post url parsers for them, but users will need new GUGs and url classes to get those boorus working again. I think the best solution is to direct medium-level users to the parser github and have them link things together manually, and then follow up with whatever 'easy import' object I come up with to bundle downloader capability into a single object. And as I say above, I'll also fold the more popular downloaders into some regular updates. I am open to discussing this more if you have ideas!
Furthermore, I've extended url classes this week to allow 'default' values for path components and query parameters. If that component or parameter is missing from a given URL, it will still be recognised as matching the URL class, but it will gain the default value during import normalisation. e.g. the kind of URL safebooru gives your browser when you type in a query:
https://safebooru.org/index.php?page=post&s=list&tags=contrapposto
will be automatically populated with an initialising pid=0 parameter:
https://safebooru.org/index.php?page=post&pid=0&s=list&tags=contrapposto
This helps with several problems of the "the site gives a blank page/index value for the first page, which I can't then match to a paged URL that will increment via the url class" kind. It will particularly help when I add drag-and-drop search: we want a user to be able to type a query into their browser, check it looks good, and then DnD the URL the site gave them straight into hydrus, with all the page stuff getting sorted behind the scenes without them having to think about it.
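As a rough illustration of what that normalisation step does (a minimal sketch with a hypothetical defaults table, not the client's actual url class code):

from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

# hypothetical defaults for a gallery url class -- illustration only
DEFAULT_QUERY_PARAMS = { 'pid' : '0' }

def normalise_gallery_url( url ):
    parts = urlparse( url )
    query_dict = dict( parse_qsl( parts.query ) )
    # fill in any parameters the user's browser url happened to omit
    for ( name, default ) in DEFAULT_QUERY_PARAMS.items():
        query_dict.setdefault( name, default )
    new_query = urlencode( sorted( query_dict.items() ) )
    return urlunparse( parts._replace( query = new_query ) )

# normalise_gallery_url( 'https://safebooru.org/index.php?page=post&s=list&tags=contrapposto' )
# returns 'https://safebooru.org/index.php?page=post&pid=0&s=list&tags=contrapposto'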
I've updated a bunch of the gallery url classes this week with these new defaults, so again, if you are interested, please check them out. The Hentai Foundry ones are interesting.
I've also improved some of the logic behind download sites' 'source url' pre-import file status checking. Now, if URL X at Site A provides a source URL Y pointing to Site B, and the file Y is mapped to also has another URL Z that fits the same url class as X, then Y is distrusted as a source (wew). This stops false positive source url recognition when the booru gives the same 'original' source url for multiple files (including alternate/edited files). e621 has particularly had several of these issues, and I am sure several others do as well. I've been tracking this issue with several people, so if you have been hit by this, please let me know if this change fixes anything, particularly for new files going forward, which have yet to be 'tainted' by multiple incorrect known url mappings. I'll also be adding some 'just download the damned file' checkboxes to file import options, as I have previously discussed.
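In rough Python, the new rule looks something like this (the helper names and lookup dicts here are made up for illustration; the client's real code is structured differently):

def source_url_is_trusted( import_url, source_url, url_to_file_id, file_id_to_urls, get_url_class ):
    # which file do we already have mapped to the parsed source url?
    file_id = url_to_file_id.get( source_url )
    if file_id is None:
        return True # we know nothing about this source, so it cannot mislead us
    import_url_class = get_url_class( import_url )
    # if that file already has a different url of the same class as the url we
    # are importing from (i.e. another post on the same booru), the source
    # attribution is probably shared across alternates/edits, so we ignore it
    # for the pre-import 'already in db/previously deleted' decision
    for known_url in file_id_to_urls.get( file_id, set() ):
        if known_url != import_url and get_url_class( known_url ) == import_url_class:
            return False
    return True

In the X/Y/Z example above, Y only gets distrusted because Z shares X's url class.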
A user on the discord helpfully submitted some code that adds an 'import cookies.txt' button to the review session cookies panels. This could be a real neat way to effect fake logins, where you just copy your browser's cookies, so please play with this and let me know how you get on. I had mixed success getting different styles of cookies.txt to import, so I would be interested in more information, and to know which sites work great at logging in this way, and which are bad, and which cookies.txt browser add-ons are best!
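If you want to peek at what a given cookies.txt actually contains before importing it, Python's standard library can read the usual Netscape format. A quick sketch (the file path is just a placeholder):

import http.cookiejar

# MozillaCookieJar reads the classic Netscape cookies.txt format, which is what
# most 'export cookies' browser add-ons produce
jar = http.cookiejar.MozillaCookieJar( 'cookies.example.txt' )
jar.load( ignore_discard = True, ignore_expires = True )

for cookie in jar:
    print( cookie.domain, cookie.name, cookie.value )

Note that this loader expects a '# Netscape HTTP Cookie File'-style header line at the top of the file, which may be one reason some add-ons' exports import cleanly and others do not.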
a web interface to the server
I have been talking for a bit with a user who has written a web interface to the hydrus server. He is a clever dude who has done some neat work, and his project is now ready for people to try out. If you are fairly experienced in hydrus and would like to experiment with a nice-looking computer- and phone-compatible web interface to the general file/tag mapping system hydrus uses, please check this out:
https://github.com/mserajnik/hydrusrvue
https://github.com/mserajnik/hydrusrv
https://github.com/mserajnik/hydrusrv-docker
In particular, check out the live demo and screenshots here:
https://github.com/mserajnik/hydrusrvue/#demo
Let him know how you like it! I expect to write proper, easier APIs in the coming years, which will allow projects like this to do all sorts of new and neat things.
full list
- clients should now have objects for all default downloaders. everything should be prepped for the big switchover:
- wrote gallery url generators for all the default downloaders and a couple more as well
- wrote a gallery parser for deviant art - it also comes with an update to the DA url class because the meta 'next page' link on DA gallery pages is invalid wew!
- wrote a gallery parser for hentai foundry, inkbunny, rule34hentai, moebooru (konachan, sakugabooru, yande.re), artstation, newgrounds, and pixiv artist galleries (static html)
- added a gallery parser for sankaku
- the artstation post url parser no longer fetches cover images
- url classes can now support 'default' values for path components and query parameters! so, if your url might be missing a page=1 initialisation value due to user drag-and-drop, you can auto-add it in the normalisation step!
- if the entered default does not match the rules of the component or parameter, it will be cleared back to none!
- all appropriate default gallery url classes (which is most) now have these default values. all default gallery url classes will be overwritten on db update
- three test 'search initialisation' url classes that attempted to fix this problem a different way will be deleted on update, if present
- updated some other url classes
- when checking source urls during the pre-download import status check, the client will now distrust parsed source urls if the files they seem to refer to also have other urls of the same url class as the file import object being actioned (basically, this is some logic that tries to detect bad source url attribution, where multiple files on a booru (typically including alternate edits) are all source-url'd back to a single original)
- gallery page parsing now discounts parsed 'next page' urls that are the same as the page that fetched them (some gallery end-points link themselves as the next page, wew)
- json parsing formulae that are set to parse all 'list' items will now also parse all dictionary entries if faced with a dict instead!
- added new stop-gap 'stop checking' logic in subscription syncing for certain low-gallery-count edge-cases
- fixed an issue where (typically new) subscriptions were bugging out trying to figure a default stop_reason on certain page results
- fixed an unusual listctrl delete item index-tracking error that would sometimes cause exceptions on the 'try to link url stuff together' button press and maybe some other places
- thanks to a submission from user prkc on the discord, we now have 'import cookies.txt' buttons on the review sessions panels! if you are interested in 'manual' logins through browser-cookie-copying, please give this a go and let me know which kinds of cookies.txt do and do not work, and how your different site cookie-copy-login tests work in hydrus.
- the mappings cache tables now have some new indices that speed up certain kinds of tag search significantly. db update will spend a minute or two generating these indices for existing users
- advanced mode users will discover a fun new entry on the help menu
- the hyperlinks on the media viewer hover window and a couple of other places are now a custom control that uses any custom browser launch path in options->files and trash
- fixed an issue where certain canvas edge-case media clearing events could be caught incorrectly by the manage tags dialog and its subsidiary panels
- think I fixed an issue where a client left with a dialog open could sometimes run into trouble later trying to show an idle time maintenance modal popup and give a 'C++ assertion IsRunning()' exception and end up locking the client's ui
- manage parsers dialog will now autosort after an add event
- the gug panels now normalise example urls
- improved some misc service error handling
- rewrote some url parsing to stop forcing '+'->' ' in our urls' query texts
- fixed some bad error handling for matplotlib import
- misc fixes
next week
The big GUG overhaul is the main thing. The button where you select which site to download from will seem only to get some slightly different labels, but in truth a whole big pipeline behind that button needs to be shifted over to the new system. GUGs are actually pretty simple, so I hope this will only take one week, but we'll see!