https://www.youtube.com/watch?v=OqEBF2F-4z8
windows
zip:
https://github.com/hydrusnetwork/hydrus/releases/download/v325/Hydrus.Network.325.-.Windows.-.Extract.only.zip
exe:
https://github.com/hydrusnetwork/hydrus/releases/download/v325/Hydrus.Network.325.-.Windows.-.Installer.exe
os x
app:
https://github.com/hydrusnetwork/hydrus/releases/download/v325/Hydrus.Network.325.-.OS.X.-.App.dmg
tar.gz:
https://github.com/hydrusnetwork/hydrus/releases/download/v325/Hydrus.Network.325.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz:
https://github.com/hydrusnetwork/hydrus/releases/download/v325/Hydrus.Network.325.-.Linux.-.Executable.tar.gz
source
tar.gz:
https://github.com/hydrusnetwork/hydrus/archive/v325.tar.gz
I had a difficult week, but I got some great work done. Save for some final help revisions, the downloader overhaul is complete.
final downloader work
So, I managed to finish 13 of my 15 final jobs in the downloader overhaul. All that remains is a help pass for subscriptions and a better intro to gallery and watcher downloading, which I will fold into normal work over the coming weeks. This has been a longer journey than I expected, but I feel great to be done. This final work is mostly unusual stuff that got put off.
For instance, subscriptions can now run without a working popup (i.e. completely in the background)! It works just like import folders, as a per-subscription checkbox option, and still permits the final files to be published to a popup button or a named page. I recommend only trying this after the initial sync has completed, just so you know the sub works ok (and isn't accidentally downloading 2,000 garbage files in the background!).
Also, subscription queries can now take an optional 'display name'. This display name will appear in lieu of the actual query text in most display contexts, like the edit sub panel or a popup message or a publishing destination. A query for pixiv_artist:93360 can be more neatly renamed to and managed as 'houtengeki', and 'xxxxxxxx' can be renamed 'family documents, DO NOT ENTER' and so on.
And subscription queries now have individual tag import options that only support 'additional tags'. So, if you want to give a particular query a blog-related creator tag or a personal processing tag, this is now simple.
If you 'try again' on a 'deleted' file import, the client will now ask if you want to erase that deletion record first (i.e. overriding it and importing anyway)! This is obviously much quicker and simpler than having to temporarily edit the file import options to not exclude previously deleted files.
Gallery and Watcher pages now have quick 'retry failed' buttons and list right-click menu entries.
advanced stuff
If you are in advanced mode, subscription edit panels now have a 'get quality info' button. If you select some queries and hit this (oh fug, I just tested it IRL and discovered it does it for all queries, not just selected, wew, I will fix this for next week), the client will do some hacky db work and present you with a summary of how many files currently in those queries are inbox/archive/deleted, and a percentage of archived/(archived+deleted)–basically "after processing, you kept 5% of this query". This should help you figure out which queries are actually 'good' for you and which are just giving you 98% trash. I can do more here, but this is just a quick prototype. Feedback would be appreciated.
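To make that percentage concrete, here is a minimal python sketch of the same idea, using made-up counts instead of the client's actual db queries (the function name is mine, not real client code):

def keep_ratio( num_archived, num_deleted ):
    
    # the quality figure is archived / ( archived + deleted )
    total_processed = num_archived + num_deleted
    
    if total_processed == 0:
        
        return None # nothing processed yet, so no meaningful ratio
        
    
    return num_archived / total_processed
    

# e.g. a query where you archived 50 files and deleted 950:
print( '{:.0%}'.format( keep_ratio( 50, 950 ) ) ) # 5%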
The downloader easy-import pngs now support custom http headers and bandwidth rules! This is a bit experimental, so test it a bit please before you roll it out for real. If you have custom headers or specific bandwidth rules for the domains covered by the GUGs you are exporting, they will be added automatically, and there's a button to add them separately as well. Exporters and importers will get detailed previews of what these new 'domain metadata' objects include.
If you are in advanced mode, file import options now have options to turn off the url- and hash-based 'skip because already in db/previously deleted' checks. They are basically an "I don't care what you think the url is, just download it anyway and see if it is a new file m8" switch. If we have previously discussed a particular url conflict that was causing a download to be incorrectly skipped, please try these options and reattempt the problem file. Don't use them for regular downloads and subs, or you'll just be wasting bandwidth. Advanced file import options now also allow you to turn off source url association completely.
full list
- added a 'show a popup while working' checkbox to the edit subscription panel - be careful with it; I think maybe only turn it off after you are happy everything is set up right and the sub has run once
- advanced mode users will see a new 'get quality info' button on the edit subscription panel. this will show some ugly+hacky inbox/archived/deleted info on the selected queries to help you figure out if you are only archiving, say, 2% of one query. this is a quickly made but cpu-expensive way of calculating this info. I can obviously expand it in future, so I would appreciate your thoughts
- subscription queries now have an optional display name, which has no bearing on their function but if set will appear instead of query text in various presentation contexts (this is useful, for instance, if the downloader query text deals in something unhelpful like integer artist_id)
- subscription queries now each have their own simple tag import options! this only allows 'additional tags', in case you want to add some simple per-query tags
- selecting 'try again' on file imports that previously failed due to 'deleted' will now pop up a little yes/no asking if you would like to first erase these files' deletion records!
- the watcher and gallery import panels now have 'retry failed' buttons and right-click menu entries when appropriate
- the watcher and gallery import panels will now do some ui updates less frequently when they contain a lot of data
- fixed the new human-friendly tag sorting code for ungrouped lexicographic sort orders, where it was accidentally grouping by namespace
- downloader easy-import pngs can now hold custom header and bandwidth rules metadata! this info, if explicitly present for the appropriate domain, will be added automatically on the export side as you add gugs. it can also be bundled separately after manually typing a domain to add. on the import side, it is now listed as a new type. longer human-friendly descriptions of all bandwidth and header information being bundled will be displayed during the export and import processes, just as an additional check
- for advanced users, added 'do not skip downloading because of known urls/hashes' options to downloader file import options. these checkboxes work like the tag import options ones - ignoring known urls and hashes to force downloads. they are advanced and should not be used unless you have a particular problem to fix
- improved how the pre-import url/hash checking code is compared for the tag and file import options, particularly on the hash side
- for advanced users, added 'associate additional source urls' to downloader file import options, which governs whether a site's given 'source urls' should be added and trusted for downloaded files. turn this off if the site is giving bad source urls
- fixed an unusual problem where gallery searches with search terms that included the search separator (like '6+girls skirt', with a separator of '+') were being overzealously de/encoded (to '6+girls+skirt' rather than '6%2bgirls+skirt')
- improved how percent-encoded unicode characters in URLs' query parameters, like %E5%B0%BB%E7%A5%9E%E6%A7%98, are auto-converted to something prettier when the user sees them (a small sketch of both of these encoding behaviours follows this list)
- the client now tests whether 'already in db' results are actually backed by the file structure - if the actual file is missing despite the db record, the import will be force-attempted and the file structure hopefully healed
- gallery url jobs will no longer spawn new 'next page' urls if the job yielded 0 _new_ (rather than _total_) file urls (so this should fix loops that kept fetching the same x 'already in file import cache' results because the gallery just returned the same results for the n+1 page fetches - a tiny sketch of this stop condition also follows the list)
- in the edit parsing panels, if the example data currently looks like json, new content parsers will spawn with json formulae, otherwise they will get html formulae
- fixed an issue with the default twitter tweet parser pulling the wrong month for source time
- added a simple 'media load report mode' to the help debug menu to help figure out some PIL/OpenCV load order stuff
- the 'missing locations recovery' dialog that spawns on boot if file locations are missing now uses the new listctrl, so is thankfully sortable! it also works better behind the scenes
- this dialog now also has an 'add a possibly correct location' button, which will scan the given directory for the correct prefixes and automatically fill in the list for you
- fixed some of the new import folder error reporting
- misc code cleanup
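As mentioned in the list, here is a rough illustration of the two url encoding items, using python's standard urllib.parse rather than the client's internal code (so treat it as a sketch of the idea, not the actual fix):

from urllib.parse import quote, unquote

# a search term that contains the gallery's search separator (here '+') needs that character
# percent-encoded so it is not confused with the separator itself:
encoded_search = quote( '6+girls', safe = '' ) + '+' + quote( 'skirt', safe = '' )

print( encoded_search ) # 6%2Bgirls+skirt, not the ambiguous 6+girls+skirt

# and percent-encoded unicode in a query parameter can be decoded for prettier display:
print( unquote( '%E5%B0%BB%E7%A5%9E%E6%A7%98' ) ) # 尻神様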
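And a very simplified sketch of the new gallery paging rule from the list - stop when a page yields zero new file urls, not just zero urls total (the data here is fake, just to show the loop):

def process_gallery_pages( pages ):
    
    # 'pages' simulates successive gallery page fetches, each a list of parsed file urls
    seen_urls = set()
    
    for page_of_urls in pages:
        
        new_urls = [ url for url in page_of_urls if url not in seen_urls ]
        
        if len( new_urls ) == 0:
            
            print( 'no new file urls, so not spawning a next page url' )
            
            break
            
        
        seen_urls.update( new_urls )
        
        print( 'queueing {} new file urls'.format( len( new_urls ) ) )
        
    

# a site that starts repeating itself on page three:
process_gallery_pages( [ [ 'a', 'b' ], [ 'b', 'c' ], [ 'b', 'c' ] ] )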
next week
Now I will finish a simple login manager. Fingers crossed, I hope to spend a total of three to four weeks on it. I don't expect I'll have anything interesting ready for v326, but maybe I'll have some dummy ui for advanced users to play with.
Thanks everyone!