/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.





(28.58 KB 480x360 BU1F3uXyxkA.jpg)

Version 400 Anonymous 06/11/2020 (Thu) 14:28:02 Id: b0d37b No. 14433
https://www.youtube.com/watch?v=BU1F3uXyxkA

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v400/Hydrus.Network.400.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v400/Hydrus.Network.400.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v400/Hydrus.Network.400.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v400/Hydrus.Network.400.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v400.tar.gz

🎉🎉🎉 MERRY v400! 🎉🎉🎉

I had a great week of vacation, and then a great week finally getting the subscription data overhaul done.

subscriptions

When I first wrote subscriptions, they could only hold one simple query each. Queries have become much more complicated since then, and subscriptions can of course hold many queries at once, sometimes hundreds. The old monolithic method of storing and loading subs was creaking at the seams. This week fixes it, and subscriptions should now load and operate quickly for all normal operations.

Subscriptions are now broken into pieces. Essentially, instead of one thing holding everything, they now store each query as a separate object and load and save each from your database as they are needed. The 'top' of a subscription is now always in memory and allows the manage subscriptions dialog to start instantly. Subscriptions can also boot quickly, and will cause less lag as they finish up. It all saves time and database read/write.

The old '200,000 files' limit for subscriptions is gone. I wouldn't advise you make a sub with 10,000 queries just yet, but you do not have to worry about the size of any one sub too much any more.

There are no significant changes to how subscriptions look or are edited. All your existing subscriptions will be converted to the new format on update. However, this is a big change behind the scenes, and if you have big subs, it may take a minute or two to update your database. Your old subscription objects will also be backed up to a new subdirectory in your db directory, just in case anything goes wrong now or in the near future.

Unfortunately, as subscriptions are now more complicated, I did not have time to write a new import/export system for them. The duplicate/import/export buttons on the manage subscriptions dialog are hidden for now.

This took a lot of planning, prep, and work. I hope you find your subscriptions work nicer, and if you have any trouble, please let me know.

new downloaders

Twitter retired their old API on the 1st of June, which broke our downloader. There is unfortunately no good hydrus solution for their new API, but thanks to a user's efforts, I am rolling in a parser for nitter, a twitter wrapper, this week. It has three downloaders (one for media posts, one for retweets, and one that does both), so please play with it and then move your twitter subscriptions over to it. Also fixed should be derpibooru search and the md5 hash parsing of the danbooru downloader (which speeds up some downloading).

full list

- subscription data overhaul:
- the formerly monolithic subscription object is finally broken up into smaller pieces, reducing work and load lag and total db read/write for all actions
- subscriptions work the same as before, no user input is required. they just work better now™
- depending on the size and number of your subscriptions, the db update may take a minute or two this week. a backup of your old subscription objects will be created in your db directory, under a new 'legacy_subscriptions_backup' subdirectory
- the manage subscriptions dialog should now open within a second (assuming subs are not currently running). it should save just as fast, only with a little lag if you decide to make significant changes or go into many queries' logs, which are now fetched on demand inside the dialog
- when subscriptions run, they similarly only have to load the query they are currently working on. boot lag is now almost nothing, and total drive read/write data for a typical sub run is massively reduced
- the 'total files in a sub' limits no longer apply. you can have a sub with a thousand queries and half a million urls if you like
- basic subscription data is now held in memory at all times, opening up future fast access such as client api and general UI editing of subs. more work will happen here in coming weeks
- if due to hard drive fault or other unusual situations some subscription file/gallery log data is missing from the db, a running sub will note this, pause the sub, and provide a popup error for the user. the manage subscriptions dialog will correct it on launch by resetting the affected queries with new empty data
- similarly, if you launch the manage subs dialog and there is orphaned file/gallery log data in the db, this will be noticed, with the surplus data then backed up to the database directory and deleted from the database proper
- subscription queries can now handle domain and bandwidth tests for downloaders that host files/posts on a different domain to the gallery search step
- if subs are running when manage subs is booted, long delays while waiting for them to pause are less likely
- some subscription 'should run?' tests are improved for odd situations such as subs that have no queries or all DEAD queries
- improved some error handling in merge/separate code
- the 'show/copy quality info' buttons now work off the main thread, disabling the sub edit dialog while they work
- updated a little of the subs help
- .
- boring actual code changes for subs:
- wrote a query log container object to store bulky file and gallery log info
- wrote a query header object to store options and cache log summary info
- wrote a file cache status object to summarise important info so check timings and similar can be decided upon without needing to load a log
- the new cache is now used across the program for all file import summary presentation
- wrote a new subscription object to hold the new query headers and load logs as needed
- updated subscription management to deal with the new subscription objects. it now also keeps them in memory all the time
- wrote a fail-safe update from the old subscription objects to the new, which also saves a backup to disk, just in case of unforeseen problems in the near future
- updated the subscription ui code to deal with all the new objects
- updated the subscription ui to deal with asynchronous log fetching as needed
- cleaned up some file import status code
- moved old subscription code to a new legacy file
- refactored subscription ui code to a new file
- refactored and improved sub sync code
- misc subscription cleanup
- misc subscription ui cleanup
- added type hints to multiple subscription locations
- improved how missing serialisable object errors are handled at the db level
- .
- client api:
- the client api now delivers 'is_inbox', 'is_local', 'is_trashed' for 'GET /get_files/file_metadata'
- the client api's Access-Control-Allow-Headers CORS header is now '*', allowing all
- client api version is now 12
- .
- downloaders:
- twitter retired their old api on the 1st of June, and there is unfortunately no good hydrus solution for the new one. however, thanks to a user's efforts, a nice new parser for nitter, a twitter wrapper, is added in today's update. please play with it (it has three downloaders: one for a user's media, one for retweets, and one for both together) and adjust your twitter subscriptions to use the new downloader as needed. the twitter downloader is no longer included for new hydrus users
- thanks to a user's submission, fixed the md5 hash fetching for default danbooru parsers
- derpibooru gallery searching _should_ be fixed to use their current api
- .
- the rest:
- when the client exits or gets a 'modal' maintenance popup window, all currently playing media windows will now pause
- regrettably, due to some content merging issues that are too complicated to improve at the moment, the dupe filter will no longer show the files of processed pairs in the duplicate filter more than once per batch. you won't get a series of AB, AC, AD any more. this will return in future
- the weird bug where double-clicking the topmost recent tags suggestion would actually remove the top two items _should_ be fixed. general selection-setting on this column should also be improved
- middle-clicking on a parent tag in a 'write' autocomplete dropdown no longer launches a page with that invalid parent 'label' tag included, it just does the base tag. the same is true of label tags (such as 'loading…') and namespace tags
- when changing 'expand parents on autocomplete' in the cog button on manage tags, the respective autocomplete now changes whether it displays parents
- this is slightly complicated: a tag 'write' context (like manage tags) now presents its autocomplete tags (filtering, siblings, parents) according to the tag service of the parent panel, not the current tag service of the autocomplete. so, if you are on the 'my tags' panel and switch to 'all known tags' for the a/c, you will no longer get 'all known tags' siblings and parents and so on presented if 'my tags' is not set to take them. this was sometimes causing confusion when a list showed a parent but the underlying panel did not add it on tag entry
- to reduce blacklist confusion, when you launch the edit blacklist dialog from an edit tag import options panel, now only the 'blacklist' tab shows, the summary text is blacklist-specific, and the top intro message is improved. a separate 'whitelist' filter will be added in the near future to allow downloading of files only if they have certain tags
- 'hard-replace siblings and parents' in _manage tags_ should now correctly remove bad siblings when they are currently pending
- network->downloaders->manage downloader and url display now has a checkbox to make the media viewer top-right hover show unmatched urls
- the '… elide page tab names' option now applies instantly on options dialog ok to all pages
- added 'copy_bmp_or_file_if_not_bmpable' shortcut command to media set. it tries copy_bmp first, then copy_file if not a static image
- fixed some edit tag filter layout to stop long intro messages making it super wide
- fixed an issue where tag filters could accept non-whitespace-stripped entries and entries with uppercase characters
- fixed a display typo where the 'clear orphan files' maintenance job, when set to delete orphans, was accidentally reporting (total number of thumbnails)/(number of files to delete) text in the file delete step instead of the correct (num_done/num_to_do)
- clarified the 'reset repository' commands in review services
- when launching an external program, the child process's environment's PATH is reset to what it was at hydrus boot (removing the hydrus base dir)
- when launching an external program from the frozen build, if some Qt/SSL specific PATH variables have been set to hydrus subdirectories by pyinstaller or otherwise, they are now removed (this hopefully fixes issues launching some Qt programs as external file launchers)
- added a separate requirements.txt for python 3.8, which can't handle PySide2 5.13.0
- updated help->about to deal better with missing mpv
- updated windows mpv to 2020-05-31 build, api version is now 1.108
- updated windows sqlite to 3.32.2

next week

Next week is a small jobs week. I'll get subscription import/export working and then hack away at my bugs list and other little things to catch up on.
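The overhaul described above (a small always-in-memory header per query, with the bulky file/gallery logs loaded from the database only when needed) can be sketched roughly like this. All class and variable names here are illustrative, not hydrus's actual code:

```python
# Sketch of the header/log split: headers stay in memory with a cached
# summary, while the bulky log container is fetched only on demand.
class QueryLogContainer:
    """Bulky per-query data: every file url and gallery url seen so far."""
    def __init__(self, file_log, gallery_log):
        self.file_log = file_log
        self.gallery_log = gallery_log

class QueryHeader:
    """Small per-query data, always in memory: options + cached summary."""
    def __init__(self, query_text, log_container_key, num_urls):
        self.query_text = query_text
        self.log_container_key = log_container_key
        self.num_urls = num_urls  # cached so 'should run?' checks need no log load

class FakeDB:
    """Stands in for the client db: stores log containers by key."""
    def __init__(self):
        self._store = {}
    def save(self, key, container):
        self._store[key] = container
    def load(self, key):
        return self._store[key]

class Subscription:
    """Holds only headers; loads one log container at a time while syncing."""
    def __init__(self, name, headers, db):
        self.name = name
        self.headers = headers
        self._db = db
    def sync(self):
        for header in self.headers:
            # only now is the bulky data for this one query read from disk
            log = self._db.load(header.log_container_key)
            yield header.query_text, len(log.file_log)

db = FakeDB()
db.save('k1', QueryLogContainer(file_log=['url_a', 'url_b'], gallery_log=[]))
sub = Subscription('artist watch', [QueryHeader('samus_aran', 'k1', num_urls=2)], db)
results = list(sub.sync())
```

Because the dialog and the sync loop only ever need the lightweight headers plus one log container at a time, opening manage subscriptions and booting a sub no longer scale with the total url count of the whole subscription.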
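On the client api change in the list above: the 'GET /get_files/file_metadata' response now carries 'is_inbox', 'is_local', and 'is_trashed' per file. A minimal sketch of building such a request, assuming the default Client API port and using a placeholder access key:

```python
# Build a request url for GET /get_files/file_metadata, whose per-file
# metadata now includes is_inbox / is_local / is_trashed.
import json
from urllib.parse import urlencode

API_BASE = 'http://127.0.0.1:45869'            # default Client API port
ACCESS_KEY = 'replace-with-your-access-key'    # placeholder, not a real key

def file_metadata_url(file_ids):
    # file_ids is sent as a JSON-encoded list in the query string
    params = urlencode({'file_ids': json.dumps(file_ids)})
    return f'{API_BASE}/get_files/file_metadata?{params}'

url = file_metadata_url([123, 456])
# A real call would pass the key in the Hydrus-Client-API-Access-Key
# header, e.g. with the requests library:
#   requests.get(url, headers={'Hydrus-Client-API-Access-Key': ACCESS_KEY})
# and each entry in the response's 'metadata' list would then carry the
# new booleans: entry['is_inbox'], entry['is_local'], entry['is_trashed']
```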
(1.99 MB 310x176 1330022156975.gif)

>subs dialog used to take a minute to load up
>is instant now
Thanks hydev!
I'm getting an exception for e6 subscriptions with this new version. I don't know where this URL format is coming from since the site and the Hydrus parsers don't have this /post/index/page/tag format anymore.
URLClassException
Could not find a parser for https://e621.net/post/index/1/chiakiro!
Traceback (most recent call last):
File "/opt/hydrus/hydrus/client/networking/ClientNetworkingDomain.py", line 668, in _GetURLToFetchAndParser
( parser_url_class, parser_url ) = self._GetNormalisedAPIURLClassAndURL( url )
File "/opt/hydrus/hydrus/client/networking/ClientNetworkingDomain.py", line 568, in _GetNormalisedAPIURLClassAndURL
raise HydrusExceptions.URLClassException( 'Could not find a URL Class for ' + url + '!' )
hydrus.core.HydrusExceptions.URLClassException: Could not find a URL Class for https://e621.net/post/index/1/chiakiro!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/hydrus/hydrus/client/importing/ClientImportSubscriptions.py", line 1347, in Sync
self._SyncQueries( job_key )
File "/opt/hydrus/hydrus/client/importing/ClientImportSubscriptions.py", line 605, in _SyncQueries
self._SyncQuery( job_key, gug, query_header, query_log_container, status_prefix )
File "/opt/hydrus/hydrus/client/importing/ClientImportSubscriptions.py", line 687, in _SyncQuery
( login_ok, login_reason ) = query_header.GalleryLoginOK( HG.client_controller.network_engine, self._name )
File "/opt/hydrus/hydrus/client/importing/ClientImportSubscriptionQuery.py", line 372, in GalleryLoginOK
nj = self._example_gallery_seed.GetExampleNetworkJob( self._GenerateNetworkJobFactory( subscription_name ) )
File "/opt/hydrus/hydrus/client/importing/ClientImportGallerySeeds.py", line 195, in GetExampleNetworkJob
( url_to_check, parser ) = HG.client_controller.network_engine.domain_manager.GetURLToFetchAndParser( self.url )
File "/opt/hydrus/hydrus/client/networking/ClientNetworkingDomain.py", line 1600, in GetURLToFetchAndParser
result = self._GetURLToFetchAndParser( url )
File "/opt/hydrus/hydrus/client/networking/ClientNetworkingDomain.py", line 672, in _GetURLToFetchAndParser
raise HydrusExceptions.URLClassException( 'Could not find a parser for ' + url + '!' + os.linesep * 2 + str( e ) )
hydrus.core.HydrusExceptions.URLClassException: Could not find a parser for https://e621.net/post/index/1/chiakiro!

Could not find a URL Class for https://e621.net/post/index/1/chiakiro!
When attempting to commit an upload to my file repo after having downloaded an image in the simple downloader page, it fails and I get the following:

BadRequestException
400: File 422ea1c7bb060b02cd2e6dbee82826407f19d5c5d3b9e54c8d25952262088132 could not parse: 'NoneType' object has no attribute 'GetDBDir'
Traceback (most recent call last):
File "hydrus\core\HydrusThreading.py", line 342, in run
callable( *args, **kwargs )
File "hydrus\client\gui\ClientGUI.py", line 154, in THREADUploadPending
service.Request( HC.POST, 'file', { 'file' : file_bytes } )
File "hydrus\client\ClientServices.py", line 986, in Request
network_job.WaitUntilDone()
File "hydrus\client\networking\ClientNetworkingJobs.py", line 1429, in WaitUntilDone
raise self._error_exception
hydrus.core.HydrusExceptions.BadRequestException: 400: File 422ea1c7bb060b02cd2e6dbee82826407f19d5c5d3b9e54c8d25952262088132 could not parse: 'NoneType' object has no attribute 'GetDBDir'
Congratulations for the 400th release my fren. Keep up the great work! It's an awesome project.
(136.31 KB 634x354 1446992780966.png)

>400th release
Congrats, hydrus dev. Keep being amazing.
>>14433
Sweet buttery jesus. This looks great. I can tag all my images to find them betterer. So can I assume this v400 is the beta version?
>>14443
>So can I assume this v400 is the beta version?
No. It's updated weekly.
I had an ok week, though a little short on work time. I tweaked how subs work for the new system, getting them booting a little faster, re-added import/export/duplicate for subs (including importing+conversion from the old format), and cleaned up some bugs, including recent problems with linking certain downloader components together. The release should be as normal tomorrow. I got delayed on messages this week, I'll try and catch up a bit now this evening.
>>14439 >>14435 >>14440
Thanks lads. Keep on pushing.

>>14443 >>14446
Yep, I'm just a soulless madman. I typically put out a release every Wednesday, trying for by 8pm EST. Tomorrow will be v401. The code is imageboard tier, so while it can do some wacky fun stuff, it is also permanently a bit shit and always in beta. The ride never ends.

If you missed the help, check it out in your release under the client's help menu or here: https://hydrusnetwork.github.io/hydrus/ It has an extensive 'getting started' section that talks about updates and the other basics.
>>14438
Damn, thank you for this report. I believe I have fixed this for tomorrow, please let me know if you have any more trouble.
>>14436
Thank you for this report. I am sorry for the trouble. There are actually three things going on here:

1 - it turns out subscriptions fail to gracefully deal with gallery url tests when the url has no current definition
2 - e621 recently changed their default gallery url format, and the new default downloader did not still support the old one
3 - the new subscription tech sometimes randomly samples a slightly older previously visited gallery url for bandwidth and login tests

For 401, I have fixed 1 and also added a separate definition to cover 2 more nicely. This problem should be completely gone, please let me know if you have any more trouble.
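The failure mode described here (a url that no longer matches any url class) comes from the lookup step that raised the exception in the report above: a url must first match a 'url class' (roughly, a domain plus path pattern), which then links to a parser. A rough sketch of that matching, with illustrative patterns rather than hydrus's real definitions:

```python
# Sketch of url-class matching: try each known pattern in turn; with no
# match there is no parser, which is the URLClassException seen above.
import re

URL_CLASSES = [
    # the current e621 gallery format...
    ('e621 gallery page', re.compile(r'https://e621\.net/posts\?.*')),
    # ...and a separate definition covering the legacy format:
    ('e621 gallery page (old format)',
     re.compile(r'https://e621\.net/post/index/\d+/.+')),
]

def find_url_class(url):
    for name, pattern in URL_CLASSES:
        if pattern.match(url):
            return name
    raise ValueError(f'Could not find a URL Class for {url}!')

old_url = 'https://e621.net/post/index/1/chiakiro'
# with the legacy definition present, the old-format url now resolves
matched = find_url_class(old_url)
```

With only the first definition present, the old-format url falls through every pattern and raises, which mirrors the subscription's gallery-url test failing on a previously visited url.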
>>14451
Thanks. v401 seems to have fixed the problem.

