/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.


(17.45 KB 480x360 aUrULTifMPc.jpg)

Version 348 hydrus_dev 04/17/2019 (Wed) 22:48:04 Id: 57efa0 No. 12289
https://www.youtube.com/watch?v=aUrULTifMPc

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v348/Hydrus.Network.348.-.Windows.-.Extract.only.zip
windows exe: https://github.com/hydrusnetwork/hydrus/releases/download/v348/Hydrus.Network.348.-.Windows.-.Installer.exe
os x app: https://github.com/hydrusnetwork/hydrus/releases/download/v348/Hydrus.Network.348.-.OS.X.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v348/Hydrus.Network.348.-.Linux.-.Executable.tar.gz
source tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v348.tar.gz

I had a good week. It is mostly small updates and fixes.

all misc this week

After the recent weeks' thumbnail work, I became frustrated with how laggy the client's file system could get under heavy load. This week I have written a new file access locking system that has less latency under heavy simultaneous use and is also safer in certain edge cases. Multiple import queues, thumbnail fading, and regular media browsing will all interact with less lag now.

The Client API has several improvements: the /add_tags/add_tags call has its 'hashes' parameter fixed and now does sibling-collapse and parent-expansion on its 'add' tags (which you can turn off if you want with the new 'add_siblings_and_parents' parameter). Also, by default, the Client API service (and the client's Local Booru) no longer log their requests (you can turn this back on if you like under manage services), as this was very spammy (100MB+ logs!) and not useful for most users. And if you are an advanced user who needs CORS pre-flight for an API interface you are making, please check the new CORS checkbox, also under manage services. It defaults to off. This CORS stuff is new to me, and I have made very basic responses; please let me know if it does not do enough for you.
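To illustrate the new /add_tags/add_tags behaviour, here is a hedged sketch of a call that pends tags with sibling/parent expansion turned off. The port, request shape ('hashes', 'service_names_to_tags'), and header name follow the Client API docs of this era as I understand them, and the access key, hash, and service name are invented placeholders; treat all of them as assumptions to check against your own setup.

```python
import json
import urllib.request

API_URL = 'http://127.0.0.1:45869'  # assumed default Client API port
ACCESS_KEY = 'replace-with-your-access-key'  # placeholder, not a real key

# hypothetical payload: the hash and tag service name are invented examples
payload = {
    'hashes': ['0123456789abcdef' * 4],  # sha256 of the target file, as hex
    'service_names_to_tags': {'my tags': ['character:samus aran']},
    'add_siblings_and_parents': False,  # new optional parameter in API v6; on by default
}

request = urllib.request.Request(
    API_URL + '/add_tags/add_tags',
    data=json.dumps(payload).encode('utf-8'),
    headers={
        'Hydrus-Client-API-Access-Key': ACCESS_KEY,
        'Content-Type': 'application/json',
    },
)

# urllib.request.urlopen(request)  # uncomment to send against a live client
print(request.get_full_url())  # http://127.0.0.1:45869/add_tags/add_tags
```

With 'add_siblings_and_parents' left at its default, the client would collapse siblings and pull in parents before committing the tags.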
I also wrote a first version of a manager to deal with bitmaps (which are used to draw various custom things to screen) in a centralised, safer, and more efficient way. Some non-Windows clients should now be a bit more stable, and some heavy-load animations may be just slightly smoother (as discarded bmps are now recycled). Let me know if you run into any trouble with this (or if your memory use spikes unreasonably). I expect I will do a bit more work here next week.

An issue with the yiff.party downloader not fetching certain post attachments (they got 'ignored' status previously) should be fixed. You will have to re-search yiff.party again to get the correct URLs (not just retry any 'ignored' you have atm).

full list

* wrote some OR search help for the 'getting started with tags' help page
* wrote a new multi-reader, single-writer lock object for the client file manager, along with some unit tests for it
* updated the file and thumbnail access and regen and maintenance code to use the new lock. various access is now faster when available and overall safer. there is still work to do here
* adjusted file import to be less aggressive about locking, which should reduce some file/thumbnail access lag during heavy imports
* the thumbnail space estimate in the migrate database dialog is now more adaptive to the new more flexible thumbnail size system and specifies better that it is an estimate
* the client api's /add_tags/add_tags call now collapses siblings and expands parents on an add/pend call. this can be turned off by setting the new optional parameter 'add_siblings_and_parents' to false. the help is updated regarding this and the client api version is incremented to 6
* fixed the client api's /add_tags/add_tags call for the 'hashes' parameter, which was failing to parse, and added an accidentally missing unit test to check this in future
* the client local services (the booru and client api) now have two new options under their 'manage services' panel: 'support CORS', which turns on cross-origin support (which is experimental for now, so defaults to False), and 'logs requests', which controls whether your log will be spammed with request reports (this also defaults to False), which should clear up some 100MB+ log hassle for people using the Hydrus Companion browser add-on
* hydrus services now respond correctly (albeit sparsely) to OPTIONS requests, and if CORS is enabled, to CORS OPTIONS requests. there are unit tests for this that seem to work ok, but I think we'll need to verify it irl
* finished a first version of the bitmap manager to handle all wx bitmap creation and destruction, including recycling mid-steps
* updated all simple wx bitmap creation and destruction calls across the client to use the new bitmap manager, improving stability and saving some CPU
* fixed some incorrect button alignment flags that were causing problems for clients set to assert-check these values
* added a new yiff.party file url class to the defaults that matches a new file attachment format
* updated the 'url' content parser so if a parsed url is in the form 'summary_text url', as some booru source fields sometimes specify, the preceding summary text is removed, cleaning up the resultant url
* silenced an old server connection lost error that was needlessly loud
* silenced the client network engine from additionally log-printing SizeException errors when a downloading file (usually a gif) exceeds file import options rules
* improved misc window destruction code
* updated the supported mime list in the 'getting started with files' help and website index
* misc cleanup

next week

Next week is an 'ongoing long-term job' week. I would like to (finally) add a file search object to the duplicate filter, which will allow you to restrict a series of potential dupes to a certain tag, or only archived files, or whatever else you would like. While I am in that code, I will also see if I can do some easy duplicate system de-jank work overall.
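The 'multi-reader, single-writer lock' mentioned in the changelog above is a classic readers-writer lock: any number of readers may hold it at once, but a writer needs it exclusively. This is a minimal Python sketch of the pattern, not hydrus's actual implementation:

```python
import threading

class ReadWriteLock:
    """Many concurrent readers OR one exclusive writer. Illustrative sketch."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0       # number of readers currently holding the lock
        self._writer = False    # True while a writer holds the lock

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # wake any waiting writer

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers > 0:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()  # wake waiting readers and writers

# two readers may hold the lock simultaneously; a writer must wait for both
lock = ReadWriteLock()
lock.acquire_read()
lock.acquire_read()
lock.release_read()
lock.release_read()
lock.acquire_write()
lock.release_write()
```

Note that this naive form can starve writers under constant read traffic; a version guarding file access during heavy imports would typically add some writer priority.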
The first version of OR search is now complete. If you missed the boat, there is some official help for it now here: https://hydrusnetwork.github.io/hydrus/help/getting_started_tags.html

A thread to discuss what 'big job' I might work on next is here: >>12152. I will catch up on this thread next week and make the poll alongside the v349 release thread next Wednesday.
Before I go into the current version of the program, I'll just repost the delete table thing here too. Now that I know more or less what I would vote for for the next big thing, I'll be paying more attention here than there.

*********

>>12215
Honestly loving the new system; while dupe filtering is slow, and generally I'm getting rid of the 120KB version instead of the 10MB one, it's slower than what I would find optimal. Let me give you an example: https://boards.4chan.org/s/thread/18749087

Generally any image I get from /s/ won't have duplicates, and generally I subscribe to the 3dpd philosophy, but on this board I pick only the threads that interest me or are really attractive. Going through this thread, I would probably keep 1/3 of the images, with most being far too low quality; even using them as art references would be a difficult task. If I could delete them and tag them with 'unattractive' or 'low quality - real' so that's what shows up, I would, and I could likely go through all my threads from real boards and get back quite a bit of space in short order doing this.

As for how I would want it: you may remember a while ago I posted a 'mockup' of what it could look like, an area for inputting a reason, and several quick-input buttons for saved canned common reasons, all of this on the delete image dialog, and all of it completely ignorable in case you don't want to add a reason. If this was an option at the send-to-trash stage, a second chance to add something at the permanent-delete stage (if nothing was added the first time) would be nice. While I love the idea of inputtable text for a delete (and in cases like this, if I delete 1 or 1000 images at once, the one input applies to all), I'm not tied to it; this just facilitates custom reasons to better explain why something is there.
Let's say I have 3 quick-access buttons: 'low quality', 'meme', 'waste of hdd space', and someone decides to just dump gore, which I don't want. I would have the ability to make a custom 'jackass dumped gore' reason and move on, not needing to take up quick-action slots, and it gives further context. Let's say there is a good r34 thread on /b/: if I saw 'low quality' or 'waste of hdd space', I may still be inclined to see what it was, because the rest of the thread gave me some good images, but the added context of 'jackass posted gore' or 'scat' (the common shit posted to threads to bump-limit them without falling into meme) would tell me all I need to know. If you go for a drop-down-and-select approach, I highly suggest having some quick slots in the top 5 or so, because 'low quality' will get used far more often than 'asshole posted scat' or 'stix log shit' or other generic bullshit people will post to hit bump limits.

TLDR:

---------------------------------------------
| send this file to trash?                  |
| do you have a reason?                     |
| {------- generic text box here -------}   |
| [1] [2] [3] [4] [5]                       |
|                                           |
|                               [yes] [no]  |
---------------------------------------------

With 1-5 being quick-adds that paste text to the generic text box. Because it's so tedious to use image editing software to mock something up, here it is in text; it's also in the attached image in case posting formats it and fucks it to hell and back. This is what I consider ideal: 5 quick reasons, with a text box for a custom, more 'fuck these images, seriously' reason.

Honestly, I will take a drop-down with some quick reasons and no 'per image group' special reason, so long as there are a few custom slots for reasons that can cover nearly everything I need; the images I would use a custom reason for getting rid of would just have to fall into something generic. And a BIG THANK YOU for this. If I know that something like this is coming soon, I'm capable of not downloading images for a few releases if it gets to the point of requiring a new HDD. The first steps to getting my db under control are here.

********

Really, whatever method you end up going with, as long as there are 5 or so options it will be more than usable. The one above, with 5 or so quick options that paste to a text box, is just what I consider perfect, allowing for a specific reason if it's warranted, but 5 quick options if it's not. Hell, this method could have many quick options, as the buttons could just be numbered, or a short string of text to tell what it is. I'll probably make a mockup to better show this tonight if I have the time or I'm bored. Till then, back to dupe processing; it is really nice being able to do that now.
Ok, going through some of the downloaders to see if the effects of dupe filtering are showing in a major way yet, and I noticed this. Got this error from this image: https://i.4cdn.org/aco/1555128352927.png

'PngStream' object has no attribute 'chunk_eXIf'… (Copy note to see full error)

Traceback (most recent call last):
File "include\ClientImportFileSeeds.py", line 1246, in WorkOnURL
self.DownloadAndImportRawFile( file_url, file_import_options, network_job_factory, network_job_presentation_context_factory, status_hook )
File "include\ClientImportFileSeeds.py", line 571, in DownloadAndImportRawFile
self.Import( temp_path, file_import_options )
File "include\ClientImportFileSeeds.py", line 790, in Import
( status, hash, note ) = HG.client_controller.client_files_manager.ImportFile( file_import_job )
File "include\ClientCaches.py", line 1132, in ImportFile
file_import_job.GenerateInfo()
File "include\ClientImportFileSeeds.py", line 293, in GenerateInfo
self._thumbnail = HydrusFileHandling.GenerateThumbnailBytes( self._temp_path, bounding_dimensions, mime, percentage_in = percentage_in )
File "include\HydrusFileHandling.py", line 81, in GenerateThumbnailBytes
thumbnail_bytes = GenerateThumbnailBytesFromStaticImagePath( path, bounding_dimensions, mime )
File "include\ClientImageHandling.py", line 295, in GenerateThumbnailBytesFromStaticImagePathCV
numpy_image = GenerateNumpyImage( path, mime )
File "include\ClientImageHandling.py", line 57, in GenerateNumpyImage
numpy_image = GenerateNumPyImageFromPILImage( pil_image )
File "include\ClientImageHandling.py", line 131, in GenerateNumPyImageFromPILImage
s = pil_image.tobytes()
File "site-packages\PIL\Image.py", line 749, in tobytes
File "site-packages\PIL\ImageFile.py", line 252, in load
File "site-packages\PIL\PngImagePlugin.py", line 680, in load_end
File "site-packages\PIL\PngImagePlugin.py", line 140, in call
AttributeError: 'PngStream' object has no attribute 'chunk_eXIf'

And yes, dupe detecting is showing some results, which is great, beyond the few gigs I have gotten back. However, I also see a potential issue. I have brought this up in the past: https://boards.4chan.org/trash/thread/22573284 , the cyoa threads. I honestly like these and think they are fun to go through; however, when one comes up in the dupe detector, I tend to skip it, and seeing a few of the pages that have files removed due to dupe processing, I made the right call. Is it possible for a file removed due to dupe processing to link back to the file that won? If this is a low-effort thing, it would be fantastic: it would effectively allow a thread watcher to present all the files with the best versions of said files shown, if that's how you wanted to display them. But honestly, if this would be a high-effort endeavor, I think you likely have better things to do.
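For context on the 'chunk_eXIf' traceback above: a PNG file is an 8-byte signature followed by a sequence of length/type/data/CRC chunks, and the PIL build in that error simply has no handler for the 'eXIf' chunk type it found in the file. A self-contained sketch of walking a PNG's chunk list (the in-memory stream below is a hand-built stand-in, not a real image):

```python
import io
import struct
import zlib

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    # a PNG chunk is: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
    return struct.pack('>I', len(data)) + ctype + data + struct.pack('>I', zlib.crc32(ctype + data))

def list_chunk_types(stream) -> list:
    assert stream.read(8) == PNG_SIGNATURE, 'not a PNG stream'
    types = []
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        length, ctype = struct.unpack('>I4s', header)
        types.append(ctype.decode('ascii'))
        stream.read(length + 4)  # skip chunk data and its CRC
        if ctype == b'IEND':
            break
    return types

# a minimal fake stream with an eXIf chunk, like the file that broke PIL
demo = io.BytesIO(
    PNG_SIGNATURE
    + make_chunk(b'IHDR', b'\x00' * 13)       # header chunk (dummy contents)
    + make_chunk(b'eXIf', b'Exif\x00\x00')    # the chunk old PIL can't handle
    + make_chunk(b'IEND', b'')                # end-of-image marker
)
print(list_chunk_types(demo))  # ['IHDR', 'eXIf', 'IEND']
```

A parser that dispatches on chunk type, as PIL's PngStream does, raises exactly this kind of AttributeError when it meets a type it has no method for, which is why upgrading the bundled PIL/Pillow is the usual fix.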
When I copy-paste a URL from a booru into Hydrus, how do I make it so tags such as "copyright" are imported into Hydrus as something else, such as "parody"? Also, how do I make it import only certain unnamespaced tags from the booru, not all of them? For example, if a picture on danbooru/gelbooru is tagged "blue eyes", "1girl", "fox ears", I only want to import "blue eyes" automatically. Additionally, is there an option so hydrus imports the "blue eyes" tag as "azur eyes" instead?
>>12292
> I only want to import the "blue eyes" automatically.
Not possible as far as I know.
> Additionally is there an option so hydrus is able to import the "blue eyes" tag to instead "azur eyes"?
There is a tag-sibling-like feature for this; I'm not sure what it's called anymore, but it does exist.
Why can database maintenance run while Hydrus is in the middle of downloading stuff? I had a simple downloader running, and in the middle of it Hydrus wanted to process PTR updates. For some reason this is really slow while a downloader is running; it was stuck at "initializing disk cache" for several minutes, so I ended the process in task manager. Maybe I need to change my maintenance settings (I had it at "run if no browsing activity in the last 5 minutes"), but I don't think maintenance should be able to run while a downloader is busy; that should count as the client not being idle.
(1.38 MB 1918x1050 1555712382.webm)

>>12292 >>12297 Whitelist/Blacklist tags here.
>>12303
Thanks. I also want to be able to take tags such as "copyright" and instead tag them as "parody". Danbooru and gelbooru have copyright tags, but I tag my images as parody. How can I make it change? Also, I want to change "creator" to "artist".
(23.51 KB 553x376 Capture.png)

>>12304
What I mean is: in this picture I have imported the URL, but hydrus tags the copyrights as series, and the artists as creator. How do I make hydrus change this?
>>12303
Also, I have tried this, but when I use manage tag siblings on the fetched whitelisted tags, whatever I enter in manage tag siblings won't be fetched by the whitelist for some reason. For example, I whitelist "animal_ears", then go to manage tag siblings and set up "animal_ears" to appear as "hearing". When I enter the URL into hydrus, it will no longer fetch "animal_ears" unless I remove it from the siblings…
>>12305 >How do I make it so hydrus changes this? A small script and the client API.
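A sketch of the core of such a script (hypothetical code: the namespace mapping is the one asked about above, and everything else is invented for illustration). The remapped tags would then be pushed back to the client with the Client API's /add_tags/add_tags call:

```python
# hypothetical namespace remapping for booru-style 'namespace:subtag' tags
NAMESPACE_MAP = {
    'creator': 'artist',
    'copyright': 'parody',
}

def remap_namespaces(tags):
    """Return a copy of the tag list with mapped namespaces rewritten."""
    out = []
    for tag in tags:
        namespace, sep, subtag = tag.partition(':')
        if sep and namespace in NAMESPACE_MAP:
            out.append(NAMESPACE_MAP[namespace] + ':' + subtag)
        else:
            out.append(tag)  # unnamespaced or unmapped tags pass through
    return out

print(remap_namespaces(['creator:someone', 'copyright:some series', '1girl']))
# ['artist:someone', 'parody:some series', '1girl']
```

The loop in the middle is the whole trick: split each tag on its first colon, swap the namespace if it is in the map, and leave everything else alone.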
>>12290 Thanks, I responded to you in the poll thread. Hope to have this in a few weeks.
>>12291 Yeah, that png looks broken. I can't load it in ACDSee or Firefox. The dupe system isn't yet clever enough to load dupes in real time in place of bad/deleted file downloads. I like the idea, but there would have to be some sort of indication that the swap had happened, maybe a new import status, idk; I think it would be easy to add confusion here. Please vote on the dupe work poll item to move this sort of thing along.
>>12303 Thanks m8, saved that webm. >>12292 >>12304 >>12305 >>12306 Unfortunately, the code powering tag siblings is a hellish ugly prototype that has been extended too often, and it can't handle complicated edge cases like this. Please vote on "Improve tag siblings/parents and tag 'censorship'" in the coming poll to improve how this whole system works and allow for things like namespace (creator: -> artist: etc…) siblings.
>>12301 Thank you for this report. I do not think the multiple jobs would have caused the disk cache slowdown. In the code, the downloader and other delayed jobs will wait indefinitely, using basically zero CPU and no HDD, while the PTR update stuff does its work. Was there maybe some other program doing heavy disk work, like a defragger? There is also the off-chance that the db was doing some other maintenance work and the disk cache was waiting on that; some of the reporting around this stuff is bad. I will see if I can clean up the status text reports here if they are unclear. In any case, the maintenance routine does not check for download work atm. As long as you are idle, it'll barge in on any other work. I appreciate, and think it reasonable, that you would like to control this, so I will make a job to add a new checkbox to the maintenance options, something like 'if a file not imported within x minutes', to add to the idle tests.
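The proposed extra idle test could look something like this (a hypothetical sketch; the function name, parameters, and thresholds are invented, and hydrus's real idle logic checks more conditions than this):

```python
import time

def client_is_idle(last_browse_time, last_file_import_time,
                   now=None, browse_threshold=300, import_threshold=300):
    """Idle only if BOTH the browsing check and the proposed file-import
    check have been quiet for their thresholds (all times in epoch seconds)."""
    if now is None:
        now = time.time()
    return (now - last_browse_time >= browse_threshold
            and now - last_file_import_time >= import_threshold)

# a file imported 100s ago keeps the client 'busy',
# even if browsing stopped long ago
print(client_is_idle(last_browse_time=0, last_file_import_time=900, now=1000))  # False
```

The point of ANDing the two checks is exactly the request in the report above: a running downloader (recent file imports) would veto idle maintenance even when the user has stopped browsing.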
>>12329
Thanks, good to know this is what you were already thinking. I just got made aware that one of my older drives, a game drive, has bad sectors. The current files are all ok, but that last ~100GB is a minefield that is nearly unusable due to the errors. I have to replace the HDD, but it's not a priority; the sooner this gets into the program, the better off I'll be. Because I know it will be in the program, I'm probably going to cut back on downloading for a bit. One HDD is within budget, but two 4TB+ drives are not a budgeted option.
>>12332
> Was there maybe some other program using heavy disk work, like a defragger?
No. My database is on an M.2 NVMe SSD, so usually stuff like disk cache initialization takes two seconds and PTR processing takes a minute at most. What happened here is that both the downloader and the disk cache initialization froze for several minutes; I don't know why, and I didn't notice anything else. Either way, I solved it by turning off idle maintenance, so I'll just have it run on exit, since it's really fast anyway. That new maintenance option would still be a good idea to add, though.
>>12305 You can change this under network > downloader definitions > manage parsers. First, I'd suggest cloning the existing parser before tampering with it. Look for the Danbooru page parser, go to "content parsers", and you'll see the namespaces hydrus uses, such as "creator" and "series". Just change those namespaces to "artist" and "copyright", and it should add the data found on danbooru under those namespaces.
>>12339 Tried it, and what happens is it doesn't fetch anything with the edited parser at all until I change it back to normal.
(31.28 KB 619x239 Capture.jpg)

>>12340 Here I changed the namespace from "creator" to "artist". When I enter the URL, Hydrus no longer fetches the artist at all.
>>12341 Hmm, I'm not sure why that's happening since it should be the same part of the html getting parsed.

