/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.

Version 347 hydrus_dev 04/10/2019 (Wed) 23:13:09 Id: 0028ea No. 12153
https://www.youtube.com/watch?v=tTHeQu3S98E

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v347/Hydrus.Network.347.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v347/Hydrus.Network.347.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v347/Hydrus.Network.347.-.OS.X.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v347/Hydrus.Network.347.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v347.tar.gz

I had a good week. OR search is essentially finished, and I cleaned up and fixed a variety of other things.

A new 'big thing to work on next' poll will be going up soon. If you are interested, please check out the discussion thread here: >>12152

or search

As previously discussed, I have moved OR predicate construction to the standard dropdown list below the tag input. It now appears as the top result, where you can hit enter on it to submit it as-is. Also, while under construction, a 'cancel' button and a 'rewind' button (to remove the most recent OR term added) will appear on the same panel. You can also hit Esc to cancel a currently under-construction OR predicate.

As a reminder, hold shift when you enter a tag to start an OR chain. Further shift+enter events will append new tags to the chain, and a bare enter will cap it off. I will write out some proper help for this.

OR search is basically finished as a v1.0 now. I still have some last tidy-up jobs to do, but I am overall happy with it.

the rest

I built on the past weeks' thumbnail experiments and have written a two-stage thumbnail rendering system that gets thumbs on screen faster (even if they are the wrong size, so they will look fuzzy) and then regenerates any needed clearer versions in the background, replacing them in place on screen over the following seconds. It is much smoother and faster than before, and it is pretty neat to see a fuzzy thumb suddenly fade into a clearer version, but I still have a little work to do here.
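Roughly, the two-stage idea looks like the following Python sketch (my illustration with made-up names, not the client's actual code):

import threading
from PIL import Image # assumes Pillow is available

class TwoStageThumb:

    def __init__( self, source_path, target_size, on_update ):
        self._source_path = source_path # the full-size original file
        self._target_size = target_size # (width, height) wanted on screen
        self._on_update = on_update # callback that repaints the thumb

    def GetFast( self, cached_thumb_path ):
        # stage 1: stretch the existing (possibly too-small) thumb immediately
        fuzzy = Image.open( cached_thumb_path ).resize( self._target_size, Image.BILINEAR )
        # stage 2 runs in the background while the fuzzy thumb is on screen
        threading.Thread( target = self._Regen, daemon = True ).start()
        return fuzzy

    def _Regen( self ):
        # regenerate a proper thumb from the original and swap it in when ready
        clean = Image.open( self._source_path )
        clean.thumbnail( self._target_size, Image.LANCZOS )
        self._on_update( clean )

The fast path never touches the original file, so something (blurry) is on screen almost immediately; the real decode happens off the UI thread and is swapped in when done.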
Now, when you trash a file, a context-appropriate 'deletion reason', such as 'Deleted from Media Page.', will be saved. These statements are mostly trivial, but duplicate filter actions will specify a bit more about the duplicate processing action type. This text will be recovered in an import status window for 'deleted' status results, just as a help if you want to investigate closer (e.g. perhaps you are not sure why a particular file failed to import, but then you see the reason is that you already decided you have a better duplicate version of it). Any files deleted before this system will just give "Unknown deletion reason."

Adding OR search introduced a couple of search flaws: bare system:rating searches were delivering since-deleted files, and some searches combining OR predicates with regular tags were delivering subsets of the real results. I believe I have fixed both of these, and many previously slow OR searches should now run quite a bit faster, especially when accompanied by non-OR predicates.

I gave export folders a pass and fixed several bugs and inefficiencies, particularly for 'synchronised' folders that produce subdirectories from their filenames, which were often deleting those subdirectories. Also, an export folder or manual export event that attempts to produce a file path above the base export directory (e.g. if the generated filename begins with ..\ or ../) will now fail with some error text to explain what happened. If you use export folders a lot, particularly 'synchronised' ones, please let me know if you still get any unusual behaviour.
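That path check is a standard containment test. A minimal sketch of the idea (my illustration, not the client's code):

import os

def GetSafeExportPath( base_dir, generated_filename ):
    base = os.path.realpath( base_dir )
    dest = os.path.realpath( os.path.join( base, generated_filename ) )
    # commonpath gives the deepest shared ancestor; if that is not the base
    # directory itself, the filename escaped it (via ../, ..\, an absolute path, etc.)
    if os.path.commonpath( [ base, dest ] ) != base:
        raise ValueError( 'The generated path "{}" is above the base export directory!'.format( dest ) )
    return dest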
full list

or search:
- under construction OR predicates now present at the top of the regular tag results list, prepended with 'OR: ', and skipping default selection
- this new OR line is enter-able, which will submit it as-is, rather than adding new preds
- hitting escape on a 'search' tag input box that is empty but has an under-construction OR predicate will cancel the OR pred
- hitting escape on a 'search' tag input box otherwise should more reliably kill its focus when the dropdown is a float window
- improved OR search efficiency significantly with dynamic OR search triggering based on other search predicates. OR searches including negated '-tag' components should be massively faster when paired with non-OR tag or file search predicates
- I believe I fixed a search issue that would sometimes return insufficient results when OR preds were mixed with certain other combinations of tags
- improved reliability of some thumbnail refresh calls
- cleaned up a bunch of OR handling ui code

the rest:
- after previous weeks' experiments, wrote a new double-layer thumbnail loading system: too-small thumbs will now quickly scale up fuzzily straight to screen, and then over the coming seconds the nice regenerated full-size thumb will be made and drawn in place as ready. it presents much faster and looks better, but there is some cleanup to do here that I will tackle next week
- all local file trashing events now record a context-appropriate deletion statement such as "Deleted from Media Viewer." this value is recovered in 'deleted' import status 'notes'. you will mostly see 'Unknown deletion reason.' for files deleted before this new system, but it will populate with appropriate info over time
- fixed a search optimisation that was not cross-referencing with file domain, meaning for instance that bare system:rating calls were returning since-deleted files
- upnp management window now uses the new listctrl
- cleaned up some old custom page-naming code
- added a 'data' debug call to clear out all cached thumbnails and force an instant ui thumb reload
- fixed the trash bmp misalignment, ha ha
- removed the e-hentai login script from the defaults, since this testing script is not appropriate for new users
- dejanked some media viewer video transitions by cleaning up animation bar rendering and smoothing out video buffer initialisation
- cleaned out some surplus subprocess wait calls that were hanging some systems on various 'open externally' calls
- fixed multiple syncing problems with 'synchronise' export folders that produce files with subdirectories. subdirectory structures should now be synced correctly and empty folders deleted
- export folders that collapse multiple file results to the same duplicated name should, after the next run, do less overwriting to this same name
- if an export folder or the regular export dialog makes a file destination path that is above the chosen directory (e.g. if the path starts with ../ or ..\), the export job will error out with an explanation
- big manual file exports _should_ be politer to the ui and cause fewer hangs
- doing page tab drag and drops may have less post-drop ui jank on linux, continued feedback would be appreciated
- moved 'reason' handling for all content updates to its own area, which neatens many content update data handling issues
- fixed petitioning a tag via a shortcut, which had bad reason handling
- fixed an issue with committing pending ipfs items that was overchecking service permissions
- fixed some remaining bad wx code in the unit tests
- misc file status reporting cleanup

next week

I'll tidy up some last OR search stuff and clear out some small jobs. I would like to reduce some lag when the client file manager has a lot of competing access (e.g. when lots of new thumbnails need to be generated), and I would also like to improve some Linux stability with some unified bitmap management.
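The deletion-reason bookkeeping above boils down to saving a string against the file's hash at trash time and reading it back at import time. A hypothetical sketch (the real client persists this in its database):

deletion_reasons = {} # file hash -> reason string

def TrashFile( file_hash, context ):
    # a context-appropriate statement, recorded at delete time
    deletion_reasons[ file_hash ] = 'Deleted from {}.'.format( context )

def GetImportStatusNote( file_hash ):
    # recovered later for 'deleted' status results in import windows
    return deletion_reasons.get( file_hash, 'Unknown deletion reason.' )

TrashFile( 'abc123', 'Media Page' )
print( GetImportStatusNote( 'abc123' ) ) # Deleted from Media Page.
print( GetImportStatusNote( 'def456' ) ) # Unknown deletion reason.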
Ok, I'm going through duplicates now and I've got a problem: better/worse pairs are not registering numbers, but alternate pairs are. Not sure about same quality, as I have just been picking one of the two whenever a pair pops up. Also, a general 'copy all tags' and 'copy all ratings' would be helpful, especially with ratings, as the deeper I go, the more often I have to edit that setting to add more ratings to take into account.
Just found this funny, and very useful. Some asshole a number of months ago decided to spam shit images to a board, and my shotgun approach to downloading has fucked me in this case, something I need to manually deal with. They modified the images so much that I think the dup finder won't pick most of them up, but this pair was seen as duplicates: the shit version was 888KB, the good one was ~400KB. Really useful in this case.
(748.25 KB 2517x2028 client_2019-04-10_21-02-19.png)

>>12163 so… 455K potential pairs to go through, and this is maybe the 15th time I've seen this one image, so I had to know, just had to know. Are you fucking kidding me? There are 11 in the trash now that aren't accounted for, and I'm going to get this 137 more times. I have to ask: are my odds just that bad that I got that many of the image in a row? Because out of 455K potential duplicates, getting 11 of the near-exact same image so far is kind of… I don't know what to call it.
Just got this error:

RuntimeError
Failed to gain raw access to bitmap data.

File "include\ClientGUICanvas.py", line 5722, in EventPaint
self._Redraw( dc )
File "include\ClientGUICanvas.py", line 5680, in _Redraw
wx_bitmap = self._image_renderer.GetWXBitmap( self._canvas_bmp.GetSize() )
File "include\ClientRendering.py", line 145, in GetWXBitmap
return wx.Bitmap.FromBufferRGBA( wx_width, wx_height, wx_data )

Not exactly sure what that one is about.
>>12164 my sides
>>12182 Yea, gotta love the shotgun approach. That said, I want to say the dup filter is less random than I thought, as I think I have dealt with every one of them now.
How would I go about making pictures a child and parent of each other? For example, I've got a set of images: image 1, image 2, image 3. I want to order them like this since they go together. How would I set image 1 to be a child of image 2, and image 2 to image 3 while linking it to image 1? I want them to be a set of images that goes together, not individual ones.
>>12184 That functionality does not exist yet. I think if you want this, you can vote for
>Improve duplicate db storage and filter workflow (need this first before alternate files support)
in the upcoming "next big thing" poll.
>>12153
>Now, when you trash a file, a context-appropriate 'deletion reason', such as 'Deleted from Media Page.' will be saved. These statements are mostly trivial, but duplicate filter actions will specify a bit more about the duplicate processing action type. This text will be recovered in an import status window for 'deleted' status results, just as a help if you want to investigate closer (e.g. perhaps you are not sure why a particular file failed to import, but then you see the reason is you already decided you have a better duplicate version of it). Any files deleted before this system will just give "Unknown deletion reason."
Is one of the deletion reasons 'removed via duplicate processing rules'?
>>12186 Alright, thanks. Is there a way to make hydrus show pictures that don't have a certain tag? For example, picture 1 has the tag "character: person 1" and pictures 2 and 3 have no such tag. How do I filter out pictures with a given tag so it only shows pictures without that specific tag?
>>12193 oh nvm, found out using "-" will exclude a tag for me
>>12194 That is good. If by chance you want to see the files with the fewest tags, you can do a system:everything with system:limit=100 and sort by number of tags.
>>12164 ICARUS HAS FOUND YOU!!!!! >ICARUS HAS FOUND YOU!!!!! >>ICARUS HAS FOUND YOU!!!!! >>>ICARUS HAS FOUND YOU!!!!! >>>>ICARUS HAS FOUND YOU!!!!! >>>>>ICARUS HAS FOUND YOU!!!!! >>>>>>ICARUS HAS FOUND YOU!!!!! >>>>>>>ICARUS HAS FOUND YOU!!!!! >>>>>>>>ICARUS HAS FOUND YOU!!!!! >>>>>>>>>ICARUS HAS FOUND YOU!!!!! >>>>>>>>>RUN WHILE YOU CAN!!!!!!!!!!! >>>>>>>>RUN WHILE YOU CAN!!!!!!!!!!! >>>>>>>RUN WHILE YOU CAN!!!!!!!!!!! >>>>>>RUN WHILE YOU CAN!!!!!!!!!!! >>>>>RUN WHILE YOU CAN!!!!!!!!!!! >>>>RUN WHILE YOU CAN!!!!!!!!!!! >>>RUN WHILE YOU CAN!!!!!!!!!!! >>RUN WHILE YOU CAN!!!!!!!!!!! >RUN WHILE YOU CAN!!!!!!!!!!! RUN WHILE YOU CAN!!!!!!!!!!!
>>12159 I assume the numbers for better/worse that you are not seeing increment are on the little panel on the duplicate filter page, right? They are staying at 0? I think I should redesign this or something, I am not sure, but that is actually correct behaviour with default settings. With default settings, setting a better/worse pair trashes the worse of the two. This removes that file from the 'my files' local file domain, which that panel defaults to looking at. And since the two sides of the pair are then no longer both in the domain, their better/worse relationship doesn't count for it. If the worse files are still in your trash and you change the file domain on that page to 'all local files', it will include files in the trash, and you will see positive counts.

I agree on the copy all. I made the initial version for maximum customisability, but tbh most use is far simpler.
>>12163 Yeah, the current phash similar-looking search technique is, except in extreme cases, immune to stretching or changes in HSL. I am overall really pleased with it. It can also deal with slight clipping, like 2-5% border clips and so on. It can't do flips or rotations, but I expect I will add that in the next iteration on this.
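For anyone curious, the classic DCT-based phash family works roughly like this Python sketch (my own illustration of the general technique, not hydrus's actual implementation):

import numpy as np
from PIL import Image
from scipy.fft import dct

def PHash( path ):
    # squash to a fixed small greyscale grid, so stretches and colour
    # fiddling mostly wash out before the hash is taken
    img = Image.open( path ).convert( 'L' ).resize( ( 32, 32 ), Image.LANCZOS )
    pixels = np.asarray( img, dtype = np.float64 )
    freq = dct( dct( pixels, axis = 0 ), axis = 1 ) # 2D DCT
    low = freq[ :8, :8 ] # keep the low-frequency corner
    return low > np.median( low ) # 64 bools = a 64-bit hash

def HammingDistance( a, b ):
    return int( np.count_nonzero( a != b ) ) # small distance = likely dupes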
>>12164 I haven't seen an exact dupe situation this bad before, but I guess these images are of such extreme banter value that some anon out there was adjusting a single byte over and over (to change the hash) so he could keep posting them to 4chan, maybe? Or otherwise keep desu-spamming them to the same thread, which had a similar-hash dupe check. Or maybe, ha ha, the different versions are an intel op that wanted to trace where certain identifier keys placed in the different versions ended up on twitter and other big sites, to better map out which twitter users go to which places.
>>12164 >>12205 To continue, the current dupe system is unfortunately very bad at handling large groups like this. Before I can do any real extensions on the dupe or alternate system, I want to rewrite the current pair-based storage to a cleverer and more compact group-based one. If you try to action that big mess in one go, it will warn you about something like 120,000 pairs. I forget if it is n-squared or n-factorial, but it is definitely n-shit for large groups. Don't try to action it that way, or you'll just clog up your db for now. No good solution atm.
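(For scale: pair-based storage needs n(n-1)/2 rows for a group of n files, so a single group of ~490 interrelated files is already 490 × 489 / 2 = 119,805 pairs, roughly the 120,000 warned about above.)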
>>12167 This looks like a variant of an out-of-memory error. Were you zooming way in on a big file during dupe comparisons? Unfortunately, my image viewer is not good at big zooms atm, and if you go to something like 2000% on a 4,000x3,000px png, it'll hit a limit somewhere trying to make a bigass bmp for the whole thing. You may have noticed the client getting laggy during big zooms like this. A future iteration will have a tiled rendering system that only keeps the on-screen portion in memory, but for now it is still basic.
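(For scale: 2000% zoom on a 4,000x3,000px image means an 80,000x60,000px bitmap, and at 4 bytes per RGBA pixel that is 80,000 × 60,000 × 4 ≈ 19.2GB for a single frame, which is why a whole-image renderer hits a limit long before a tiled one would.)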
>>12164 Oh yeah, your odds of getting the same image in a row are fairly high, as my current 'get some dupes to process' job batches decisions together into various similar types. You'll likely get several pairs in a row that feature the same file as A or B.
>>12190 Yeah, it'll say the action as well. In the filter it will be something like: "Deleted in Duplicate Filter (better/worse): worse file deleted." If you do a mass action from thumbnail right-click, it says this variant: "Deleted from duplicate action on Media Page (action): one/both deleted."
Just did another cleanup, as hydrus was eating 6GB on launch and dups were starting to lag the program too much. It's now starting sub-1GB with next to nothing changed since the last time I did this, which got it down to ~2.3GB. I still honestly see no reason for watchers to eat that much space once they 404, but good to see the program is getting even better with RAM management.

>>12205 As far as the images themselves go, yea, they are likely changing 1 pixel and re-saving. My shotgun approach to downloading has managed to sweep up seemingly every one ever made, and there were a few times where I knew the image was never getting posted again and I just straight deleted the shitposts. But this time… I thought I got rid of all of them, but apparently it's somehow finding more.

>>12208 After a while I thought this too, as I was getting somewhere around 455K images; it was really odd that there would be 3 or 4 dups of the same image that would come in.
>>12204 I have to ask, is it possible to have a custom dup finding thing? Sadly I went through these a number of weeks ago, but I was checking an import archive, and apparently what I used to download corrupted images. It's more or less the standard 'at some point, everything below this line in the image is no longer loaded and it's solid grey lines' kind of corruption. With images like this, would it be possible to have a 'corrupt dup' setting where you either point out where the image is good, or where the image is bad (a fill tool may deal with bad better than good, making it easier to do), and have it discount that section from a dup search? My logic here is that if I have a large enough archive, or my interests see me downloading similar images regularly, there is a good chance I already have the uncorrupted version somewhere else.

Just to head this off: these are images that have gone from a 120GB Maxtor to a 1.5TB Seagate, survived 3 HDD failures there, moved to a 4TB, then to an 8TB HDD, and finally to the current 3TB image archive drive. Odds are one of the drives fucked with the files in some way, or it was my horrendous method of acquisition all those years ago. So the issue isn't my HDD corrupting images now; it's more that I have corrupt images that may have too much corruption to be dup-found.
>>12239 No, I am afraid sectioning off good and bad areas of images is way too complicated for my current system to deal with. I encourage you to delete all corrupt files.

