/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.

(4.11 KB 300x100 simplebanner.png)

Hydrus Network General #4 Anonymous Board volunteer 04/16/2022 (Sat) 17:14:57 No. 17601
This is a thread for releases, bug reports, and other discussion for the hydrus network software. The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS.

I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST. Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ .

If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
https://www.youtube.com/watch?v=nShSEUBKe3o

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Linux.-.Executable.tar.gz

I had a great week. Lots of different small jobs done.

notes and hover windows

I'm happy with last week's work making notes show in media viewers, but I introduced some little bugs while rewriting hover windows. I have now fixed the bad text colour behind the top hover, the problem where clicking on tags or greyspace was propagating up to the archive/delete and duplicate filters, the bad hover panel colour on non-default stylesheets, and some note window position and size issues.

Also, for notes, you can now right-click them to collapse them in the hover window. Right-click again on the name to expand again. This is a test, really, just to see if it helps navigating files with many long notes. Double-clicking on the note tab in the edit dialog lets you rename, and a checkbox under the new options->notes now lets you choose whether the text caret starts at the beginning or end of the document when editing.

Furthermore, I have updated all the icon buttons in all the hovers to no longer take focus when you click on them. They were previously stealing arrow key and space presses after a click (to do button-to-button form navigation), which meant you couldn't click on, say, a duplicate filter action button and then go back to arrow keys to navigate. Now you should be able to mix clicks and arrow keys without trickery. If this affects you, let me know how it goes!

other highlights

If you didn't like the recent 'ctrl- and shift-clicks no longer show files in the preview viewer' change, check out the new checkboxes under options->gui pages. You can make either click type focus for all files again, or just files with no duration--if you don't want noisy videos being annoying while you ctrl-click.

The 'advanced mode' autocomplete dropdown now has two 'OR' buttons. The left one opens a new empty OR edit dialog, the right one opens the advanced text parsing input as before.

full list

- fixes and improvements after last week's hover and note work:
- fixed the text colour behind the top middle hover window
- stopped clicks on the taglist and hover greyspace being duplicated up to the main canvas (this affected the archive/delete and duplicate filter shortcuts)
- fixed the background colour of the hover windows when using non-default stylesheets
- fixed an issue where the notes hover window--after having shown some notes--could lurk in the top-left corner when it should have been hidden completely
- cleaned up some old focus test logic that was used when hovers were separate windows
- rewrote how each note panel in the new hover is stored. a bunch of sizing and event handling code is less hacked
- significantly improved the accuracy of the 'how high should the note window be?' calculation, so notes shouldn't spill over so much or have a bunch of greyspace below
- right- or middle-clicking a note now hides its text. repeat on its name to restore. this should persist through an edit, although it won't be reflected in the background atm. let's see how it works as a simple way to quickly browse a whole stack of big notes
- a new 'notes' option panel lets you choose if you want the text caret to start at the beginning or end of the document when editing
- you can now double-click a note tab in 'edit notes' to rename the note. some styles may let you double-click in note greyspace to create a new note, but not all will handle this (yet)
- as an experiment, all the buttons on the media viewer hover windows now do not take focus when you click them. this should let you, for instance, click a duplicate filter processing button and then use the arrow keys and space to continue to navigate. previously, clicking a button would focus it, and navigation keys would be intercepted to navigate the 'form' of the buttons on the hover window. you can still focus buttons with tab. if this affects you, let me know how this goes!
- .
- misc:
- added checkboxes to _options->gui pages_ to control whether ctrl- and shift- selects will highlight media in the preview viewer. you can choose to only do it for files with no duration if you prefer
- the 'advanced mode' tag autocomplete dropdown now has 'OR' and 'OR*' buttons. the former opens a new empty OR search predicate in the edit dialog, the latter opens the advanced text parser as before
- the edit OR predicate panel now starts wider and with the text box having focus
- hydrus is now more careful about deciding whether to make a png or a jpeg thumbnail. now, only thumbnails that have an alpha channel with interesting data in it are saved to png. everything else is jpeg
- when uploading to a repository, the client will now slow down or speed up depending on how fast things are going. previously it would work on 100 mappings at a time with a forced 0.1s wait, now it can vary between 1-1,000 weight
- just to be clean, the current files line on the file history chart now initialises at 0 on your first file import time
- fixed a bug in the 'if file is missing, remove record' file maintenance job. if none of the files yet scanned had any urls, it could error out since the 'missing and invalid files' directory was yet to be created
- linux users who seem to have mpv support yet are set to use the native viewer will get a one-time popup note on update this week, just to let them know that mpv is stable on linux now and how to give it a go
- the macOS App now spits out any mpv import errors when you hit _help->about_, albeit with some different text around it
- I maybe fixed the 'hold shift to not follow a dragged page' tech for some users for whom it did not work, but maybe not
- thanks to a user, the new website now has a darkmode-compatible hydrus favicon
- all file import options now expose their new 'destination locations' object in a new button in the UI. you can only set one destination for now ('my files', obviously), but when we have multiple local file services, you will be able to set other/multiple destinations here. if you set 'nothing', the dialog will moan at you and stop you from ok-ing it.
- I have updated all import queues and other importing objects in the program to pause their file work with appropriate error messages if their file import options ever has a 'nothing' destination (this could potentially happen in future after a service deletion). there are multiple layers of checks here, including at the final database level
- misc code cleanup
- .
- client api:
- added a 'create_new_file_ids' parameter to the 'file_metadata' call. this governs whether the client should make a new database entry and file_id when you ask about hashes it has never seen before. it defaults to false, which is a change on previous behaviour
- added help talking about this
- added a unit test to test this
- added archive timestamp and hash hex sort enum definitions to the 'search_files' client api help
- client api version is now 31

next week

Next week is cleanup. Nothing too exciting, but I'd like to break the database code up a bit more.
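(An aside on the png-vs-jpeg thumbnail rule in the list above: the 'interesting alpha' test can be as simple as checking the alpha channel's extrema. A toy sketch with Pillow, illustrative only--this is not hydrus's actual code:)

from PIL import Image

def thumbnail_should_be_png(path):
    with Image.open(path) as im:
        if im.mode not in ('RGBA', 'LA', 'P'):
            return False  # no alpha channel at all, so jpeg is fine
        rgba = im.convert('RGBA')
    lowest_alpha, _highest_alpha = rgba.getchannel('A').getextrema()
    # an alpha channel that is fully opaque everywhere carries no information
    return lowest_alpha < 255

print(thumbnail_should_be_png('thumb.png'))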
Is there a way to set the media viewer to use integer scaling (I think that's what it's called) rather than fitting the view to the window, so that hydrus chooses the highest zoom where all pixels are the same size and the whole image is still visible? My understanding is that nearest neighbor is a lossless scaling algorithm when the rendered view size is a multiple of the original; otherwise you get a bunch of jagged edges from the pixels being duplicated unevenly. It looks like Hydrus only has options to use "normal zooms" (what you set manually in the options? I'm confused by this), always choosing 100% zoom, or scaling to canvas size regardless of whether that means a weird zoom level (like 181.79%) that causes nearest-neighbor to create jagged edges.
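(For what it's worth, the rule being asked for here--the largest whole-number zoom where the image still fits--is a one-liner. A quick sketch, not hydrus code:)

def largest_integer_zoom(image_w, image_h, canvas_w, canvas_h):
    # integer division finds the biggest whole-number scale on each axis
    zoom = min(canvas_w // image_w, canvas_h // image_h)
    return max(zoom, 1)  # below 1x you would need fractional zoom anyway

print(largest_integer_zoom(500, 300, 1920, 1080))  # 3, i.e. 300%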
When I deleted a file in Hydrus, how sure can I be that it is COMPLETELY gone? Are there any remnants that are left behind?
>>17604 yeah all the metadata for the file (tags and urls and such) are still there. There isn't currently a way to remove that stuff.
>>17603 Yeah, under options->media, in the filetype handling list, the edit dialog has 'only permit half and double zooms'. That locks you to 50%, 100%, 200%, 400%, etc. It works ok for static gifs and some pngs, if you have a ton of pixel art, but I have never really liked it myself. Set the 'scale to the canvas size' option to 'scale to the largest regular zoom that fits'--I think that'll work with the 50/100/200/400 too. Let me know if it doesn't.

>>17604 >>17605 Once the file is out of your trash, it will be sent to your OS's recycle bin, unless you have set in options->files and trash to permanently delete instead. Its thumbnail is permanently deleted. In terms of the file itself, it is completely gone from hydrus, and you are then left with the normal issues of deleting files permanently from a disk. If you really need to remove traces of it from the drive, you'll need a special program that repeatedly shreds your empty disk sectors.

In terms of metadata, hydrus keeps all other metadata it knows about the file: its hash (basically its name), its resolution, filesize, a perceptual hash that summarises how it looked, tags it has, ratings you gave it, URLs it knows the file is at, and when it was deleted. It may have had some of this information before it was imported (e.g. its hash and tags on the PTR) if you sync with the public tag repository. Someone who accessed your database and knew how hydrus worked would probably be able to reconstruct that you once imported this file. There are no simple ways to tell the client 'forget everything you ever knew about this file' yet. Hydrus keeps metadata because that is useful in many situations. Deletion records, for instance, help the downloader know not to re-import something you previously deleted. That said, I am working on a system that will be able to purge file knowledge on command, and other related database-wide cleanup of now-useless definition records, but it will take time to complete. There are hundreds of tables in the database that may refer to certain definitions.

If you are concerned about your privacy (and everyone should be!), I strongly recommend putting your hydrus database inside an encrypted container, like with veracrypt or ciphershed or similar software. If you are new to the topic, do some searching around on how it works and try some experiments.

If you are very desperate to hide that you once had a file, I can show you a basic hack to obscure it using SQLite. Basically, if you know the file's hash, you go into your install_dir/db folder, run the sqlite3 executable, and then do this: (MAKE A BACKUP FIRST IN CASE THIS GOES WRONG)

.open client.master.db
update hashes set hash = x'0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef' where hash = x'06b7e099fde058f96e5575f2ecbcf53feeb036aeb0f86a99a6daf8f4ba70b799';
.exit

That first hash, "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", should be 64 characters of random hex. The second should be the hash of the file you want to obscure. This isn't perfect, but it is a good method if you are desperate.
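(If anyone wants an easy way to produce those 64 random hex characters, the Python standard library will do it; a quick sketch:)

import secrets
print(secrets.token_hex(32))  # 32 random bytes = 64 hex characters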
I just updated to the latest version, and there seems to be a serious (well, seriously annoying, but not dangerous) bug where frames/panels register mouse clicks as being higher up when you scroll down, as if you didn't scroll down. It's happening with the main tag search box drop down menu, and also in the tag edit window where tags are displayed and you can click on them to select them. I'm on Linux.
>>17607 Sorry, yeah, I messed something up last week doing some other code cleaning. I will fix it for next week and add a test to make sure it doesn't happen again. Sorry for the trouble. I guess I don't scroll and click much when I dev or use the client IRL.
>>17607
>on Linux
I confirm that.
>>17607 I've got this problem on windows as well. Also, am I the only one experiencing extremely slow PTR uploads? Now instead of uploading 100 every 0.1 seconds, it is more like 1-4 every 0.1s
>>17610 i'm also getting this error when uploading to the PTR

v481, win32, frozen
StreamTimeoutException
Connection successful, but reading response timed out!
Traceback (most recent call last):
  File "urllib3\connectionpool.py", line 426, in _make_request
  File "<string>", line 3, in raise_from
  File "urllib3\connectionpool.py", line 421, in _make_request
  File "http\client.py", line 1344, in getresponse
  File "http\client.py", line 307, in begin
  File "http\client.py", line 268, in _read_status
  File "socket.py", line 669, in readinto
  File "urllib3\contrib\pyopenssl.py", line 326, in recv_into
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "requests\adapters.py", line 439, in send
  File "urllib3\connectionpool.py", line 726, in urlopen
  File "urllib3\util\retry.py", line 410, in increment
  File "urllib3\packages\six.py", line 735, in reraise
  File "urllib3\connectionpool.py", line 670, in urlopen
  File "urllib3\connectionpool.py", line 428, in _make_request
  File "urllib3\connectionpool.py", line 335, in _raise_timeout
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='ptr.hydrus.network', port=45871): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1460, in Start
    response = self._SendRequestAndGetResponse()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 2036, in _SendRequestAndGetResponse
    response = NetworkJob._SendRequestAndGetResponse( self )
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 710, in _SendRequestAndGetResponse
    response = session.request( method, url, data = data, files = files, headers = headers, stream = True, timeout = ( connect_timeout, read_timeout ) )
  File "requests\sessions.py", line 530, in request
  File "requests\sessions.py", line 643, in send
  File "requests\adapters.py", line 529, in send
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='ptr.hydrus.network', port=45871): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\core\HydrusThreading.py", line 401, in run
    callable( *args, **kwargs )
  File "hydrus\client\gui\ClientGUI.py", line 318, in THREADUploadPending
    service.Request( HC.POST, 'update', { 'client_to_server_update' : client_to_server_update } )
  File "hydrus\client\ClientServices.py", line 1206, in Request
    network_job.WaitUntilDone()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1872, in WaitUntilDone
    raise self._error_exception
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1643, in Start
    raise HydrusExceptions.StreamTimeoutException( 'Connection successful, but reading response timed out!' )
hydrus.core.HydrusExceptions.StreamTimeoutException: Connection successful, but reading response timed out!
(27.33 KB 835x522 2022-04-17 150515.png)

(714.79 KB 1457x934 2022-04-17 150704.png)

(4.31 KB 1231x34 2022-04-17 150838.png)

Apologies if the answer is already somewhere on the /hydrus/ board, I haven't been able to find it yet. I'm wondering how to make hydrus able to download pictures from 8chan (using hydrus companion) when direct access results in a 404? I was assuming some fuckery with cookies, but sending the cookies from 8chan through hydrus companion to the hydrus client seemingly made no difference.
>>17612 afaik there's no way to import directly from urls of "protected" boards, but I'd love to be proven wrong.
>>>/hydrus/17585
>Is there a way to automatically add a file's filename to the "notes" of a Hydrus file when importing? Some of the files have date info or window information if they are screenshots and I'd like to store that information somehow. If not, is there some other way to store the filenames so that they can be easily accessible after importing?
>>>/hydrus/17586
>>notes
>I think notes are for when there's a region of an image that gets a label (think gelbooru translations), it's not the best thing for your usecase. The best way would be to have them under a "filename" namespace.
I'm not either of these people, but a filename namespace is useless if the filename cares about case. Hydrus will just turn it all into lowercase. In those scenarios I've had to manually add the filename to the notes for each one... painful. Also, somewhat related: hydrus strips the key from mega.nz urls, so I have to manually add those to notes as well. More pain.

>>17612 Have you tried giving hydrus your user-agent http header as well as the cookies?
>>17614
>Have you tried giving hydrus your user-agent http header as well as the cookies?
No, I haven't. However, I'm still quite inexperienced when it comes to using hydrus, so I don't really know how I'd be able to do that. Using the basic features of hydrus companion is pretty much as far as my skillset goes atm. Would you please kindly explain how I might do what you described?
Trying to add page tags to my imported files is turning out to be an even bigger headache than I expected. The page namespace doesn't specify what it is a page of, so you can end up with multiple contradictory page tags. For example, an artist uploads sets of 1-3 images frequently to his preferred site, but posts larger bundles less frequently to another site. Or he posts a few pages at a time of a manga in progress, and when it's finished he aggregates all the pages in a single post for our convenience. Either way, you can end up with images that have two different page tags, both of which are technically correct for a given context, but the tags themselves don't contain enough information to tell which context they're correct in. If I wanted to be really thorough, I could make a separate namespace for each context a page can exist in, but then I'd be creating an even bigger headache for myself whenever I want to sort by pages. The best I can imagine would be some kind of nested tag system, so you can specify the tags "work:X" and "page:Y(of work:X)", and then sort by "work-page(of work)". As an added bonus, it would make navigation a lot smoother in a lot of contexts. For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
>>17616 Hydrus sucks at organizing files that are meant to be a sequential series. This has been a known problem for a long time unfortunately.
>>17616
>For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
You may use kinda nested namespaces:
1 - namespace:whatever soap opera you want (to identify the group)
2 - namespace:chapter 1 (to identify the sub-group)
3 - namespace:chapter 1 - page 01 (to identify the order)
So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. Done.
>>17618
>So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number.
At that point you're basically navigating folders in a file explorer, just more clumsily. That's exactly what I was trying to get away from when I installed hydrus.
I had a great week of simple work. I fixed some bugs--including the scrolled taglist selection issue--and improved some quality of life. The release should be as normal tomorrow.
>>17619
>At that point you're basically navigating folders in a file explorer
What are you talking about? In Hydrus all files are in a centralized directory and searched with a database. I understand the hassle of tagging manually, but no software is clairvoyant and reads your mind about what exactly you are searching for.
>>8813 If ordered sets are important to you, installing danbooru is an option; they do put their source up on github. Last I tried it, it was a pain in the ass to get working, but I did eventually get it going. Though it did lack a number of hydrus features I've gotten used to.
>>17616 Hydrus works off of individual files. You can adapt it to multi-file works, but the more robust of a solution you need, the more you'll butt up against Hydrus' core design. The current idiomatic solution of generic series, title, chapter, page, etc. namespaces works for 90% of things (with another 9% of things being workable by ignoring all but one context), but if you need a many-to-many relationship, the best you can do is probably use bespoke namespaces for each collection (e.g. "index of X:1", "index of Y:2") and then use the custom namespace sort to view the files in whatever context you've defined. I guess an ease-of-use improvement would be an entry in the tag context menu to sort by namespace. That way you wouldn't need to type it out every time.
>>17623
>That way you wouldn't need to type it out every time.
In the future, drag and drop tags may be the solution.
I want to remove the ptr from my database. Is there a way to use the tag migration feature to migrate tag relationships only for tags used in my files? You can do it with the actual tags, but I don't see an option to do something similar for relationships, and I'd rather not migrate over thousands of parents/children and siblings for tags I'll never see.
>>17621 You have to add multiple search terms to narrow it down to something useful, similar to how a file explorer requires you to navigate through several subdirectories to get to what you want. And for moving from chapter 1 to chapter 2, you need to remove one search term and add another. I like how hydrus allows me to pick exactly the search term I want, no matter how broad or narrow, and with the right tags and the right namespace sorting rules, sorts everything in view into logical sets and logical sequences within those sets.

Maybe I should give a more concrete example of how I manage my stuff. Say an artist uploads both to pixiv and pixiv fanbox. For both services, a post often contains several images in a specific sequence. So I subscribe to both and set the downloader to tag images with the numerical id of the post the image was pulled from (namespace "post id:"), the image's index within all the images in the post (namespace "page:"), and the service it was pulled from (namespace "site:"). Then I just have to search for the artist and set namespace sorting to "site-post id-page", and everything works great.

But then the artist uploads the same image to both services, and suddenly I have an image with two post id tags and two page tags. Quickest solution would be to have one version of each namespace for each site, then my sorting rule would look like "site-fanbox post id-pixiv post id-fanbox page-pixiv page". Looks ugly, but it does the job. If I only ever downloaded from those two services, I could deal with it, but with all the different sites I download from, my sorting rules become a huge fucking mess.

I would probably be fine with any quick hack that allows me to define unique namespaces that get treated as the same namespace for the purpose of sorting (for example, "post id(site:pixiv)" and "post id(site:fanbox)" are treated as if they're just "post id"). Wouldn't sort reliably in every context, but would be good enough for my purposes. However, the dream would be if (assuming the sorting rule is "site-post id") it first sorts by site, and then looks for a "post id(*):" tag, where * is the site it sorted by. Unfortunately I don't know enough about databases or sorting to tell how feasible something like this would be.
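(Out of curiosity, the fallback idea sketched in Python. The site-qualified namespaces are the hypothetical scheme described above, not anything hydrus supports today:)

files = [
    {'site': 'pixiv', 'post id(site:pixiv)': '98001', 'page': '2'},
    {'site': 'fanbox', 'post id(site:fanbox)': '411', 'page': '1'},
    {'site': 'pixiv', 'post id(site:pixiv)': '97500', 'page': '1'},
]

def as_int(value):
    return int(value) if value.isdigit() else 0

def sort_key(f):
    site = f.get('site', '')
    # prefer a generic 'post id' tag, else fall back to the site-qualified one
    post_id = f.get('post id', f.get('post id(site:' + site + ')', ''))
    return (site, as_int(post_id), as_int(f.get('page', '0')))

for f in sorted(files, key=sort_key):
    print(f)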
>>17612 Looks like you need to send the referral URL with your request. The 8chan.moe thread downloader that comes with hydrus already takes care of that, so I assume you're trying to download individual files or something? I think the proper thing here would be for the hydrus companion to attach the thread you found the image in as the referral URL, but I'm not sure if the hydrus API even supports that at the moment. So failing that, you can give 8chan.moe files an URL class and force hydrus to use https://8chan.moe/ as the referral URL for them when no other referral URL is provided. Hopefully this won't get you banned or anything.
https://www.youtube.com/watch?v=PGEZutQ-tCM

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Linux.-.Executable.tar.gz

I had a great week doing cleanup and other simple work.

highlights

I fixed the problem where clicks on a scrolled taglist were going to the wrong location. I was cleaning up some ancient wx->Qt code hacks, and it seems I rarely scroll and click when working, so I never noticed the problem. I have a new test to make sure this does not happen again. Sorry for the trouble!

The URLs in the top-right hover menu are now styled better. No longer underlined, and now colourable by QSS. I have updated all the default stylesheets that come with the client (you can set these under options->style) to have some decent colours. If you have your own custom QSS, check my default to see how to set it yourself.

You can now set duplicate action options to 'always archive both files', if you want to play with making the duplicate filter do some of the work of the archive/delete filter. Also, the duplicate filter now has improved image prefetch. There should be less flickering when you switch from A to B the first time and when you action a pair and move to the next. Please note that if you still get flicker for 4k images, try boosting the image cache size under options->speed and memory (I boosted the default up to 384MB this week, so you might like to give it some more too).

full list

- misc:
- fixed the stupid taglist scrolled-click position problem--sorry! I have a new specific weekly test for this, so it shouldn't happen again (issue #1120)
- I made it so middle-clicking on a tag list does a select event again
- the duplicate action options now let you say to archive both files regardless of their current archive status (issue #472)
- the duplicate filter is now hooked into the media prefetch system. as soon as 'A' is displayed, the 'B' file will now be queued to be loaded, so with luck you will see very little flicker on the first transition from A->B.
- I updated the duplicate filter's queue to store more information and added the next pair to the new prefetch queue, so when you action a pair, the A of the next pair should also load up quickly
- boosted the default sizes of the thumbnail and image caches up to 32MB and 384MB (from 25/150) and gave them nicer 'bytes quantity' widgets in the options panel
- when popup windows show network jobs, they now have delayed hide. with luck, this will make subscriptions more stable in height, less flickering as jobs are loaded and unloaded
- reduced the extremes of the new auto-throttled pending upload. it will now change speed slower, on a less strict schedule, and won't hit such fast or slow extremes
- the text colour of hyperlinks across the program, most significantly in the top-right media hover window, can now be customised in QSS. I have set some ok defaults for all the QSS styles that come with the client; if you have a custom QSS, check out my default to see what you need to do. also, hyperlinks are no longer underlined and you can't 'select' their text with the mouse any more (this was a weird rich-text flag)
- the client api and local booru now have a checkbox in their manage services panel for 'normie-friendly welcome page', which switches the default ascii art for an alternate
- fixed an issue with the hydrus server not explicitly saying it is utf-8 when rendering html
- may have fixed some issues with autocomplete dropdowns getting hung up in the wrong position and not fixing themselves until a parent resize event or similar
- .
- code cleanup:
- about 80KB of code moved out of the main ClientDB.py file:
- refactored all combined files display mappings cache code from the core database to a new database module
- refactored all combined files storage mappings cache code from the core database to a new database module
- refactored all specific storage mappings cache code from the core database to a new database module
- more misc refactoring of tag count estimate, tag search, and other code down to modules
- hooked up the specific display mappings cache to the repair system correctly--it had been left unregistered by accident
- some misc duplicate action options code cleanup
- migrated some ancient pause states--repository, subscriptions, import&export folders--to the newer options structure
- migrated the image and thumbnail cache sizes to the newer options structure
- removed some ancient db and dialog code from the retired dumper system

next week

I want to catch up on some github issues and do a little more multiple local file services work.
(18.35 KB 871x737 meme collection.png)

I hope collections will be expanded upon in the future. It's very nice to be able to group together images in a page, but often I want an overview of the individual images of a group. Right now I have to right click a group and pick open->in a new page, which is awkward. Here's a quick mock-up of how I'd like it to work. Basically, show all images, but visually group them together based on the selected namespaces.
>>17627
>I assume you're trying to download individual files or something?
Yes, kinda... I'm using hydrus companion's right-click -> hydrus companion -> send to hydrus. I'm browsing threads which I don't want to watch but which contain a few select pictures I'd still like to save. I tried looking into your suggested solution, but I'm still very inexperienced with hydrus and have so far had no luck setting up a URL class for 8chan.moe files. I'll keep trying in the meantime, just wanted to give you an update on what I was trying to do.

On an unrelated note, I did some digging and probably found what exactly the problem is. Please do not be fooled, I am no expert. Far from it. I was just lucky enough to know about inspect element and compared the direct and indirect links, plus used some googling. I must reiterate that despite what it may seem, I am a complete noob at this and anything related to this. I do not possess the knowledge or skill necessary to understand probably 90% of the instructions you might throw at me, if they're not in a step-by-step format. That's not a demand btw, just a cautionary word. I appreciate all the support that I can receive.

Anyway, now with that disclaimer out of the way, here's what I found. Comparing the "request headers" under the network section of inspect element for the 404 with those for the 304, I found 2 things of note:

Referrer Policy: strict-origin-when-cross-origin
sec-fetch-site: same-origin (or sec-fetch-site: none)

Googling these gave me some insight into what the 8chan administration did to achieve this frustrating but unfortunately necessary situation. As far as I can tell, this "sec-fetch-site" is filled out by the application (in this case chrome) to its liking. So all hydrus would need to do when requesting 8chan.moe files is use "sec-fetch-site: same-origin". No idea if whatever I just explained is of any use to any of you, or if you already knew all of this, but I thought it better to share what info I have instead of withholding it. The bane of all customer support, amirite? (No pictures this time because of login cookies and other identifiable info being vehemently present)
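(In scripting terms, what anon describes looks something like the sketch below--the file URL is made up, and whether 8chan.moe actually accepts a request shaped like this is untested:)

import requests

url = 'https://8chan.moe/.media/examplefilename.png'  # hypothetical file url
headers = {
    'Referer': 'https://8chan.moe/hydrus/res/17601.html',
    'Sec-Fetch-Site': 'same-origin',
}
response = requests.get(url, headers=headers, timeout=60)
print(response.status_code, len(response.content))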
>>17630 The png I posted contains the URL class. Just go to network > downloaders > import downloaders and drag and drop the image from >>17627
Any way to stop hydrus from running maintenance (in my case ptr processing) while it's downloading subscriptions? I think that should prevent maintenance mode from kicking in. It always happens when I start Hydrus and leave it to download subs, because I have idle set at 5 minutes. The downloads slow to a crawl because ptr processing is hogging the database. I could raise the time to idle, but I still want it that low once hydrus has finished downloading subs...
Is there any way to export the notes, like the file and tags? Something like:
File: test.jpg
Tags: test.jpg.txt
Notes: test.jpg.notes.txt
>>17633 I get the impression that notes are a WIP feature. Personally I'm hoping we'll get the option to make the content parser save stuff as notes soon.
(5.19 KB 402x62 ClipboardImage.png)

>>17631 Bruh
>>17635 seems like you're not on the latest version
Are there plans to add dns over https support to hydrus? Most browsers seem to have that feature now, so it'd be cool if hydrus did too.
How do I enable a web interface for my Hydrus installation, so others can use it via my external IP? I need something simple like hydrus.app, but unfortunately it refuses to work with my external IP and only accepts localhost, even though I enabled non-local access in the API, and entering my external IP in a browser opens the same API welcome page as localhost does. Who runs that app, anyway, and where do I find support for it?
>>17610 >>17611 Thank you for these reports. I added some pending-commit auto-throttling in 481, so instead of always going for 100 rows, it could go 1-1,000 depending on how fast your machine and the PTR were doing. It seems to have backfired for some people. For 482, I capped the limits at 25-500, and I increased the tolerance of the test and reduced the acceleration. It should be less spiky while still responding to a slow database or busy PTR, but I'll be interested to know what you get.

As for the read timeout on the PTR, that's more odd. Maybe the PTR was super super busy when you were talking to it, but 60 seconds without a response seems extreme. This error is essentially harmless, so don't worry too much, please just try again later. Let me know if you still get it this week and in future. It may be the result of my auto-throttling, it may just have been the PTR being super busy one day, or it might be something else. If it keeps happening, I'll write a hook for 'the PTR is busy atm, try again later' or similar.

>>17614 Your thoughts on filenames have a parallel with the 'title' tag, which I was very keen on when I started hydrus but which I now generally think has been a failure. Tags are good for searching, not describing. I'd like more notes import/export support, along with the recently added Client API support, so we can play with notes more for richer descriptive metadata.

For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit. I originally added that checkbox for a Mega-supporting experiment, although I don't see anything on the github here https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders so I am not sure how well that ended up going. If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as an URL the client recognises out of the box.

>>17625 Ah, yeah, sorry, I don't have a nice way to filter siblings or parents by files you have yet. This has come up before, I remember now, and I'd like to add it. I recommend you migrate all the siblings and parents now, and in future, when a filtering operation becomes available, you can do it then. Some things will still be slow, like the edit sibs/parents dialog, but actually applying siblings and parents will be super fast since you won't have all the PTR mappings to work on.
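(For the curious, the shape of that kind of batch auto-throttle as a toy Python sketch--the numbers and names are illustrative, not hydrus's actual code:)

class BatchThrottle:

    def __init__(self, low=25, high=500, target_seconds=1.0):
        self.low = low
        self.high = high
        self.target = target_seconds
        self.size = low

    def report(self, seconds_taken):
        # grow gently when the last round was comfortably fast, shrink when slow
        if seconds_taken < self.target / 2:
            self.size = min(self.high, int(self.size * 1.2) + 1)
        elif seconds_taken > self.target * 2:
            self.size = max(self.low, int(self.size * 0.7))

throttle = BatchThrottle()
throttle.report(0.2)  # a fast round: the next batch will be bigger
print(throttle.size)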
>>17629 Thanks. Yeah, this is exactly what I want to do too. I am in the midst of a long rewrite to clean up some bad decisions I made when first making the thumbnail grid, and as I go I am adding more selection and display tools. Once things are less tangled behind the scenes, I will be able to write a 'group by' system like this, both the data structure behind it and the new display code needed. Unfortunately it will take time, but I agree totally.

>>17632 There's no explicit way at the moment. I have generally been comfortable with both operations working at the same time, since I'm generally ok if subs run at, say, 50% speed. I designed subs to be a roughly background activity and don't mean for them to run as fast as possible. If your machine really struggles to do both at once though, maybe I can figure out a new option. I think your best shot in the meantime, since PTR processing only works in idle time but subs can run any time, is to tweak the other idle mode options. The mouse one might work, if you often watch your subs come in live, or the 'consider the system busy if CPU above' might work, as that stops PTR work from starting if x cores are busy. If you are tight on CPU time anyway, that could be a good test for other situations too. You can also just turn off idle PTR processing and control it manually with 'process now' in services->review services. I don't like suggesting this solution as it is a bit of a sledgehammer, but you might like to play with it.

>>17633 >>17634 Yeah, not yet, but more import/export options will come. If you know scripting, the Client API can grab them now:
https://hydrusnetwork.github.io/hydrus/client_api.html
https://hydrusnetwork.github.io/hydrus/developer_api.html
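(For anyone who does script: a sketch of pulling notes over the Client API with Python's requests. The include_notes parameter here is an assumption--check the Client API docs for your version to see exactly how notes are exposed:)

import json
import requests

API = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'your access key here'}  # hypothetical key

search = requests.get(
    API + '/get_files/search_files',
    params={'tags': json.dumps(['blue eyes'])},  # any search you like
    headers=HEADERS,
).json()

metadata = requests.get(
    API + '/get_files/file_metadata',
    # include_notes is an assumption; see the docs for your client's version
    params={'file_ids': json.dumps(search['file_ids'][:256]), 'include_notes': 'true'},
    headers=HEADERS,
).json()

for m in metadata['metadata']:
    for name, text in m.get('notes', {}).items():
        print(m['hash'], name, text[:80])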
>>17637 For advanced technical stuff like that, I am limited by the libraries I use. My main 'go get stuff' network library is called 'requests', a very popular python library https://docs.python-requests.org/en/latest/ although for actual work I think it uses the core urllib3 python library https://pypi.org/project/urllib3/ . So my guess is when python supports it and we upgrade to that new version of python, this will happen naturally, or it will be a flag I can set. I searched a bit, and there might be a way to hack it in using an external library, but I am not sure how well that would work. I am not a super expert in this area. Is there a way of hacking this in at the system level? Can you tell your whole OS to do DNS lookups on https with the new protocol in the same way you can override which IP to use for DNS? If this is important to you, that might be a way to get all your software to work that way. If you discover a solution, please let me know, I would be interested. Otherwise, I think your best simple solution for now is to use a decent VPN. It isn't perfect, but it'll obscure your DNS lookups to smellyfeetbooru.org and similar from your ISP.
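(To illustrate what DoH actually is under the hood--DNS answers fetched over an ordinary HTTPS request--here is a sketch against Cloudflare's public JSON endpoint. This is a demonstration of the protocol, not a hydrus feature:)

import requests

def doh_lookup(hostname):
    r = requests.get(
        'https://cloudflare-dns.com/dns-query',
        params={'name': hostname, 'type': 'A'},
        headers={'accept': 'application/dns-json'},
        timeout=10,
    )
    r.raise_for_status()
    # each answer's 'data' field holds an IP for an A record
    return [answer['data'] for answer in r.json().get('Answer', [])]

print(doh_lookup('example.com'))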
>>17638 The various web interfaces are all under active development right now. All are in testing phases, and I am still building out the Client API, so I can't promise there are any 'nice' solutions available right now. All the Client API tools are made by users. Many hang out on the discord, if you are comfortable going there: https://discord.gg/wPHPCUZ The best place to get support otherwise is probably on the gitlab/github/whatever sites the actual projects are hosted on, if they have issue trackers and so on. For Hydrus.app I think that's here: https://github.com/floogulinc/hydrus-web

I'm not sure why your external IP access isn't working. If your friend can see the lady welcome page across the internet, they should be able to see the whole Client API and do anything else. Sometimes http vs https can be a problem here.
>>17639
>If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as an URL the client recognises out of the box.
Is it even possible to download mega links through hydrus? I've been using mega.py for automating mega downloads, and looking at the code for that, it seems quite a bit more complicated than just sending the right http request. https://github.com/odwyersoftware/mega.py/blob/master/src/mega/mega.py#L695 I'd love to be proven wrong, but it looks to me like this is a job for an external downloader. Speaking of which, any plans to let us configure a fallback option for URLs that hydrus can't be configured to handle directly? At the very least, I want to be able to save URLs for later processing.
>>17643
>Is it even possible to download mega links through hydrus?
No. #fragment text is never sent to a server, so it won't work in a traditional URL. Mega use clientside javascript or their add-on to read the fragment text and convert that into navigation commands in their client. Eventually that gets converted into whatever clever streaming download system they actually have.

If you want to download Mega links, I recommend megatools or jdownloader. Just copy/paste from hydrus. Or if you want to browse, click on the link in the top-right hover of hydrus's media viewer to open it up in your browser, but bear in mind that #fragment text will often not survive a normal OS call, so you'll need to set an explicit browser executable path under options->external programs. To save a mega link in hydrus, you'll basically have to set it manually with 'manage urls', although I know some users are working on downloaders and Client API tools that will associate these URLs automatically.

For native hydrus support, in the future, I'd like to have an 'exe manager' that says like 'this exe is called ffmpeg, it is here, and with these commands it will convert a webm to an mp4', for all sorts of external exes--waifu2x or youtube-dl, or indeed jdownloader. Then I can write a hook for that into URL Classes or whatever and automatically send a mega URL to an external downloader and pick up the downloaded files later for import, all natively in the client. This will be some time off though, so you'll have to do it manually for now.
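(The copy/paste workflow is easy to script, too. A sketch, assuming megatools' megadl is on your PATH and you have pasted hydrus's copied URLs into a urls.txt:)

import subprocess

with open('urls.txt', encoding='utf-8') as f:
    for url in (line.strip() for line in f):
        if url.startswith('https://mega.nz/'):
            # megadl understands the #key fragment in the url
            subprocess.run(['megadl', url], check=False)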
>>17644 My problem is that some of the galleries I subscribe to might occasionally contain external links. For example, some artists uploading censored images, but also attaching a mega or google drive link containing the uncensored versions. I can easily set up the parser to look for these URLs in the message body and pursue them, but if hydrus itself doesn't know how to handle them, they get thrown out. Would be nice if these URLs could be stored in my inbox in some way, so I can check if I want to download them manually or paste them into some other program. Even after you implement a way to send the URL to an external program (which sounds great), it would be useful to see what URLs hydrus found but didn't know what to do with, so the user can know what URL classes they need to add.
>>17639
>For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit.
Oh wow, I never knew what that option did. Thanks! I made url classes. Note: one of the mega url formats (which I think is an older format) has no parameters at all, it's just "https://mega.nz/#blah". So if you just give it the url "https://mega.nz/" it will match that url. Kind of weird, but not really a huge issue.

>>17617 I mean, that's not really particular to hydrus. It's true for almost any booru.
Hey, after exiting the duplicate filter I was greeted with two of these 'NoneType' object has no attribute 'GetHash' errors:

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
    self._GoBack()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
    for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
    self._GoBack()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
    for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

I'm running the AUR version, if you need any more info let me know.
Is it just me or are URL classes needlessly restrictive? Forcing every URL to be either a gallery, a post, or a file seems to create more issues than it solves.

A post on kemono.party contains a link to a google drive folder, so all I need to do is parse it as a pursuable URL and let the google drive downloader handle the rest, right? Except google drive folder URLs count as gallery URLs, and you can only pursue post and file URLs. Okay, I'll parse it as a next gallery page instead. Except you can only do that from a gallery parser, not from a post parser.

That leaves two solutions. One, change kemono.party posts to count as galleries so that they're allowed to direct to other gallery URLs. That fucks with URL association, since you're only allowed to set associated URLs from post URL parsers. Second, change the URL class of google drive folders so that they count as post URLs (with multiple files) so that post URL parsers are allowed to pursue them. This breaks the google drive folder parser, because it's no longer allowed to go to the next gallery page. Hold on, what if I also change next gallery pages to be pursuable URLs? Not intuitive at all, but it actually does seem to work so far.

As far as I can tell, the only reason to set something as a gallery URL is if you want it to be able to direct to other gallery URLs, or if you want to make use of URL parameters to find the next page. But jesus christ what a headache it was to figure all of this out while navigating between the URL class manager, the parser manager, and the download page file log.

I'm guessing some of these restrictions are there to prevent people from accidentally configuring a parser that requests the next page ad infinitum, but there has to be a better way. I also have a sneaking suspicion that the dev only really downloads off boorus and designed the system around that, and that features like sub-gallery pages and the "post page can produce multiple files" option had to be tacked on later to support other use cases.
could the downloading black/white list be adjusted to work on matching a search, rather than just specific tags? There's a lot of kinds of posts I'd rather not download, but most of the time they aren't simple enough to be accurately described with a single tag.
I was ill for the start of the week and am short on work time. Rather than put out a slim release, I will spend tomorrow doing some more normal work and put the release off a week. 483 should be on the 4th of May. Thanks everyone! >>17647 Sorry, I messed up some duplicate logic that will trigger on certain cases where it wants to back up a pair! This is fixed in 483 along with more duplicate filter code cleanup, please hang in there.
>>17650 Get well anon.
>>17645 For now, I think your best bet is to tell the parser to add these URLs as 'url to associate (source url)' rather than 'url to download/pursue'. It will attach these google drive or mega or whatever links to the file as a known url, and if you have a matching URL Class like in >>17646 you'll see them nicely named in the media viewer's top-right hover, but it won't download them yet. In future, when we get support (or there's a Client API solution, whatever), we'll scan the database for all the URLs of the URL classes we now support and do them retroactively.

>>17646 Thank you, I will add these!

>>17648 I am sorry for the trouble. When I next do a network overhaul, I would like to add more tools here. You are correct that my main fear is stopping loops and crazy big searches. I don't want a google folder that parses ten google folders that parse a pixiv artist link that then grabs 3,000 files that grab several other external links that splay out into a handful of deviant art tag searches by accident, and so on. You are also right that I built the system for boorus originally (and some gallery sites like hentai foundry or deviant art), hence the gallery/post system. Since the downloader engine is locked into this atm, everything we have done since has been working with these fundamental objects, so the more a site deviates from that model, the more shaky hydrus is with it.

Maybe I can define a 'folder tree' downloader object in a big future update, something more akin to jdownloader or a torrent client resolving a magnet link: rather than automatic download, it instead parses the tree and presents you a summary in some new UI so you can choose what to download. I am not totally sure yet though, since that would be a ton of work, and meaty, usually human-triggered actions like 'download 3.2GB from this Mega' are already well handled by other software.

I would also, in the next overhaul, like to unify the edit UI in general. Jumping between the different dialogs, and the general nightmare of nested dialogs when editing parsers--I'd like to clean most of it up. Also, a highly requested feature in downloaders is downloader versioning. The update system is a complete nightmare. Just a lot of work. I am not sure when it will happen. I want to finish the multiple local file services system and then do some tag repository admin/janny workflow improvements, which will probably take me into Q3 of this year. Then I'll be free to do some other 'big' work. Most likely something to do with file relationships, since that is most popular, and then I think downloader versioning is not far behind. So, while not trying to be too optimistic or pessimistic, I hope I may be seriously planning at least some of this early/mid 2023.

>>17649 Not yet, but perhaps in the future. I am planning more metadata filtering tools, and it would be nice to unify that with other hardcoded rules we have at the moment like 'do not download a gif > 32MB'. What sort of searches are you thinking of--something with a lot of OR clauses? Or something like 'nothing of this character by this artist'? Bear in mind that while I can expand post-download filtering too, I usually only know the tags of a file when I run the tag filters. I sometimes know the filesize and filetype right as I start a download, but I can't do something like 'veto files less than 5 seconds long' and stop the download early to save you bandwidth.

>>17651 Thanks m8, doing great now. Keep on pushing.
Is there an (easy) way to extract the data used to make the file history chart into a CSV? I'd like to play around with that data myself.
Is there a way to exclude downloading files from a specific Booru/Gallery site? I want to make it so that I don't download my files from Pixiv when I use the feature that looks up a file on SauceNao and IQDB and sends the link to Hydrus. For Pixiv, I don't want to download my files from there since the tags are in Japanese, and are few in number compared to other sites like Gelbooru. This should be the easiest solution to this issue, though another solution would be to have another downloader option that specifically only searches IQDB, rather than having to use Saucenao and IQDB together, since that option always prioritizes downloading from Pixiv.
Minor bug report: hovering over tags while in the viewer and scrolling with the mouse wheel causes the viewer to move through files as if you were scrolling on the image itself. May be related to the bug from a few weeks ago.
I had a good couple of weeks. There are a variety of small fixes and quality of life improvements and the first version of 'multiple local file services' is ready for advanced users to test. The release should be as normal tomorrow.
>>17656 hello mr dev I just found out about this software and from reading the docs I have only this to say: based software based dev long live power users
Hey h dev, moving to a new OS soon, and whatever happened recently in hydrus made video more stable, so I can parse it now. I know I asked about this a while ago--having a progress bar permanently under the video as an option--and I'm wondering if that ever got implemented, or if it's something you haven't gotten to yet? I run into quite a few 5 second gifs next to 3 minute long webms, and hovering the mouse over them takes up a not insignificant amount of the video, at least enough that I have to move the mouse off it just to move it back to scrub. Thanks in advance for any response.
Just want to confirm the solution for broken mpv from my half-sloppy debian install, as in this issue: https://github.com/hydrusnetwork/hydrus/issues/1130 As suggested, copying just the system libgmodule-2.0.so to the Hydrus directory helps, although the path may be different--I have such files at /usr/lib/x86_64-linux-gnu/.
https://www.youtube.com/watch?v=ymI1g2VjyCY windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Linux.-.Executable.tar.gz I had a good couple of weeks doing some regular work and getting 'multiple local file services' ready for testing. multiple local file services This is not ready for everyone yet! Advanced users only for now please. I turned multiple local file services on in debug mode last week, just to see how things were looking, and it turned out suprisingly great, no big problems. For several months now I have been doing prep work for it, and that seems to have paid off. I decided to finish the last important things and get a v1.0 out. So, it is now possible to have multiple 'my files' services in your client, and to search, import to, and migrate files between them. These services are completely blind to each other, so searching for autocomplete tags in one will not return suggestions from another. The hope is this will allow fairly good sfw/nsfw-style separations in clients and open up interesting new contained workflows. I am recommending this only for advanced users for now, and moreso only those who have been following this feature. I have not yet written up nice help for this, and some of the UI/workflow is still not user friendly, so what I would like is for people who are enthusiastic to try it out and let me know what they think. I really haven't run into any massive errors, but I won't encourage you to go crazy on a real client yet. Go nuts on a new empty test client, or experiment carefully on a real client, just in case something goes wrong, and I will keep polishing the experience. The basics are: you can now make a new 'local file domain' in manage services. file import options now lets you import to different or multiple local file domains, and thumbnail right-click lets you copy or move files between them too. The normal search page dropdown lets you jump between local services just like searching trash, and of course it now supports multiple domains if you want to do a union. The delete and undelete commands are similarly a little more powerful when you start adding new services. Check out the changelog for more specific details. Next step I think is to make it more obvious when thumbnails/files are in certain services, since at the moment you have to scan the text on the status bar, top media hover, or thumbnail menu. Maybe custom icon rules (e.g. 'when the file is in "sfw" domain, give it a flower icon'). Then general polish like shortcut integration, maybe some more search tech, and then I really want to write a nice help document for it all to introduce normal experienced users to the idea, and some 'merge these clients' tech would be great, so users who have been using two or more clients for years can finally combine them into one. the rest This is a two week release because I was ill earlier on and it cut into my work time. So, there is a mix of different small work. Updated downloaders, reworked sibling&parent help with some neat new charts, fixes and improvements to the duplicate filter, some quality of life in UI labelling and texts. 
Nothing super important, but some things should be a bit smoother!
full list - multiple local file services: - the multiple local file services feature is ready for advanced users to test out! it lets you have more than one 'my files' service to store things, which will give us some neat privacy and management tools in future. there is no nice help for this feature yet, and the UI is still a little non-user-friendly, so please do not try it out unless you have been following it. and, while this has worked great in all testing, I generally do not recommend it for heavy use on a real client either, just in case something does go wrong. with those caveats, hit up _manage services_ in advanced mode, and you can now add new 'local file domain' services. it is possible to search, import to, and migrate files between these, and everything basically works. I need to do more UI work to make it clear what is going on (for instance, I think we'll figure out custom icons or similar to show where files are), and some more search tech, and write up proper help, and figure out easy client merging so users can combine legacy clients, but please feel free to experiment wildly on a fresh client or carefully on your existing one - if you have more than one local file service, a new 'files' or 'local services' menu on thumbnail right-click handles duplicating and moving across local services. these actions will preserve original import times (e.g. if you move from A to B and then back to A), so they should be generally non-destructive, but we may want to add some advanced tools in future. let me know how this part goes--I think we'll probably want a different status than 'deleted from A' when you just move A->B, so as not to interfere with some advanced queries, but only IRL testing will show it - if you have a 'file import options' that imports files to multiple local services but the file import is 'already in db', the file import job will now examine if and where the file is still needed and send content update calls to fill in the gaps - the advanced delete files dialog now gives a new 'delete from all and send to trash' option if the file is in multiple local file domains - the advanced delete files dialog now fully supports file repositories - cleaned up some logic on the 'remember action' option of the advanced file deletion dialog. it also supports remembering specific file domains, not just the clever commands like 'delete and leave no record'. also, this dialog no longer places the 'suggested' file service at the top of the radio button list--instead, it selects the 'suggested' service if no 'remember action' initial selection applies. the suggested file service is now also set by the underlying thumbnail grid or media canvas if it has a simple one-service location context - the normal 'non-advanced' delete files dialog now supports files that are in multiple local file services. it will show a part of the advanced dialog to let you choose where to delete from - . - misc: - thanks to user submissions, there is a bit more help docs work--for file search, and for some neat new 'mermaid' svg diagrams in siblings/parents, which are automatically generated from markup and easy to edit - with the new easy-to-edit mermaid diagrams, I updated the unhelpful and honestly cringe examples in the siblings and parents help to reflect real world PTR data and brushed up all the text in the top sections - just a small thing--the 'pages' menu and the page picker dialog now both say 'file search' to refer to a page that searches files.
previously, 'search' or 'files' was used in different places - completely rewrote the queue code behind the duplicate filter. an ancient bad idea is now replaced with something that will be easier to work with in future - you can now go 'back' in the duplicate filter even when you have only done skips so far - the 'index string' of duplicate filters, where it says 53/100, now also says the number of decisions made - fixed some small edge case bugs in duplicate filter forward/backward move logic, and fixed the recent problem with going back after certain decisions - updated the default nijie.info parser to grab video (issue #1113) - added in a user fix to the deviant art parser - added user-made Mega URL Classes. hydrus won't support Mega for a long while, but it can recognise and categorise these URLs now, presenting them in the media viewer if you want to open them externally - fixed Exif image rotation for images that also have ICC Profiles. thanks to the user who provided great test images here (issue #1124) - hitting F5 or otherwise saying 'refresh' explicitly will now turn a search page that is currently in 'searching paused' to 'searching immediately'. previously it silently did nothing - the 'current file info' in the media window's top hover and the status bar of the main window now ignores deletion reason, and also file modified date if it is not substantially different from another timestamp already stated. this data can still be seen on the file's right-click menu's expanded info lines off the top entry. also, as a small cleanup, it now says 'modified' and 'archived' instead of 'file modified/archived', just to save some more space - like the above 'show if interesting' check for modified date, that list of file info texts now includes the actual import time if it is different from other timestamps (for instance, if you migrate a file from one service to another some time after import) - fixed a sort error notification in the edit parser dialog when you have two duplicate subsidiary parsers that both have vetoes - fixed the new media viewer note display for PyQt5 - fixed a rare frame-duration-lookup problem when loading certain gifs into the media viewer - . - boring code cleanup: - cleaned up search signalling UI code, so a couple of minor bugs with 'searching immediately' sometimes not saving right should be fixed - the 'repository updates' domain now has a different service type. it is now a 'local update file domain' rather than a 'local file domain', which is just an enum change but marks it as different to the regular media domains. some code is cleaned up as a result - renamed the terms in some old media filtering code to make it more compatible with multiple local file services - brushed up some delete code to handle multiple local file services better - cleaned up more behind the scenes of the delete files dialog - refactored ClientGUIApplicationCommand to the widgets module - wrote a new ApplicationCommandProcessor Mixin class for all UI elements that process commands. it is now used across the program and will grow in responsibility in future to unify some things here - the media viewer hover windows now send their application commands through Qt signals rather than the old pubsub system - in a bunch of places across the program, renamed 'remote' to 'not local' in file status contexts--this tends to make more sense to people out of the gate - misc little syntax cleanup next week Some small misc jobs and user-friendly-isation of multiple local file services.
>>17660 sounds great, with this I will be able to have:
Inbox
Seen to parse
Parse nsfw
Parse sfw
Archive nsfw
Archive sfw
If I'm able to search across everything, I get unfiltered results, but being able to refine down to specific groups beyond just a rating filter would be great.
>>17660 Does copying between local file services duplicate the file in the database?
Is it just me or is there a bug preventing files from being deleted in v483? I can send them to trash but trying to "physically delete" them doesn't work. Hitting delete with files selected does nothing, neither does right clicking and hitting "delete physically now".
(3.66 KB HydrusGraph.zip)

>>17653 Not an easy way, but attached is the original code that a user made to draw something very similar in matplotlib. If you adjust this, you could pipe it to another format, or look through the SQL to see how to extract what you want manually. My own code is a bit too complicated and interconnected to extract easily. The main call is here-- https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/db/ClientDB.py#L3098 --but there's a ton of advanced bullshit there that isn't easy to understand. If you have python experience, I'd recommend you run the program from source and then pipe the result of the help->show file history call to another location, here: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/gui/ClientGUI.py#L2305 I am also expecting to expand this system. It is all hacked atm, but as it gets some polish, I expect it could go on the Client API like Mr Bones recently did. Would you be ok pulling things from the Client API, like this?: https://hydrusnetwork.github.io/hydrus/developer_api.html#manage_database_mr_bones
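For anyone wanting to experiment with that now, a minimal sketch of pulling Mr Bones from the Client API--assuming the default port 45869, a placeholder access key, and that your key has the database permission; treat the response keys as subject to change while this system is still being polished:

import requests

API = 'http://127.0.0.1:45869'
HEADERS = { 'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY' }  # placeholder key

# fetch the raw Mr Bones stats as JSON and print them
response = requests.get( f'{API}/manage_database/mr_bones', headers = HEADERS )
response.raise_for_status()
print( response.json() )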
>>17654 Is this feature to chase up links after SauceNao something on Hydrus Companion or similar? I don't work on that, so I'm afraid I can't help there, but I have been thinking of adding a feature on the hydrus side to say 'never download this'. A bit like a tag blacklist, but for URL Classes instead, so in your case you'd say 'never download from pixiv'. I was mostly thinking of it in terms of 'this domain is broken currently' tech, but I'd expose it to the user too. However, if you want to download from pixiv on other occasions, this might not be helpful. >>17655 Thank you for this report! I think the scroll is ok as long as there is a scrollbar on the taglist that can move in that direction, but if the scrollbar is at the end, or there aren't enough tags to make a scrollbar, the scroll is being promoted up to the parent panel. I'll silence this. Let me know if you have any more trouble. >>17657 I'm glad you like it! Let me know if you run into any trouble, and once you have figured things out, I'd be interested to know what you found most easy and difficult to learn. The help docs and general onboarding are always out of date, and feedback from new users on that front is always helpful. >>17658 I haven't got to it yet, I'm afraid. There is a shortcut on the 'global' set that forces the scanbar to show, but this will always cover up the bottom part of the video. I have the same problem with short gifs--moving my mouse over only to see it was 1.1s long anyway. For some stupid layout code reasons, it is actually a pain atm for me to support both the current hide/show and the animation bar hanging beneath the video. I was thinking, as a compromise, how about an option that says 'instead of hiding the scanbar when the mouse isn't near it, just make it 3 pixels tall'? How does that sound? Then you'd always see it if you wanted, but it wouldn't take up much space. That'd better solve the problem in the meantime and give me time to fix some hellish layout code here in the background.
>>17659 Awesome, thank you. I will update the help to reference this specifically. >>17662 Yeah, I think my next step here is to make these sorts of operations easier. You can set up a 'search everything' right now by clicking 'multiple locations' in the file domain selector and then hitting every checkmark, but it should be simpler than that. ~Maybe~ even favourite domain unions, although that seems a bit over-engineered, so I'll only do it if people actually want it. Like I have 'all local files', which is all the files on your hard disk, I need one that is all your media domains in a nice union. Also want some shortcuts so people like you will be able to hit shift+n or whatever and send a file from inbox to your parse-nsfw domain super easy. As you get into this, please let me know what works well and badly for you. All the code seems generally good, just some stupid things like a logic problem when trying to open 'delete files' on trash, so now I just need to make the UI and workflow work well. >>17663 No, it only needs one copy of the file in storage. But internally, in the database, it now has two file lists. >>17664 Yes, sorry! Thank you for the report. This is just an accidental logic bug that is stopping some people from opening the dialog on trash--sorry for the trouble! I can reproduce it and will fix it. If you really want to delete from trash, the global 'clear trash' button on review services still works, and if you have the advanced file deletion dialog turned on, you can also short-circuit by hitting shift+delete to undelete and then deleting again and choosing 'permanently delete'.
First of all, thank you for all your hard work HydrusDev. I have a small feature request, now that we have multiple local services: for the Archive/Delete filter, there should be keyboard shortcuts for "Move/Copy to Service X" as well as "Move to Trash with reason X" and "Delete Permanently with reason X". The latter two would be nice because having to bring up the delete dialog every time is kind of clunky.
>>17666
>Is this feature to chase up links after SauceNao something on Hydrus Companion or similar?
Yes, it is from Hydrus Companion. I forgot that it was a separate program since I started using it at the same time that I started using Hydrus. Now that I think about it though, just avoiding Pixiv probably isn't the best solution either, since there's plenty of content that can only be found on Pixiv. If there is a way to download the English translations of the tags, then that would mostly solve the issue, since I could then use parent/sibling tagging to align them with the other tags. I don't know how doable that would be though, so for now the best solution is probably to import a sibling tag file that changes all the Japanese pixiv tags to their English tags, assuming that someone has already made this.
>>17659 I was able to get it working by copying libmpv.so.1 and libcdio.so.18 from my old installation (still available on my old drive) to the hydrus installation folder.
I entered the duplicate filter, and after a certain point it wouldn't let me make decisions any more. I'd press the "same quality duplicate" button and it just did nothing. I exited the filter, then the client popped up a bunch of "list index out of range" errors. here's the traceback for one of them:
v483, linux, frozen
IndexError
list index out of range
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1223, in eventFilter
shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3548, in ProcessApplicationCommand
self._MediaAreTheSame()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3149, in _MediaAreTheSame
self._ProcessPair( HC.DUPLICATE_SAME_QUALITY )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3259, in _ProcessPair
self._ShowNextPair()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3454, in _ShowNextPair
self._ShowNextPair() # there are no useful decisions left in the queue, so let's reset
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3432, in _ShowNextPair
while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):
I reentered the duplicate filter, and I got through a few more pairs before it stopped letting me continue again. It seems like it was on the same file as last time too. Could this bug have corrupted my file relationships?
>>17665
>Python script
That'll help a lot, thanks!
>Would you be ok pulling things from the Client API, like this?
Yeah, definitely.
>>17666 a 3 pixel tall scan bar... that honestly wouldn't be a bad option. my only concern would be its immediate visibility, and I'm not sure there is a good way to handle that... would it be possible to have custom colors for it, both when it's small and when it's large? when it's large, that light grey with dark grey isn't a bad option, but when small it would be kind of a constantly moving needle in a haystack. but if, for instance, I had the background of the smaller bar be black with a marginally thick red strip, I would only see that red strip move. this may not be a great option for everyone, but I could see various different colors for higher contrast being a good thing, especially when it's 3 pixels big. yeah, I think it's a great idea--it would make the video's length readily apparent from the preview, and it would be so out of the way that nothing is massively covered up. if it's an option, would its size be changeable/user-settable? it's currently 60 pixels if my counting is right, but I could see something maybe 15 or so being something I could leave permanently visible. if it can't, then it doesn't matter, but if it's possible to make it an option, I think this would be a fantastic middle ground till you give it a serious pass. anyway, whatever you decide on will help, no matter what path it is.
API "file_metadata" searches seem to be giving the wrong timestamp for the "time_modified" of some files, conflating it with time imported. The client itself displays the correct time modified, regardless of whether the file was imported straight from the disc or whether the metadata of a previously imported file had its source time updated after being downloaded again from a booru. Querying the API's search_files method by "modified time" does give the correct file order (presumably because the list of ID's from the client is correct), but the timestamp in the metadata is still equal to "import time". For some reason, this doesn't always happen, but unfortunately I haven't been able to determine why.
API "file_metadata" searches seem to be giving the wrong timestamp for the "time_modified" of some files, conflating it with time imported. The client itself displays the correct time modified, regardless of whether the file was imported straight from the disc or whether the metadata of a previously imported file had its source time updated after being downloaded again from a booru. Querying the API's search_files method by "modified time" does give the correct file order (presumably because the list of ID's from the client is correct), but the timestamp in the metadata is still equal to "import time". For some reason, this doesn't always happen, but unfortunately I haven't been able to determine why.
Sorry for the double post. Verification was acting up.
>>17671 This issue isn't just with the one pair now. It's happened with multiple pairs when trying to go through the filter. And it's not just happening when I mark them as same quality; it also happens when I mark them as alternates. I also noticed that when this bug happens, the number in the duplicate filter (the one that's like "13/56") jumps up a bunch.
I had an ok week. I fixed some bugs (including non-working trash delete, and an issue with the new duplicate filter queue auto-skipping badly), improved some quality of life, and integrated the new multi-service 'add/move file' commands into the shortcuts system and the media viewer. The release should be as normal tomorrow. >>17671 >>17677 Thank you for this report, and sorry for the trouble! Should be fixed tomorrow, please let me know if you still have any problems with it.
Are sorting/collection improvements on the to-do list? I sometimes have to manually sort video duplicates out and being able to collect by duration/resolution ratio and sort by duration and then by resolution ratio would be extremely helpful. Sorting pages by total filesize or by smallest/largest subpage could have some uses as well, but that might be too autistic for other users.
https://www.youtube.com/watch?v=OtPsKtUyGxg windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Linux.-.Executable.tar.gz I had an ok week. I fixed some things, improved some quality of life, and made internal file migration a bit easier. highlights Last week's debut of multiple local file services went well! As far as I know, no one who tried it out had any big problems, and my main concerns--mostly that it needs some better migration tools and workflows and 'this file is in here' UI--proved true. So, I know what I have to do and will keep working. Multiple local file services remains for advanced users for now, but I hope to launch it properly for everyone, with nice help, next week. However, while doing this work, I did accidentally break the simple version of the 'delete files' dialog when files were in the trash--rather than say 'delete these permanently?', it just wouldn't appear. This was due to a logical oversight where it wasn't testing and counting up 'trash' status correctly. It is fixed now. Also, there was a problem with the new duplicate filter queue for users who have done a good bit of processing. A certain function that in complicated situations automatically skips some pairs was failing whenever it hit the end of a batch. This is also fixed now--thank you for the great reports on this. For multiple local file services, I updated the UI code, fixing some little bugs and improving the workflow when you have complicated situations, and I integrated the new add/move commands into the shortcuts system and the media viewer. You can now create 'add/move to service x' actions in the 'media' shortcut set, and the media viewer has the same add/move menu on right-clicks. The media viewer has several other improvements: I think I fixed that annoying bug where a fullscreen borderless view of media that exactly fits the screen would sometimes not resize when you went back to normal window mode! Also, scrolling the mouse over the taglist hover window should no longer ever cause a 'previous/next media' event. And I have implemented a 'short and simple' version of the video/audio scanbar to show (instead of completely hiding it) when your mouse is away--just a few pixels to show things 'at a glance'. Even though it covers a few pixels of video at the bottom, I liked this so much that I set it as the default for all users. If you don't like it, you can hide it again with the new setting under options->media. full list - misc: - fixed the simple delete files dialog for trashed files. due to a logical oversight, the simple version was not testing 'trashed' status and so didn't see anything to permanently delete and would immediately dump out. now it shows the option for trashed files again, and if the selection includes trash and non-trash, it shows multiple options - fixed an error in the 'show next pair' logic of the new duplicate filter queue that occurred when it needed to auto-skip through the end of the current batch and load up the next batch (issues #1139, #1143) - a new setting on _options->media_ now lets you set the scanbar to be small and simple instead of hidden when the mouse is moved away.
I liked this so much personally that it is now the default for all users. try it out! - the media viewer's taglist hover window will now never send a mouse wheel event up to the media viewer canvas (so scrolling the tags won't accidentally do previous/next if you hit the end of the list scrollbar) - I think I have fixed the bug where going from borderless fullscreen to a regular window in the media viewer would not trigger a media container resize if the media perfectly fitted the ratio of the fullscreen monitor! - the system tray icon now has minimise/restore entries - to reduce confusion, when a content parser vetoes, it now prepends the file import 'note' with 'veto: ' - the 'clear service info cache' job under _database->regenerate_ is renamed to 'service info numbers' and now has a service selector so you can, let's say, regen your miscounted 'number of files in trash' count without triggering a complete recount of every single mapping on the PTR the next time you open review services - hydrus now recognises most (and maybe all) windows executables so it can discard them from imports confidently. a user discovered an interesting exe with embedded audio that ffmpeg was seeing as an mp3--this no longer occurs - the 'edit string conversion step' dialog now saves a new default (which is used on 'add' events) every time you ok it. 'append extra text' is no longer the universal default! - the 'edit tag rule' dialog in the parsing system now starts with the tag name field focused - updated 'getting started/installing' help to talk more about mpv on Linux. the 'libgmodule' problem _seems_ to have a solid fix now, which is properly written out there. thanks to the users who figured all this out and provided feedback - . - multiple local file services: - the media viewer menu now offers add/move actions just like the thumb grid - added a new shortcut action that lets you specify add/move jobs. it is available in the media shortcut set and will work in the thumbnail grid and the media viewer - add/move is now nicer in edge cases. files are filtered better to ensure only local media files end up in a job (e.g. if you were to try to move files out of the repository update domain using a shortcut), and 'add' commands from trashed files are naturally and silently converted to a pure undelete - . - boring code cleanup: - refactored the UI side of multiple local file services add/move commands. various functions to select, filter, and question the user on actions are now pulled to a separate simple module where other parts of the UI can also access them, and there is now just one isolated pipeline for file service add/move content updates - if a 'move' job is started without a source service and multiple services could apply, the main routine will now ask the user which to use, with a selector that shows how many files each choice will affect - also rewrote the add/move menu population code, fixed a couple little issues, and refactored it to a module the media viewer canvas can use
- wrote a new menu builder that can place a list of items either as a single item (if the list is length 1) or as a submenu if there are more. it drives the new add/move commands and now also the behind-the-scenes of all other service-based menu population next week Next week is a cleanup week, so I will do some boring code cleanup and see if I can write some nice introductory help for the multiple local file services system. I have four more weeks before my vacation, so I am aiming to have the big work of multiple local file services finished by then.
>>17680 >>17673 Nice, the scan bar is far more visible than I thought it might be. I think other colors could possibly help legibility further, but for me it's just fine as is.
ok h dev, probably my last question for a while. I have so far parsed through about 5000-10000 "must be pixel dups", and I have yet to find one where I decided 'let's keep the one with the larger file size'. I have decided, at least for exact dupes, I'm willing to trust the program's judgement. is there any automation in the program for these yet? from what I can see, a few of my subscriptions are generating a hell of a lot of these, and even then I had another 50000 to go through. if there is a way to just keep the smaller file and yeet the larger, with the same settings I have assigned to 'this is better', that would be amazing. I don't recall if anything has been added to hydrus for this yet. I would never trust this for any speculative match, as I constantly get dups that require hand parsing with those, but holy shit is it mind numbing to go through pixel dups... scratch that--when I have all files, I have 325k "must be pixel dups" (2 million something potential dups, so this isn't a case of the program lagging behind on options)
(34.93 KB 1920x1080 help.png)

Can't seem to do anything with these files. I can't delete them, and setting a job to remove missing or invalid files doesn't touch them. They don't have URLs so I can't easily redownload them either. What do?
>>17683 Note, they do have tags, sha256sums, and file IDs, but nothing else as far as I can tell. If I manage to redownload one by manually searching based off the tags, it appears and can be deleted. Maybe I could do some sqlite magic and remove the records via the file IDs using the command line, but I don't know how. The weird thing is how they appear in searches. They don't show up when I search only system:everything, but they do show up when searching for tags that the missing file is tagged with. I tried adding a dummy tag to all of my working files and searching with -dummy, and the missing files didn't show up. If I search some tag that matches a missing file and use -dummy, the missing files that are tagged with whatever other tag I used to search do show up. Luckily, all of these files had a tag in common, so I can easily make a page with all of the missing files, 498 total. I can open the tag editor for these, and adding tags works, but I cannot search for tags that only exist on missing files (I tried adding a 'missing file' tag; I can't search it). Nothing interesting in the logs, unless I try to access one, which either gives KeyError 101 or a generic missing file popup. Hydev, if you're interested in a copy of my database folder, I could remove most of the large working files and upload a copy somewhere if you want to mess with it. I'm open to trying whatever you want me to if that's more convenient though.
Got this error after updating. (I def jumped multiple versions, not sure how many.) Manually checking my files, it seems that all of them are fine. It's just that hydrus can't seem to make sense of them for some reason...? FYI my files are on a separate hdd and my hydrus installation is on an ssd. Neither is on the same drive as my OS.
>>17668 Thanks. I agree. I figured out the move/add internal application commands for 484, so they are ready to be integrated. 'Delete with reason x' will need a bit of extra work, but I can figure it out, and then I will have a think about how to integrate it into archive/delete and what the edit UI of this sort of thing looks like. Ideally, although I doubt I will have time, it would be really nice to have multiple archive/delete filters. >>17669 Yeah, this sounds tricky. Although it is complex, I think your best bet might be to personally duplicate and then edit the redirection scripts or tag parsers involved here. You may be able to edit the hydrus pixiv parser to grab the english tags (I know we used to have this option as an alternate parser, but I guess it isn't available any more? maybe pixiv changed how this worked?), or change whatever is parsing SauceNao, although I guess that is part of Hydrus Companion. EDIT: Actually, if your only solid problem with pixiv is you don't want its japanese tags, hit up network->downloaders->manage default tag import options, scroll down to 'pixiv file page api' and 'pixiv manga_big page' and set specific defaults there that grab no tags. Any hydrus import page that has its tag import options set to 'use the current defaults' will then default to those, and not grab any tags. >>17670 Thank you! >>17672 Thanks. I'll make a job to expose this data on the Client API.
>>17673 >>17681 I'm glad. I am enjoying it too in my IRL use. I thought it would be super annoying, but after a bit of use, it just blends into my view and is almost unconsciously useful. Just FYI: The options are an ugly debug/prototype, but you can edit the scanbar colours now. Hit up install_dir/static/qss and duplicate 'default_hydrus.qss'. Then edit your duplicate so the 'qproperty' values under HydrusAnimationBar have different hex colour values. Load up the client, switch your style to your duplicated qss file, and the scanbar should change colour! If you already use a QSS style, then you'll want to copy the custom HydrusAnimationBar section to a duplicate of the QSS style file you use and edit that. >>17674 Thank you, I will investigate this. I was actually going to try exposing all the modified timestamps on the Client API and the client, not just the aggregate value, so I will do this too, and that will help to figure out what is going on here. >>17679 I would like to do this. It can sometimes be tricky, but that's ok--the main problem is I have a lot of really ugly UI code behind the scenes that I need to clean up before I can sanely extend these systems, and then when I extend them I will also have to update the UI to support more view types. It will come, but it will have to wait for several rounds of code cleaning all across the program before I dive properly back in here. Please keep reminding me. Sorting pages themselves should be easier. You can already do a-z name and num_files, so adding total_filesize should be ok to do. I'll make a job. >>17682 Thanks. There is no automation yet, but this will be the first (optional) automated module I add to the duplicate filter, and I strongly expect to have it done this year. I will make sure it is configurable so you can choose to always get rid of the larger. Ideally, this will process duplicates immediately upon detection, so the client will negotiate it and actually delete the 'worse' file as soon as file imports happen.
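For anyone following along, the HydrusAnimationBar section mentioned above looks something like this--the property names here are from memory, so check your own copy of default_hydrus.qss for the real defaults:

HydrusAnimationBar
{
    qproperty-hab_border: #000000;
    qproperty-hab_background: #f2f2f2;
    qproperty-hab_nub: #101010;
}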
>>17683 >>17684 Thanks, this is odd, but it may be completely explainable. Can you check the 'file domain' (left button) of the tag autocomplete dropdown of those search pages? Does it say 'my files' or 'all known files'? Given you can re-download these, it sounds like these are previously deleted files. If you right-click on one and hit the top item so it expands out to all the info lines and timestamps, does it say something like 'deleted from my files 3 months ago' or similar? I'm actually going to write about this a bit more this week as I do the multiple local file services help, but hydrus doesn't technically care if a file is in a domain or not--as long as the client has once heard of its hash, it can add tags or ratings or urls to it. This is the core of how the PTR works. If a file is in the client, then it can draw a thumbnail; otherwise, it draws that default hydrus icon and a red border. Normally, you never see these 'non-local' files, since when you search, you are limited to the 'my files' domain, so you filter hydrus's knowledge down to only the files you have on disk, but if your file domain on that search page is 'all known files' or another advanced search, then they may have been exposed. If you see these on 'my files' or 'trash' or 'all local files', then something is definitely going wrong. >>17685 I am very sorry, this error means it is extremely likely that you have had some hard drive damage and your database files (on your SSD) have been damaged. Sometimes these errors are severe (hard drive dying), but often they are trivial (just a bit of extra junk data after a rough powercut). It may be that the update routine walked over a damaged area and set a flag. Your next step is to check "install_dir/db/help my db is broke.txt". This document will talk all about it and your next steps to ensure your data is safe and start recovery. Normally this error would point you to that file, but it seems to have happened at an inconvenient moment for you and the error handling isn't clever enough to figure it out. Let me know if you need any help--I'm happy to go one on one to help fix or recover from anything serious.
>>17688 Missing files anon here, it said "my files". I should have mentioned this in my first post, but I had to restore my database from a backup a while back and these first appeared then. I'm assuming they were in the database when I backed it up, but had been deleted in between making the backup and restoring it. I fucked around with file maintenance jobs and managed to fix it. It didn't work the first time because "all media files" and/or "system:everything" wasn't matching the missing files. The files did all have a tag in common that I didn't care to remove from my working files, and for some reason this tag would match the missing files when searched for. I ran the maintenance search on that tag and did the job, and now they're gone.
>>17688 >>17689 Actually, scratch that. The job was able to match the files and reported them as missing, put their sha256sums into a file in the database folder, and made them vanish from the page that had the tag searched, but refreshing it shows that they weren't actually removed and I still encounter them when searching for other tags. Not sure what to do now.
Hello. Is there a way to make sure that, when scraping tags, the images that were deleted aren't going to be downloaded again?
Can someone help me? Since the last 3 releases, Hydrus has been pretty much unusable for me. After having it open for a while, it ends up (not responding), and it can stay that way for hours or until I force close it. I asked on the discord but no one has replied to me (I can't complain tho, they have helped me a lot in the past). I have a pretty decent PC: R7 1700, 32GB of RAM, and I have the main files on an NVMe drive and the rest on a 4TB HDD. Please help, I haven't been able to use Hydrus for almost a month.
Trying to download my Pixiv bookmarks, but every time I enter the url "https://www.pixiv.net/en/users/numbers/bookmarks/artworks" I get an error saying "The parser found nothing in the document". I'm only trying to grab public bookmarks, and I've got Hydrus Companion set up with the API key. Not sure what I'm doing wrong, unless there's some alternate URL I'm supposed to use for bookmarks.
could you change the behavior of importing siblings from a text file so that, if a pair would create a loop with siblings you already have, it just asks if you want to replace the existing pairs that would be part of the loop with the ones from the file? The way it works now, there's no way to replace those siblings with the ones from the file except manually going through each one yourself, but that defeats the purpose of importing from a file. This would be an exception to clicking "only add pairs, don't remove", but that's okay because the dialog window would ask you first. As it is right now, the feature is unfortunately useless for my purposes, which is a shame because I thought I had finally found a solution for a sibling issue I've been having for a while. A real bummer.
I had a good simple week. I cleaned some code, improved some quality of life, and made multiple local file services ready for all users. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=AKgjOCuW_MU windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Linux.-.Executable.tar.gz I had a good week. The multiple local file services system is now ready for all users. multiple local file services I have written some proper help for this new system to talk about what it is and how to use it. The basic idea is you can now have more than one 'my files', which lets you compartmentalise things for privacy or workflow reasons. The help is here: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html All users can try this out--you no longer have to be in advanced mode--but in terms of experience level, I recommend it to people who are at least comfortable with tag siblings and parents. This system is fundamentally feature complete. The outstanding immediate problems are that file location doesn't show up in the UI very well yet, the Client API should plug into it better, and it needs some en masse controls to do large file migrations and client merging. I hope to work on these in the coming weeks. If you give this a go, let me know what you think! full list - multiple local file services: - multiple local file services are now available for everyone! you no longer need to be in advanced mode to create them. all are welcome, but in terms of skill level, I most recommend it for users who are comfortable with tag siblings and parents - the tl;dr: you can now have more than one 'my files', which lets you put things in isolated locations - I wrote a proper help document on multiple local file services--what they are, how they work, my recommendations, and a bit of extra info about hydrus file search in general--right here: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html - file searches in 'multiple locations' on large clients are now massively faster in almost all situations. the only place multiple location searches are still slow is wherever the duplicates system (system:file relationships) comes into play - . - misc: - in the page tab menu, you can now sort pages by total file size - the 'force system:limit for all searches' option is moved from the 'speed and memory' panel to the 'search' panel - when files download from sites, if the raw file is served by cloudflare and has a timestamp radically different to a parsed source time, that CF timestamp is saved under a different domain rather than overwriting the original domain timestamp. this seemed to affect danbooru on about 1 in 10-20 files. note this does not change much at the moment, but when you can see and sort on individual domain modified dates, this should improve the sort - updated the 'installing' help to talk about bad install locations for the database. network locations are bad, and thanks to user reports, we now know USB drives can be bad if the database is busy when the OS goes to sleep - if a 'database is malformed' error occurs on boot, the client now recognises it and points the user to 'install_dir/db/help my db is broke.txt' for the next steps - .
- boring code cleanup: - another 60KB or so of code pulled out of ClientDB.py: - created a new database module for url mappings and refactored various fetch and update routines to it - created a new database module for some rich file metadata and refactored some file filtering, history, and status testing code to it - created a new database module for file searching and moved all tag-based file searching code to it - moved several other misc methods down to database modules next week I am behind on my github bug reports and lots of other small work, so I will chip away at these. Thanks everyone!
I'm pretty new to using this, but is there a way to tag a file with a 'gang of niggers' tag without including its parent tags?
I'm looking to use an android app (or equivalent) that lets me manage (archive/delete) my collection hosted on a computer within a local network, so that, say, if I had no internet I could still use it. Is this a thing? Is there a program that will do this? The available apps out there are a bit confusing as to what their limitations or features are.
Is it possible to download pics from Yandex Images with Hydrus, or can someone suggest a good program that can? Thanks.
is there a setting to make it so hydrus adds filenames as tags by default, such as when importing local files?
>>17691 Isn't that the default behavior of downloaders? Make sure "exclude previously deleted files" is checked. Or are you trying to add tags to files you've already deleted without redownloading them? I don't know if you can do that. >>17697 If you want to give something a tag without including its parent tags, it sounds like that tag shouldn't have those parent tags in the first place. >>17700 Import folders can do that. You can just have a folder somewhere that you can dump files in, and you can set hydrus to periodically check it and do things like add the filename or directory as tags.
Is there a way to download tags and other things from a parser even if the parser can't find a file to download? There are a bunch of images on e621 that I downloaded a long time ago but I didn't download the tags. Since then the artist has had almost all their images taken off of e621. Even though the images have been down, the tags are still there. Example: https://e621.net/posts/1292060 The images have the e621 url in their known urls, but if I try to download the url with hydrus it just says that it can't find anything to download. Even if "force file downloading even if url recognised" is unchecked, it won't add the tags to the file already in the db. Maybe this could be a file import option. Call it "if post url recognised, ignore failure to find file" or something.
>>17688 The cloning process seems to have worked, in the sense that the integrity checks now pass. However, now I get this message when I boot up hydrus. Is it safe to proceed, or am I in deeper shit?
>>17689 >>17690 Thank you, this is odd. It feels like your different file services have somehow become desynced, so 'my files' has a different file list to 'all local files'. Like with 'all media files' not grabbing the orphan file records. If you make sure help->advanced mode is on, and then change the file domain from 'my files' to 'all local files', do the ghost files still show up? If not, that suggests yes there is a desync here. There is a special command for this, but it is old and I don't know how well it works in the new multiple local file service era. Please make a backup before you try this, in case it goes wrong. Then give database->db maintenance->clear orphan file records a go. It should give you some info. >>17691 >>17701 Yeah, this is default. The option is under the file import options button of any downloader. Defaults for these options are under options->importing. >>17692 When you run the program, can you check your install_dir/db folder for me? Do the different temporary .db-wal files grow very large, like 800MB+? I am chasing down a bug related to this that sounds a bit like your problem. Otherwise, please bear with the lag for a bit and hit up help->debug->profiling. There is a 'what is this?' menu entry there that explains how it works. pastebin or email me the profile log and I will see what is running so slow for you. Quick things to try: 1) if you have hundreds of pages or hundreds of download queries, reduce the size of your session 2) pause tag sibling/parent background sync maintenance under tags->sibling/parent sync.
>>17693 I am not a pixiv user IRL so I can't talk too intelligently, but hydrus is only set up to parse certain URLs. Typically that is stuff like an artist's gallery homepage, like this: https://www.pixiv.net/en/users/67138065 That URL you posted, is that your favourites on Pixiv? Hydrus would have to be taught how to parse your favourites, which I don't think it does by default. The community repository has some downloaders here that look good: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Pixiv So, if you download that newer bookmarks png: https://raw.githubusercontent.com/CuddleBear92/Hydrus-Presets-and-Scripts/master/Downloaders/Pixiv/pixiv%20bookmarks%20-%202020-11-23.png And import it via network->downloaders->import downloaders (drag and drop it on Lain), maybe it will work? Sorry if I can't help more. >>17694 Sure, thank you. I'll figure out some yes/no dialogs to change the import behaviour to a sort of 'overwrite'. >>17697 >>17701 Yeah, parents are not optional. They are supposed to apply to definitional relationships, like a 'car' is always a 'vehicle'. If you really hate the parents that, say, the PTR gives, you can change what applies where under tags->manage where tag siblings and parents apply.
>>17698 They are mostly under development right now. Some are better than others. Actual 'management' is limited--mostly they do read-only search atm--but the tools will expand in future. I assume you have been here to see the list, but if not: https://hydrusnetwork.github.io/hydrus/client_api.html#browsers_and_tools_created_by_hydrus_users Hydrus Web is your best bet if you are looking for a booru-style interface. Normally you use a site to load the interface, but if you want a local network solution, you can spin up a Docker instance, if you have that support. An alternative--this sounds stupid, but I know a few guys who do it to great effect--is to just run a VNC app through your tablet, maybe with a hotkey overlay set up for your hydrus shortcuts, and then just tap to go through your archive/delete filter on the couch. Since you are on a local network, you have all the bandwidth you need for smooth VNC. >>17699 Not by default, and I'm afraid I don't see a user-made downloader at the community repository here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders I am mostly confident that hydrus could be taught to download from yandex images, but actually learning how to do that takes some time. You might like to play with hydrus's 'simple downloader'; maybe one of the default formulae in there can grab image links or something and get what you want. Or, if you are very nice to a hydrus user who knows how to make downloaders, you might be able to get them to make one for you. >>17702 There might be a way to bodge this, like if you grabbed the hash in the parser and hydrus never noticed it was missing a direct file URL, but I think there are too many weird hurdles to overcome and it would just fail somewhere. It is a long term project of the program to have efficient hash->tag lookup maintenance, so I do plan to have official support for this some time in the future with a future iteration of the whole parsing and lookup and maintenance systems. For now, your best bet is the Client API. Grab whatever kind of hash and tag info you like in your own script, and then throw it at the Client API. https://hydrusnetwork.github.io/hydrus/client_api.html >>17703 This looks good to me! The clone has removed all the damaged data, which seems to include some tag count tables. This is good news, because all this data can be regenerated, and it even seems that I wrote some special repair code to fix it automatically. With luck, the worst damage here is the annoyance of waiting for things to fix themselves. Click ok, let it do its work, and have a browse around. There may be more warning popup windows like this. Other data may be missing (e.g. not a whole missing table, which is easy to spot, but a table now missing half its contents from the clone), but if it is all limited to client.caches.db, you are in luck, because all that can be regenerated. Let me know if you notice any whack counts or bad searches once you are working again and I can help you figure out which of the guys under database->regenerate you should run. (NOTE: do not run any of those unless you know you need them; some of them take ages.)
>>17704 It seems I already had "all local files" on, but changing it back to just "my files" seems to have no effect. I tried "clear orphan file records" and it nearly instantly completed without finding any.
>>17706
>For now, your best bet is the Client API
Managed to figure it out, thanks. I used gallery-dl to download the metadata for all the files, gathered the md5 and tags from the metadata, searched up the md5 in the API and got the sha256, then added the tags to the sha256.
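The tag-adding step at the end boils down to something like this (a sketch--default port, placeholder access key, and the parameter names may differ by API version, so check the docs for yours):

import requests

API = 'http://127.0.0.1:45869'
HEADERS = { 'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY' }  # placeholder

def add_tags( sha256_hash, tags ):
    # attach tags to a file the client already knows about, by sha256 hash
    body = {
        'hash': sha256_hash,
        'service_names_to_tags': { 'my tags': tags },
    }
    r = requests.post( f'{API}/add_tags/add_tags', json = body, headers = HEADERS )
    r.raise_for_status()

# e.g. add_tags( 'abc123...', [ 'creator:someone' ] )  # placeholder hash/tag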
Hi, I didn't use Hydrus (Linux version) for three months, and after updating to the latest version I noticed the following: when you start a selection in the file manager (e.g. press shift and repeatedly press → to select multiple files), the image preview freezes at the start of the selection, but the tag list reflects your movements. The old behavior was that both the preview and the tag list changed synchronously.
>>17698 >>17706 Okay, thanks for the response. When the development is finished, I assume there will be an announcement. I had considered the VNC option. I'm not sure who's developing the app, whether it's you or someone else, but do you know if it will be like a remote control of hydrus on a host computer, a kind of port of the existing hydrus, or have the functionality of both options? I'm also curious about an approximate timeframe.
>>17693 I got it to work via a URL like this through the Hydrus url import page: https://www.pixiv.net/ajax/user/YOURPIXIVID/illusts/bookmarks?lang=en&limit=48&offset=96&rest=show&tag= I didn't try to change the limit key (was afraid of a ban), so the whole process was page by page--increasing the offset by 48 with every URL input.
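If you have a lot of pages, a few lines of python will spit out the whole list to paste into the url import page in one go--a sketch, with YOURPIXIVID and the page count as placeholders, keeping the same limit of 48 to be safe:

# generate the bookmark page urls, stepping the offset by 48 per page
user_id = 'YOURPIXIVID'  # placeholder
num_pages = 10  # placeholder--however many bookmark pages you have

for page in range( num_pages ):
    offset = page * 48
    print( f'https://www.pixiv.net/ajax/user/{user_id}/illusts/bookmarks?lang=en&limit=48&offset={offset}&rest=show&tag=' )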
>>17712 update: Hydrus finally booted, thank god, but it's completely empty. All the files are still on my HDD (I can check), hydrus just seems to have forgotten about them. I suspect it might have also forgotten pretty much all my other settings as well, such as my thumbnail and file drive locations (thumbnails on ssd, files on hdd, originally, as suggested).
>>17713 Would I be able to do a "restore from a database backup", select my old, now seemingly "unlinked"/"forgotten" db, and proceed?
The release will only be recommended for advanced users! Regular users please check back next week. I had a great week. I fixed several things, improved some quality of life, and added a new service to the database to make managing multiple local file services a bit easier. The release should be as normal tomorrow. >>17712 >>17713 >>17714 Damn, this is not good. Your options structure has, yeah, been damaged, which means that client.db was affected too. Did you get lots and lots more 'this module was missing some tables' warnings? If your client sees no files, then it sounds like your core file table was damaged as well. This sounds stupid, but please check file->open->database location to make sure the client is pointing at the right location. In the off-chance that somehow your db folder has been set to read-only due to drive damage, it might redirect to a different location and would appear to be a brand new database. EDIT: There is an odd thing here I can't explain--your options structure was destroyed, and presumably the database made a fresh one. If this is true, it should not have a database backup location stored. If you made a backup previously, I think hitting 'restore from a database backup' is the correct answer here. Since everything is very damaged, I would not do this in the client, but externally, and make sure you keep everything. Something like this:
- Go to install_dir/db
- Move the damaged client*.db files somewhere safe.
- Go to your db_backup folder (this used to be something like install_dir/db/db_backup, but it could be somewhere else. Search your system for "client.master.db" if you aren't sure where)
- Copy the client*.db files from the backup folder to your install db folder.
- Try to boot
Make sure you don't delete anything, and make sure your temporary folders are labelled so you don't lose track of anything. I am not sure what has happened. You seem to have had some really bad database damage, and this may ultimately need some more focused back and forth. Let me know how you get on, and if you like, please email me or DM me on discord and we can get into it more closely. You've been reading 'help my db is broke.txt', but I'll just reiterate--please make sure your SSD is healthy, in case there are ongoing read errors here or something.
>>17715 Alright, lemme give just a little more context to the current state of things then. This is how my setup [b]used[/b] to be set up:
client.exe in (SSD) E:\Hydrus Network\
thumbnails in (SSD) E:\Hydrus Network\thumbnails\
files in (HDD) F:\Hydrus Network Files\files\ (from f00 to fff)
After this whole fuckery happened, I manually checked, and all files remained in their place and continue to be fully intact and viewable from the file explorer, and also able to be opened and viewed without a fuss. Coming home from work, I checked, and it seems my suspicions were right. All my settings were reset to default, including the default file locations, so, for example, were I to save a picture from 8chan, it would by default put it in: E:\Hydrus Network\db\client_files\ There are currently no files actually saved in this location. It's empty. To clarify, I didn't "create a backup" before this, but since my previous files in (F:) still remain there completely fine and viewable, I was wondering if I could simply instruct hydrus to "look here for pictures", basically. At this point I don't care about tags, watches, and all that stuff--I'm just glad my files are safe, and I want to get hydrus back into a shape where it's usable for me.
>>17716 PS.: It's as if hydrus had uninstalled then reinstalled itself. Quite bizarre...
>>17716 >>17717 Yeah, this is very odd. If you had not posted about the malformed errors and the problem loading the serialised options object, I would have guessed that your database files had been accidentally deleted. If the client boots with no 'client.db' file in the db directory, it assumes this is first start and creates a fresh one. That would give the symptoms of resetting your file locations back to install_dir/db/client_files. I am sorry to say I think your client.db probably was eviscerated in some way, almost certainly a very bad hard drive event, or something external--like a crazy anti-virus program, or it might be a cloud backup process--removed or broke the file. In any case, I am sad to say I think your best bet is to move everything in your 'db' folder to a safe location and start again. The current database is either damaged or strange and can't really be trusted going forward. Make a new database and import the files in F:\Hydrus Network Files\files\ in batches. You can't go 'just look here and get the files', unfortunately, but you can import them manually no problem. If there are things like inbox status you want to try to save from the old database, I can help with that, but it will require some time and complicated manual SQL to do. Let me know what you miss. This situation sucks, but if your files are safe, that's great. Once you are feeling better about your situation, please check out how to maintain a backup of your client: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
https://www.youtube.com/watch?v=ZUrcYKghr-Y

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Linux.-.Executable.tar.gz

I had a great week working on a variety of smaller issues and some important database updates. The release this week is only recommended for advanced users. I make an important change, and I want to make sure the update works quickly and without problems before I roll it out to everyone. If you are not an advanced user, please check back in next week! The update will also take a few minutes this week.

all my files

So, I have made a new virtual service, 'all my files', which covers the union of all your local file services. This service is very similar to 'all local files', but it does not include trash or repository files. It provides a bunch of tools across the program for quick and precise searching of all the files that have value and are worth looking at.

When you update, this new service will be created and populated. It will take a few minutes, longer if you have millions of files and tags. My 2.8-million-file ptr-syncing client took 32 minutes. There are progress updates on the splash window. Once you are booted, you will see 'all my files' in review services and the file domain selector if you have more than one local file domain. Feel free to play around with it--it will run a lot faster than previously going 'multiple locations' and unioning all your local file services.

The code is working really well on my end, and I am not afraid of anything being damaged, but if something goes wrong, it may require some clever/slow regeneration to fix. The main things I would like to know are:

1) Did your update take significantly longer than ~100k files/minute? Did it get held up on anything?
2) After some use, have you noticed any file/tag miscounting with 'all my files'?

As always, make a backup before you update.

other highlights

The 'media viewers' shortcut set has three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max'.

When you enter pairs in the tag sibling dialog, it shouldn't complain about loops anymore, but instead automatically break them, just like how it will auto-petition an A->B, A->C conflict.

The database now cleans up after itself more thoroughly. Some users have been having trouble with very large 'WAL' files, some getting to be multiple GB, and perhaps seeing bloated memory use along with it. A set of new maintenance routines now force write-flushing at regular intervals. In my testing, there is no lag related to this, but I will be interested to hear if anyone gets new commit hang-ups during very heavy work. If you have had a huge WAL, let me know if this helps!

full list

- This week's release is for advanced users only! I make a big change, and I want to make sure the update is fast and there are no unusual problems before rolling it out to all users.
- all my files:
- the client adds a new virtual file service this week, 'all my files', which is an umbrella covering all your local file domains. if you do not engage the multiple local file services system, you won't see it much, but if you do, you'll now have a convenient tool for saying 'all my stuff' without including trash and repository updates
- it will take a minute or two to generate this new service on update. if you have a client with millions of files, it may take a while
- 'all my files' now appears in the file domain selector button on your tag entry box if you have more than one local file domain. selecting this searches the union of all your local file domains with fast and precise count (as opposed to 'multiple locations' of the full union, which will have imprecise counts and be slower). it also does duplicate file work laser-fast (again, unlike 'multiple locations', which is often slow due to UNION complexity)
- 'all my files' also appears in review and manage services, very similarly to 'all local files'
- a heap of hacks I instituted when getting multiple local file services ready are now replaced with this clean 'yeah this file is valued and worth looking at' domain. for instance, downloader pages now view files in this way.
- mr bones and the file history chart also use 'all my files', and are significantly faster to calculate. the chart also excludes repo update files and trash now
- calls to delete or undelete on 'all my files' (this is mostly Client API and some 'default' situations) will be converted to a blanket 'force send to trash' and 'force undelete all deleted records'
- the 'undelete files?' dialog is now a button selection dialog. it also now has an 'all the above' option when more than one local service may apply, which tells the client to undelete to all services the files have been deleted from
- updated multiple local file services help to talk a little about the new domain
- rearranged the sort in a couple of places where the different local file services appear. they should now be: local file domains, all my files, trash, repo updates, all local files
- ADVANCED: the 'presentation import options' under 'file import options' now allows a full-fledged location context using the new multiple local file services system rather than the previous 'in your files (and trash too)' choice. it defaults to the new 'all my files' domain
- misc:
- thanks to a user, the 'getting started with downloading' help has had a full pass. if you have had trouble with downloaders, particularly if you are unsure about what file import options are for, or what subscriptions are, please check it out!
- the 'media viewers' shortcut set gets three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max' (issue #1141)
- if a media type is set to do 'exact zooms', it will now not exceed the otherwise specified max zoom
- the file sort widget will now preserve ascending/descending status on sort type changes (rather than resetting to default) if the asc/desc strings do not change. so, if you are on 'import time'/'oldest first', and switch to 'archive time', it will now stay on 'oldest' rather than resetting to 'newest'
- the manage tag siblings dialog now tries to automatically break loops for you, just like it will automatically break A->B, A->C conflicts. this works on manual entry or mass import
- the manage tag siblings dialog now shows the stated 'reason' for any pair change (e.g. "AUTO-PETITION TO BREAK LOOP") in the 'note' column
- the 'short' animation scanbar--when your mouse is away--now keeps a short disabled volume button beside it. I found it very annoying how the scan nub would jump a few pixels left/right as this popped up and down, so now it is the same width big and small
- right-clicking on files when in pages with 'multiple locations' file domains is now much much faster
- the filename tagging dialog now starts with the 'tags for all' focused, and the 'press up/down on empty input' shortcuts are now plugged in, so pressing up/down will change service
- I believe I may have completely eliminated the additional superlag that sometimes occurs when adding or deleting a service. it was a database maintenance routine getting carried away with other outstanding work
- move/add actions in the new multiple local file system now operate asynchronously and politely, spreading their work time out when the client is busy, and for large jobs they will also make a cancellable progress popup
- cleaned up how the autocomplete entry sends some of its signals to other parts of the program
- did some misc help and code edits/refactoring, including brushing up the Windows install section with more advanced options
- removed the 'hydrus zooms big bad' warning from the 'media' options page. hydrus zooms big good now!
- .
- some database stuff:
- tl;dr: database cleans up after itself better now
- some users have had trouble with database journal files (the 'wal' files in your db directory) on certain clients getting huge after lots of work, multiple GB, and causing the OS a headache if the journal is doing work through a computer sleep. these journals are 'supposed' to checkpoint and clean themselves up naturally, but I think a busy database chokes them. therefore, I have improved the hydrus maintenance this week: 1) the 'journal size limit' PRAGMA, which applies softly after every 30 seconds or so, is now 128MB, down from 1GB. 2) databases in PERSIST (rare) mode will now specifically zero out their journal every fifteen minutes. 3) databases in WAL mode (the default), in addition to regular PASSIVE checkpointing now every five minutes, will force an additional TRUNCATE checkpoint every fifteen. this should force a regular full flush and maybe help some other problems, like the gigantic memory bloat the same users sometimes saw. if you are a very advanced user and do active debug on the database while hydrus is using it, please note this new TRUNCATE command is aggressive and may block itself or you inconveniently. let me know how you get on!
- moved the recent 'be careful of usb drives' section in 'installing' help to 'help my db is broke.txt'. it is very likely this problem was related to the above WAL stuff, and it was not just usb drives, so I rewrote it as generalised help for anyone who gets 'delayed write failed' errors at the OS level
- massively optimised several critical duplicate files filtering methods if the current location context has more than one file domain, and I think I cleared out the basic 'get duplicate info for this file' call of all slow calls in complex location contexts
- the repair routine that regenerates mapping caches if any tables are missing on boot is now more reliable and covers the entirety of the mappings cache system using the new modules system. it also now regenerates just for the tag services with missing tables, not the whole cache
- if multiple types of mapping cache tables are missing on boot, and multiple waves of regenerations covering different areas are planned, duplicate regenerations will now be skipped

next week

Beyond some more multiple local file services work--probably client api updates--next week is a 'medium size' job week. I want to plough some time into better en masse import/export tools for tags and other metadata. I'm not sure how far I will get, but I want a framework sketched out so I can start hanging things off it in future.
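For anyone curious what the checkpointing described above looks like at the SQLite level, here is a minimal sketch using Python's sqlite3--the intervals are simplified and this is not hydrus's actual code, just the named PRAGMAs in isolation:

import sqlite3

con = sqlite3.connect('client.db')

# soft cap on the journal file size (bytes), applied shortly after commits
con.execute('PRAGMA journal_size_limit = 134217728;')  # 128MB

# the polite option: checkpoint what you can without blocking writers
con.execute('PRAGMA wal_checkpoint(PASSIVE);')

# the aggressive option: flush everything and truncate the WAL back to zero bytes
con.execute('PRAGMA wal_checkpoint(TRUNCATE);')

con.close()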
Can Hydrus add support for WavPack (.wv) audio files, even just for storage rather than playback? It would be a good addition to the already supported .flac and .tta.
down the line this will probably be obsolete, but before then it would help quite a bit. with duplicates, when they are pixel matches, is there a way to set the lower file size one to be green and the bigger one to be red? it's already this way with jpeg vs png pairs, but same-vs-same just has both as blue, and with pixel duplicates there would never be a reason to choose the larger file size. for me I want the duplicate deciding process to be as speedy as possible, at least with these exact duplicate ones. I have been watching things while doing this, however--and this may be my monitor--unless I'm staring straight at the numbers, they kind of blend, making digits like 56890 all look alike, requiring me to sit up and look at the screen straight on. I think if the lower number was green on exact dupes, it would speed the process up significantly, at least until an auto discard for exact dupes (which hopefully takes the smaller file size as the better of the pair) gets implemented and we no longer have to deal with exacts. I don't know if this would be simple to implement, but if it is, it would be much appreciated.
I'm trying to download a thread from archived.moe and archiveofsins.com, but it keeps giving errors with a watcher and keeps failing with a simple downloader. It seems like manually clicking on the page somehow redirects to a different link than the one hydrus gets.
>>17606
>In terms of metadata, hydrus keeps all other metadata it knows about the file.
If there is no URL data (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? Also, what does telling Hydrus to forget previously deleted files actually remove if it still keeps the files' hashes? I don't feel comfortable (or desperate) enough to use the method you gave, but I also don't want to go through the trouble of exporting all my files, deleting the database, reinstalling Hydrus, and then importing and tagging the files all over again.
My autocompleted tag list displays proper tag counts, but when I search them I get dramatically fewer images. I can still find these images in the database through system:* searches, and they're still properly tagged. My tag siblings and parents aren't working for some tags either. But all the database integrity checks say everything is okay. What's my next step?
Still getting some errors in the duplicate filter, I think it has something to do with when I'm choosing to delete images

v485, win32, frozen
IndexError
list index out of range

File "hydrus\client\gui\ClientGUIShortcuts.py", line 1223, in eventFilter
shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3598, in ProcessApplicationCommand
command_processed = CanvasWithHovers.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2776, in ProcessApplicationCommand
command_processed = CanvasWithDetails.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1581, in ProcessApplicationCommand
self._Delete()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2928, in _Delete
self._SkipPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3488, in _SkipPair
self._ShowNextPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3442, in _ShowNextPair
while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):
>>17707 I have had a report from another user about a situation a bit similar to yours, related to the file service that holds repository update files. I am going to investigate it this week, so please check the changelog for 487. I can't promise anything, but I may discover a bug where some files aren't being cleanly removed from services at times, and have a fix.

>>17709 Yes, hit up options->gui pages and check the new preview-click focus options. Note that shift-click is a bit more clever now, too--if you go backwards, you can 'rewind' the selection.

>>17710 Yeah, I like to highlight neat new apps in the release posts or changelogs. I do not make any of the apps, but I am thinking of integrating 'do stuff with this other client' tech into the client itself, so you'll be able to browse a rich central client with a dumb thin local client. I can't promise a timeframe. For me, it'll always be long. I'm expecting my 'big' jobs for the next 12-18 months to be a mix of server improvements, smart file relationships, and probably a downloader object overhaul. I'll keep working on Client API improvements in that time in my small work, and I know the App guys are still working, so I just expect the current betas to get better and better over time, a bit like Hydrus, with no real official launch. Checking in again on the links in the Client API help page in 4-6 months is probably a good strategy.
>>17721 Sure, just point me to some example files (or send me some) and I'll see if it is easy to recognise them.

>>17722 Yes, I want to write some special rules that you can customise for pixel dupes. Some users always want the bigger file, some the smaller, so I'm planning to make the current weights you see in options->duplicates a bit richer, and probably add some '- unless they are pixel dupes, in which case use [ 123 ] [ ] do not care if pixel dupes' side options.

>>17723 Can you paste any of the errors, so I can see more information? They should be in the 'note' column of the search/file log on the downloader page, and you can copy them with the right-click menu. I don't know much about those sites, but if they have complicated redirects or login requirements, or Cloudflare rules, maybe to stop spiders, the situation may be more tricky than the simple downloader can handle. If it is a login situation (i.e. lots of cloudflare problems or 403/401 errors), then maybe Hydrus Companion's ability to copy your browser's login cookies to hydrus via the Client API may help: https://gitgud.io/prkc/hydrus-companion
>>17724
>If there is no URL data, (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK?
It depends on what 'OK' means, I think. If you want to remove the hash record, sure, you can delete it if you like, but you might give yourself an error in two years when some maintenance routine scans all your stuff for integrity or something. Renaming the hash to a random value would be better. Unfortunately, I just don't have a scanning routine in place yet to categorise every possible reference to every hash_id in your database, to automatically determine when it is ok to remove a hash, and then to extend that to enable a complete 'ok, now delete every possible connection so we can wipe the hash' command.

Telling hydrus to remove a deletion record only refers to the particular file domain the file was deleted from. It might still be present in other places, and other services, like the PTR, may still have tags for it. It basically goes to the place in the database where it says 'this file was deleted from my files ten days ago' and removes that row.

If you really really need this record removed, please don't rebuild your whole client. Make a backup (which means making a copy of your database), then copy/paste my routine into the sqlite terminal exactly, then try booting the client. If all your files are fucked, revert to the backup, but if everything seems good, then it all went correctly. Having a backup means you can try something weird and not worry so much about it going wrong. More info here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
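For the curious, the 'rename the hash to a random value' idea might look something like the below. This is a hedged sketch, not the dev's actual routine--the 'hashes' table layout is an assumption about client.master.db--so only ever run something like it on a copy:

import os
import sqlite3

con = sqlite3.connect('client.master.db')

# the real 32-byte sha256 you want to retire, as hex (placeholder value here)
bad_hash = bytes.fromhex('00' * 32)

# swap it for a random but valid-length value, so integrity scans stay happy
con.execute('UPDATE hashes SET hash = ? WHERE hash = ?;', (os.urandom(32), bad_hash))
con.commit()
con.close()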
>>17725 The nuclear way to fix this sort of problem, if it is a miscounting situation, is database->regenerate->tag storage mappings cache (all, deferred...). If the bad tag counts here are on the PTR, this operation could take several hours, unfortunately. If the tags are just your 'my tags' or similar, it should only be a couple of minutes. Once done, you'll have to wait for some period for your siblings and parents to recalculate in idle time.

But even if that fixes it, it does not explain why you got the miscount in the first place. I think my recommendation is to see if you can find a miscounted tag which is on your 'my tags' and not on the PTR in any significant amount. A 'my favourites' kind of tag, if you have one. Then regen the storage cache for that service quickly and see if the count is fixed after a restart. If it is, it is worth putting the time into the PTR too. If it doesn't fix the count, let me know and we can drill more into what is actually wrong here.

>>17726 Damn, thank you, I will look into this.
>>17730 This seems to have fixed it, thank you! However, it's left quite a few unknown tags. I guess those tags were the broken ones, which is what was causing problems with both my counts and my parents/siblings. Is there any way to restore those "unknown tag" namespaced tags, or is it better to just try to replace them one by one?
(739.29 KB output.zip)

>>17728 Here are some samples of WavPack from the web: https://telparia.com/fileFormatSamples/audio/wavPack/ But just in case, I attached a short random laugh compressed with a recent release of the encoder on Linux. The format seems to have the magic number "wvpk", as stated on wikipedia and in the github repo: https://github.com/dbry/WavPack/blob/master/doc/WavPack5FileFormat.pdf
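If anyone wants to check their own files against that magic number, a tiny sketch (the filename is just an example):

def looks_like_wavpack(path):
    # WavPack blocks start with the ascii magic 'wvpk'
    with open(path, 'rb') as f:
        return f.read(4) == b'wvpk'

print(looks_like_wavpack('laugh.wv'))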
Will it be possible at some point to edit hydrus images without needing to import it as a brand new image? It's annoying opening images in an external editor, making the edit, saving the image, importing said image, transferring all the tags back onto it, and then deleting the old version when all I'm doing usually is cropping part of it.
I had an ok week. I didn't have time to get to the big things I wanted, but I cleared a variety of small bug fixes and quality of life improvements. The release should be as normal tomorrow.
>>17726 Happens to me when I choose to delete one or both pictures of the last pair presented. The picture assumed to be deleted stays on screen, and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index out of range" or "DataMissing". I believe cloning the database with the sqlite program clears the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work.
How long until duplicates are shown properly? Also, is sorting by transitive duplicates (as in, files which aren't potential duplicates of each other but have duplicates in common) on the to-do list?
https://www.youtube.com/watch?v=VKuGYKkH3oA

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Linux.-.Executable.tar.gz

I had an ok week. I was unexpectedly short on time, so I couldn't get everything I wanted done, but I cleared out some small work.

highlights

The big update last week, which I recommended only for advanced users, went well. There don't seem to be any obvious problems with the logic of the new search cache, so I now recommend it for everyone. You will be presented with a popup just before the update runs, giving you an estimate of how long it thinks it will take. Most users should take 5-10 minutes, but if you have millions of files, it will be longer. Just let it run, and some things will run a bit faster and neater in the background.

If you have played with 'multiple local file services', then check out the new 'all my files' domain you will see--this is basically an efficient umbrella of all your local file services. It works super fast for things like the duplicates system.

I also put some time into the duplicate filter this week. The logic of the queue is improved again, so some rare errors when reaching the end of a batch should be fixed. I also integrated manual file deletes into the queue processing: now, when you manually delete a file, or both files of a pair, the deletes will not happen until you commit--just like the other decisions you are making--and they are undoable if you select 'forget' or go back a pair. You also won't see a file you manually deleted again in a batch (it'll auto-skip if that file comes up again).

Also, the duplicate filter now has a little 'send pair to page' button, which publishes the current pair to the duplicates page that made the filter, just in case you want to save them for some extra processing after you are done filtering. You can do this with multiple pairs and they'll just stack up in the page.

A couple other neat things happened in last week's advanced-user-only release, which I will repeat here:

The 'media viewers' shortcut set has three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max'.

When you enter pairs in the tag sibling dialog, it shouldn't complain about loops anymore, but instead automatically break them, just like how it will auto-petition an A->B, A->C conflict.

full list

- misc:
- updated the duplicate filter 'show next pair' logic again, mostly simplification and merging of decision making. it _should_ be even more resistant to weird problems at the end of batches, particularly if you have deleted files manually
- a new button on the duplicate filter right hover window now appends the current pair to the parent duplicate media page (for if you want to do more processing to them later)
- if you manually delete a file in the duplicate filter, if that file appears again in the current batch of pairs, those will be auto-skipped
- if you manually delete a file in the duplicate filter, the actual delete is now deferred to when you commit the batch! it also undoes if you go back!
- fixed a bug when editing the external program launch paths in the options
- fixed an annoying delay-and-error-popup when clearing the separator field when editing a String Splitter. now the field just turns red and vetoes an OK with a nicer error text
- also improved how string splitters report actual split errors
- if you are in advanced mode, the _review services_ panels now have an 'id' button that lets you fetch the database service id
- wrote a new database maintenance routine under _database->check and repair->resync tag mappings cache files_, which is a lightweight way of fixing ghost files or situations where files with a tag are neither counted nor appear in file results. this fixes these problems in a couple minutes, so for this it is much better than a full regen of the cache
- .
- cleanup and other boring stuff:
- the archive/delete filter now says which file domain it will be deleting from
- if an archive/delete filter is launched on a 'multiple locations' file domain, it is now careful to only make delete records for the deleted files for the file services each one is actually in
- renamed the 'default local file search location' option to 'fallback' and updated its tooltip a bit. this was really a hacky thing I needed to fill some gaps while rewriting from 'my files' to multiple local file services. the whole thing needs more attention to become more useful. I also fixed an issue where it could become invalid 'nothing' if you deleted a file service it was referring to (issue #1155)
- I think I fixed a rare 'did not find info for that file' style problem when highlighting some watchers/downloaders
- I think I have silenced some unhelpful BeautifulSoup (html parser) warnings that were spamming to the log in some situations
- updated last week's big update to work with TRUNCATE journalling mode. I will be doing this for other big updates going forwards, since multi-GB WAL transactions cause problems for some users
- last week's update also gives a time estimate in its pre-popup, based on 60k files per minute
- removed some old database cache data that wasn't cleared in a previous update
- a variety of misc UI text fixes and cleanup

next week

I regret I did not have time for a larger import/export framework. It will have to wait. I have one more week of work before my vacation week, so I will try to just do some small cleanup and polishing so the release is 'clean' before my break.
>>17728 nice, hopefully the rules come soonish, it would make going through them a bit easier. definitely want to check out some things in 487, as they are things I made workarounds for, like pushing the images to a page. I currently have a rating that does something similar for when I want to check a file a bit closer, be it a comic page I want to reverse search or something I want to see where it came from--this may be a better option.
>switch to arch linux from windows
>get hydrus running
>use retarded samba share on nas for the media folder
>permission error from the subscription downloader
>can view and search my images fine otherwise, in both hydrus and file manager
Any idea which permissions would be best to change? I'm retarded when it comes to fstab and perms, but I know not to just run everything as root. I just can't figure out if it's something like the executable's permissions/owner, the files' permissions/owner, or something retarded in how I mount it. Pictured are the error, the fstab entry, the hydrus client's permissions, and the permissions for everything in the samba share. The credentials variable in fstab is a file that only root can read, for slight obfuscation of credentials, according to the internet. The rest to the right is stuff I added to allow myself to manipulate files in the samba share, again just pulled from random support threads.
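For reference, a typical cifs fstab entry for this kind of setup looks something like the below--the share path, mountpoint, and ids are placeholders, not taken from the post. The uid/gid and mode options map the mounted files to your user, which is the usual fix when a program can read the share but gets permission errors writing files or setting their times:

# /etc/fstab -- example cifs entry; share, mountpoint, and ids are placeholders
//nas/hydrus_files  /mnt/hydrus_files  cifs  credentials=/etc/samba/creds,uid=1000,gid=1000,file_mode=0644,dir_mode=0755,iocharset=utf8  0  0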
>>17735 >Happens to me when I choose to delete one or both pictures of the last pair presented. The assumed to be deleted picture stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index our of range" or "DataMissing". I believe cloning the database with the sqlite program deletes the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work. Appears fixed for me with v487 - Thanks.
Perhaps another bug:
>file>options>files and trash>Remove files from view when they are sent to trash.
Checking/unchecking has the desired result with watchers and regular files, but does not seem to work anymore with newly downloaded files published to their respective pages. Here, the files are merely marked with the trash icon but not removed from view, as had been the case (for me) until version 484.
>>17739 It seems like I can manipulate files within the samba drive but it spits out an error when moving from the OS drive to there. So I guess it's some kind of samba caching problem.
I have noticed some odd non-responsiveness with the program. It is hosted on an SSD. While browsing through files in the full-screen viewer to archive or delete, sometimes the program will stop responding for approximately 10 seconds when moving to the next file (usually a GIF, but not always). The next file isn't large or long or anything. I'm not sure what's causing this issue. Is it just the program generating a new list of thumbnails?
>>17743 I also wanted to note this issue is not unique to this most recent update. It has been there for a while.
>>17743 >>17744 I guess I should also reiterate that the program AND the database are both hosted on the same drive (default db location)
well this is a first, the png in a pixel-for-pixel match against a jpeg was the smaller one... i'm guessing that jpeg is hiding something.
>>17731 Ah shit, if you have 'unknown tag:abcdef...' garbage, this is strong evidence that you have actually had database damage (to client.master.db), most likely through a hard drive blip. This probably also explains why your searches were jank--your 'client.caches.db' was probably damaged as well. I don't think there is a way to figure out which original tags those 'unknown tag:blah' actually referred to, at least no simple easy one. Basically when the client tried to rebuild your cache, it found gaps in the definition table and filled them with random but valid data. Your next step is to read the 'help my db is broke.txt' document in install_dir/db directory. This has background reading about the nature of hard drive problems and things you should do to check your drive is ok and your database files are ok. If you have a recent backup, hold on to it! If you have a backup, we may be able to recover your bad tags. But before then, make sure everything is safe now and there aren't more problems. Let me know how you get on! >>17732 Thank you! I'll see what I can do. >>17733 I hope that as the duplicate system gets more tech, this will be more possible. Hydrus works on exact file content, so it will never natively support editing, but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech, including for other conversions like jpegxl, waifu2x, or video re-encoding. For now, though, hydrus is really for 'finished' files.
>>17735 >>17740 Great, thanks for letting me know.

>>17736 I expect to do a big push on duplicates in Q4 this year or Q1 2023. I really want to have better presentation, basically an analogue to how danbooru shows 'hey, this file has a couple of related files here (quicklink) (quicklink)'. Estimating timeframes is always a nightmare, so I'll not do it, but I would like this, and duplicates are a popular feature for the next 'big job'.

At the moment, there is a decent amount of transitive logic in the duplicates system. If A-dup-B, and B-dup-C, then A-dup-C is assumed. Basically, duplicates in the hydrus database are really a single blob of n files with a single 'best' king, so when you say 'this is better', you are actually merging two blobs and choosing the new king. I have some charts at the bottom of this document if you want to dive into the logic some more: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced

But to really get a human feel for this, I agree, we need more UI to show duplicate relationships. It is still super technical, opaque, and not fun to use.

>>17739 >>17742 I'm afraid I am no expert on this stuff. The 'utime' bit in that first traceback is hydrus trying to copy the original file's modified time from a file in your temp directory to the freshly imported file in the hydrus file system, so if the samba share has special requirements for that sort of metadata modification, that's where I would look first.
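To make the 'blob with a king' model a couple of paragraphs up concrete, here is a toy sketch--nothing like hydrus's actual schema, just the shape of the logic:

class DuplicateGroup:
    def __init__(self, king):
        self.king = king
        self.members = {king}

def merge_as_better(better, worse):
    # 'this is better' merges the two blobs; the winning side's king rules the result
    better.members |= worse.members
    return better

# A-dup-B and B-dup-C collapse into one group, so A-dup-C comes for free
group = merge_as_better(DuplicateGroup('A'), DuplicateGroup('B'))
group = merge_as_better(group, DuplicateGroup('C'))
print(group.king, group.members)  # A {'A', 'B', 'C'}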
>>17741 Thank you, I will check this! The specific rules that govern when it is and isn't correct to apply this option are frustratingly complicated, and adding multiple local file services made it more so. I'll have a play and see what I can figure out.

>>17743 >>17744 >>17745 Thank you for this report. Sometimes this is my fault, sometimes it is something else. Since your files are on a fast SSD, we can rule out some weirder things like NAS directory scan times, but I do know that Windows anti-virus got a lot more aggressive in the past couple of years, and pretty much any file you access gets a scan before it is loaded. This can cause a ~50-150ms delay on some video files in hydrus that are not pre-cached yet. Maybe, if anti-virus was working hard, and the search indexer was also going bananas as it sometimes does, and your client was working hard doing imports and things, all the locks would add up and it would halt. 10 seconds sounds like a bigger problem, though.

Can you try turning off the 'normal time' sync under tags->sibling/parent sync->sync during normal time? Does that free you up a bit? You can check the 'review' panel on that same sub-menu to see if your client has a lot of catch-up work to do there. But that's probably only applicable if you sync with the PTR.

Do you have a lot of imports, btw? Like, do you have 25+ active file import queues, be they downloaders or hard drive imports or whatever, running at once? It could just be that the file system is overwhelmed with new writes and can't serve you the read request for the gif.

Otherwise, please check help->debug->profiling->profile mode. There's a 'what's this?' on the same menu to show you how to use it. Run that for a bit and see if you can capture a freeze, then pastebin or email me the profile log and I'll see if anything helpful was recorded.

>>17746 Wow, yeah, that's the first time I have seen that too. I assume this image is just an anime babe or something and nothing like a crazy geometric pattern? If you are ok sharing, I'd be interested to either have that jpeg or get a link to a booru it is on, just so I can check it out myself. No worries if you don't want to share. There are probably some EXIF browsing programs out there that might be able to expose it. Another trick is just to export, rename to .zip, and see if 7zip will open it. Some hidden archives are literally just appended to the end of the image file data.
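As a quick illustration of that last trick, a hedged sketch that scans for a ZIP signature after the JPEG end-of-image marker (the filename is hypothetical):

def find_appended_zip(path):
    with open(path, 'rb') as f:
        data = f.read()
    eoi = data.rfind(b'\xff\xd9')  # JPEG end-of-image marker
    if eoi == -1:
        return None
    # ZIP local file header magic, anywhere after the image data proper
    offset = data.find(b'PK\x03\x04', eoi)
    return None if offset == -1 else offset

print(find_appended_zip('suspicious.jpg'))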
>>17745 >>17749 There is no downloading or synching being done. Client is basically running stock, with no tags or anything (not even allowed to access the internet yet). Think it might be AV? Running Kaspersky on Low (uses very little resources for automated scanning).
>>17750 >>17749 Also, no active running imports. Just an open import window with about 60k files for me to sift through.
>>17751 >>17749 I tried it with an exclusion for the entire Hydrus folder for automated scanning, but the problem persists, so I don't think it's AV related.
Would it be possible to add a sort of sanity check to modified times, to prevent obviously wrong ones from being displayed? I've noticed a few files downloaded from certain sites since modified times were added to Hydrus show a modified time of over 52 years ago, which makes me think that files from sites which don't supply a time are given a 0-epoch-second timestamp. In this case I think it would be better to show a string like "Unknown modification time", or none at all.
>>17753 Also, if I try to download the same file from a site that does have modified times, the URL of the new site is added, but the modified time stays at the incorrect 52 years. Maybe there could be an option to replace modified times: for this query / always if a new one is found / only if none is already known (or it is set to 1970). I also couldn't find a way to manually change the modified time, but maybe I didn't look hard enough.
I've gotten my instance of Hydrus into a state where the "parent/sibling sync" process is stuck. I have several parent/child pairs that were working fine, and running on ~v450, but recently I added a few more, and after applying realized some were the wrong way around parent/child-wise. I went back in and edited the parent tag configs to delete the bad ones and re-add them with the tags the right way around. But it seems my instance has stopped processing the tag updates.

tags > parent/sibling sync > review parent/sibling maintenance showed it was aware there was more work to do, but stayed stuck at the same percent done for over 12 hours, even when I clicked the "work hard now!" button and had it set to sync "all the time" (not just during idle time). I used database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to go back to zero percent done, but it has now not progressed past zero percent done for over 24 hours. I'm not sure the "maintenance" is even doing anything, as the Hydrus client process in task manager isn't using much CPU/RAM/disk at all. I upgraded to v487, but no change in symptoms.

This instance has 85 parent configs set, 5,000 files in it, has no subscriptions/services/downloaders, and is only using local tags, running on Windows 10. The client log seems to have no errors related to a parent/child sync issue, but one error does pop up on each startup:

Traceback (most recent call last):
File "hydrus\core\HydrusThreading.py", line 401, in run
callable( *args, **kwargs )
File "hydrus\client\metadata\ClientTagsHandling.py", line 514, in MainLoop
self._controller.WaitUntilViewFree()
File "hydrus\client\ClientController.py", line 2279, in WaitUntilViewFree
self.WaitUntilThumbnailsFree()
File "hydrus\client\ClientController.py", line 2284, in WaitUntilThumbnailsFree
self._caches[ 'thumbnail' ].WaitUntilFree()
KeyError: 'thumbnail'

File "threading.py", line 890, in _bootstrap
File "threading.py", line 932, in _bootstrap_inner
File "hydrus\core\HydrusThreading.py", line 416, in run
HydrusData.ShowException( e )
File "hydrus\core\HydrusData.py", line 1215, in PrintException
PrintExceptionTuple( etype, value, tb, do_wait = do_wait )
File "hydrus\core\HydrusData.py", line 1243, in PrintExceptionTuple
stack_list = traceback.format_stack()
>>17749 I would send it to ya, but I dumped the trash before I saw your response. So far I have seen a few of these; if I find another, I'll send it to ya.
>>17755 Update on this issue: I tried exporting all my parent tags, then deleting all the parent tag configurations and using the database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to indicate there's no work to do. I then added back in one parent tag from my original set (that only applied to 5 files in the repository) and the "maintenance" window says there's now one parent to sync, but isn't actually processing that one parent.
>>17750 >>17751 >>17752 Hmm, if you have a pretty barebones client, no tags and no clever options, then I am less confident about what might be doing this. I've seen some weird SSD driver situations cause superlag. I recommend you run the profile so we can learn more.

>>17753 >>17754 Thanks, can you point me to some example URLs for these? I do have a sanity check that is supposed to catch 1970-01-01, but it sounds like it is failing here. The good news is I store a separate modified time for every site you download from, so correcting this retroactively should be doable and not too destructive. I want to add more UI to show the different stored modified times and let you edit them individually in future. At the moment you just get an aggregated min( all_modified_times ) value.
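A sketch of the sort of sanity check being discussed--assumed logic, not hydrus's actual code: parse the header, and treat anything at or near the epoch as 'no usable time':

from email.utils import parsedate_to_datetime

def parse_modified_time(header_value, minimum=86400):
    # e.g. 'Thu, 01 Jan 1970 00:00:01 GMT' parses fine but is obviously bogus
    try:
        timestamp = parsedate_to_datetime(header_value).timestamp()
    except (TypeError, ValueError):
        return None
    return timestamp if timestamp > minimum else None

print(parse_modified_time('Thu, 01 Jan 1970 00:00:01 GMT'))  # None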
>>17755 >>17757 Damn, this is not good. I'm sorry for the trouble and annoyance. Have you seen very slow boots during this? That thumbnail cache is instantiated during an early stage of boot, so it looks like the sibling/parent sync manager is going bananas as soon as it starts. I have fixed the bug, I think, for tomorrow's release. That may help your other issue, which is the refusal to finish outstanding work, but we'll see. Give tomorrow's release a go, and if it gets to a '95% done' mode again and won't do the last work, please try database->regenerate->tag parents lookup cache. While the 'storage mappings cache' reset will cause the siblings and parents to sync again, the 'lookup' regen actually does the mass structure that holds all the current relationships. It sounds like I have a logical bug there when you switch certain parents around. You don't have to say the exact tags if you don't want, but can you describe the exact structure of the revisions you made here? Was it simply flipped parent-child relationships, so you had 'evangelion->ayanami rei', and it should have been 'ayanami rei->evangelion'? Were there any siblings involved with the tags, and did the parent tags that were edited have any other parent relationships? I'm wondering if there is some weird cousin loop I am not detecting here, or perhaps detecting but not recognising as creating outstanding sync work. Whatever the case, let me know how you get on with this!
I had a good week. I did some simple work to make a clean release before my vacation. The release should be as normal tomorrow.
>>17759 Yes, I did have a few very slow startups: a few times it took like two hours for the UI to show, though I could see the process was indeed started in task manager. Thanks; I'll try tomorrow's release and see if that helps anything. Parent-tag-wise, the process I think I was doing right before it failed was I had a bunch of things tagged with something generic, which had one level of namespacing (e.g. "location:outdoor"), and I decided to make a few more-specific tags (e.g. "location:forest", "location:driving", and "location:beach"; all of which should also get "location:outdoor" as a "parent"). But I first created the parent relationship the wrong way and didn't notice it (so everything that was "outdoor" would now get three additional tags added to it). I saved the parent config and started manually re-tagging (e.g. remove "outdoor" and add "beach" for those that were in that subgroup), and after doing a few I noticed the F3 tagging window wasn't showing the "parent" tag yet (wasn't showing "outdoor" nested under "beach"), and so I went back to the tag manager and realized they were wrong, so deleted the relationship and re-added them the right way and continued re-tagging. After a while I noticed it still hadn't synced, and realized it didn't seem to be progressing any more, and started triaging to see if it was a bug. None of them had siblings defined.
>>17758 >Thanks, can you point me to some example URLs for these? It looks like this is only affecting permanent booru. I'm using pic related posted in one of these threads. Here's a SFW example URL: http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion/post/3742726/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa It may be of note that the "direct file URL" is from IPFS, and the following onion gateway URL is added to the file's URLs as well: http://xbzszf4a4z46wjac7pgbheizjgvwaf3aydtjxg7vsn3onhlot6sppfad.onion/ipfs/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa The same file is available here with a correct modification time (2022-02-27): https://e621.net/posts/3197238 The modified time in the client shows 52 years 5 months, which is in January 1970. Not sure if there's an easy way to see the exact time.
>>17747 >but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech Couldn't you just make a temporary "import these files and use _ as _ to find alternates, then do _ if _" for now? Like "import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller"? I mean it sounds like too much when you write it out like that, but the underlying logic should be pretty simple.
https://www.youtube.com/watch?v=AQOfIENN2tk

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Linux.-.Executable.tar.gz

I had a good simple week making a clean release before my vacation. Everything is misc this week, nothing earth-shattering, just a bunch of cleanup and little stuff. If you have any wavpack files, try importing them!

full list

- the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too!
- added a new file maintenance action, 'if file is missing, note it in log', which records the metadata about missing files to the database directory but makes no other action
- the 'file is missing/incorrect' file maintenance jobs now also export the files' tags to the database directory, to further help identify them
- simplified the logic behind the 'remove files if they are trashed' option. it should fire off more reliably now, even if you have a weird multiple-domain location for the current page, and still not fire if you are actually looking at the trash
- if you paste an URL into the normal 'urls' downloader page, and it already has that URL and the URL has status 'failed', that existing URL will now be tried again. let's see how this works IRL, maybe it needs an option, maybe this feels natural when it comes up
- the default bandwidth rules are boosted. the client is more efficient these days and doesn't need so many forced breaks on big import lists, and the internet has generally moved on. thanks to the users who helped talk out what the new limits should aim at. if you are an existing user, you can change your current defaults under _network->data->review bandwidth usage and edit rules_--there's even a button to revert your defaults 'back' to these new rules
- now like all its neighbours, the cog icon on the duplicate right-side hover no longer annoyingly steals keyboard focus on a click.
- did some code and logic cleanup around 'delete files', particularly to improve repository update deletes now we have multiple local file services, and in planning for future maintenance in this area
- all the 'yes yes no' dialogs--the ones with multiple yes options--are moved to the newer panel system and will render their size and layout a bit more uniformly
- may have fixed an issue with a very slow to boot client trying to politely wait on the thumbnail cache before it instantiates
- misc UI text rewording and layout flag fixes
- fixed some jank formatting on database migration help

next week

I am now off for a week. I think I need it! I'm going to play a ton of vidya, shitpost the big streams that are happening, fit some Wagner in, and get on top of outstanding IRL stuff. I'll be back to catch up my messages on Saturday the 18th. Thanks everyone!
Trying to use Hydrus for the first time; is there a way to add a subscription for videos specifically, so that it leaves out photos?
(480.53 KB 640x360 shitposting.gif)

>>17764 Have a nice vacation OP and watch out for fucking normies.
id:6549088 from gelbooru (nsfw), with the download decompression bomb check deactivated. When downloading this specific picture, before it finishes downloading, it makes the program jump to 3 GB of RAM until I close it. It opens normally in a browser, but spikes to 3 GB in hydrus, and since I only have 4 GB, it makes the PC freeze. Just wanted to report that. Also, not a native English speaker here.
>>17767 forgot, using version 474
>>17761 Reporting in that v488 seems to have fixed both these bugs. There's no longer the thumbnail exception being logged, the startup time to get to a UI window is quicker, and the parent-sync status un-stuck itself. Hooray!
>>17747 This is about what I figured. I pulled the database from a dying hard drive a few months ago. Every integrity scan between then and now ran clean, but I had a suspicion something had gotten fucked up somewhere along the line. Since it's been a minute, any backups are either also corrupted or too old to be useful. Luckily, re-constructing them hasn't been too painful. I made an "unknown tag:*anything*" search page, then right-click->search individual tags to see what's in them. Most have enough files in them to give context to what the tag used to be, so I'll just replace it. It's been a good excuse to go through old files, clean up inconsistent tags, set new and better parent/sibling relationships, etc, so it's actually been quite pleasing to my autisms. I had 80k files in with an unknown tag back when I started cleaning up, and now I'm down to just under 40k. I'm sure I've lost some artist/title tags from images with deleted sources, or old filenames, but all in all, it could be much worse.
Thanks man! Have a good vacation!
>>17765 if you're just subscribing to a booru, they will generally have a "video" tag. you can add "video" to the tag search.
>>17772 nope, not a booru. So there isn't a way to filter that. awh.
Is there any way to get Hydrus to automatically tag images with the tags present in the metadata? Specifically the tags metadata field; my whole collection was downloaded using Grabber.
>>17773 What website is it? You might be able to add to/alter the parser to spit out the file type by reading the json or file ending, then use a whitelist to only get certain file endings (i.e. videos)
I've been using hydrus for a while now and am in the process of importing all my files. Is there any downside to checking the "add filename? [namespace]" button while importing? I think I've got over 300k images, so it would create a lot of unique tags, if that would be a problem.
About how long do you estimate it might take before hydrus will be able to support any file type? I specifically need plaintext files and html files (odd, I know), if that makes a difference. The main thing is just that it'd be nice for me to have all my files together in hydrus, instead of needing to keep my html and (especially) my text files separate from the pics and vids. Also, I'm curious: why can't hydrus simply "support" all filetypes by just having an "open externally" button for files that it doesn't have a viewer for? It already does that for things like flash files, after all.
>>17739 >>17742 >>17748 It seems to be working now. Not sure what changed, but somehow arch doesn't always mount the samba directory anymore and needs a manual command on boot now, which it didn't before. Maybe it was some hiccup, maybe some package I happened to install as I installed more crap, maybe it was a samba bug that got updated.
Is there a way to reset the file history graph, under Help?
>>17761 >>17769 Great, thanks for letting me know!

>>17762 Thank you. The modified date for that direct file was this:

Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT

I thought my 'this is a stupid date m8' check would catch this, but obviously not, so I will check it! Sorry for the trouble. I'll have ways to inspect and fix these numbers better in future.

>>17763 I'm sorry to say I don't understand this:
>"import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller"
But if you mean broadly that you want some better metadata algebra for mass actions, I do hope to have more of this in future. In terms of copying metadata from one thing to another, I just need to clean up, unify, and update the code. It is all a hellish mess from my original write of the duplicates system years ago, and it needs work to both function better and be easier to use.
>>17765 >>17772 >>17773 >>17776 In the nearish future, I will add a filetype filter to 'file import options', just like Import Folders have, so you'll be able to do this. Sorry for the trouble here, this will be better in a bit!

>>17767 >>17768 I'm sorry, are you sure you have the right id there? The gif of the frog girl from boku no hero academia? I don't have any trouble importing or viewing this file, and by the looks of it, it doesn't seem too bloated, although it is a 30MB gif, so I think your memory spike was something else that happened at the same time as (and probably blocked) the import. Normally, decompression bombs are png files, stuff like 12,000x18,000 patreon rewards and similar.

I have had several reports of users with gigantic memory spikes recently, particularly related to looking at images in the media viewer. I am investigating this. Can you try importing/opening that file again in your client and let me know if the memory spike is repeatable? If not, please let me know if you still get memory spikes at other times, and more broadly, if future updates help the situation.

Actually, now I think of it, if you were on 474, I may have fixed your gigantic memory issue in a recent update. I did some work on more cleanly flushing some database journal data, which was causing memory bloat a bit like you saw here, so please update and then let me know if you still get the problem.

>>17770 Good luck!
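For anyone wondering what a decompression bomb check looks like in practice, a rough sketch--Pillow's own Image.MAX_IMAGE_PIXELS guard works along the same lines, and the threshold and filename here are just examples:

from PIL import Image

MAX_PIXELS = 12000 * 18000  # in the ballpark of the 'patreon reward' example above

def is_decompression_bomb(path):
    # Image.open is lazy: it reads the header for dimensions without decoding pixels
    with Image.open(path) as im:
        width, height = im.size
    return width * height > MAX_PIXELS

print(is_decompression_bomb('huge.png'))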
>>17774 Not yet. I don't inspect EXIF much yet, but I expect some sort of retroactive parser in future. Or I wouldn't be surprised if a user figures out a Client API tool to do this. Unless you mean NTFS tags, in which case I am even less expert. I know there are some tools that can convert NTFS tags into xml files, and I know a user once did that and then munged those files into .txt files for tag import, but I've never done any of that stuff myself.

>>17777 If you do this, make a new tag service for your filename tags under services->manage services->add->local tag service. Call it 'filenames' or something. The downside is these tags are messy. 300k tags won't add much lag, maybe a 0.5-2% slower file load kind of thing. But they will get in the way, and most users find they don't actually want them all that often. Putting them in another service puts them in a little box on their own, where it is easier to hide, compartmentalise, and potentially delete them in future without affecting your 'real' search tags.

>>17778 Not sure. It is number 6 on the 'big stuff' list here: https://github.com/hydrusnetwork/hydrus/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc 'Support more filetypes / arbitrary file import', so I see it happening, and very likely within the next three years. I also want to store my .txt and .html files. I have thousands of fanfics from a sordid past life of mine that I want to categorise.

The problem I need to overcome is that hydrus is currently predicated on the ability to infer a filetype based on file content alone. The reasons for this are some bullshit technical stuff mostly related to maintenance and weird downloads, but it is currently needed. If you toss a file called 'file.file' at hydrus, it needs to be able to figure out if it is a jpeg or mp4 just by looking at its insides. Most media files have rigid formats, literally a few bytes at like 'offset 8 bytes, WEBP', that make it easy to recognise them very quickly. Text and HTML have very dynamic content, so figuring out what they are is more tricky.

Before I allow all files, I may be able to straight up support text and html, but there will still be problems. HTML is doable, since you basically run it through a parser and see if it raises an error, but then you have to determine if it was HTML or XML. I expect to start work on this soon, since some formats (SVG and some other open-source image-editing formats) are just XML, so I'll start recognising the broad category of XML and then try recognising keywords or these little XML 'this is what I am' tags (I think they are called DTD or something), and we may just fall into HTML support by happy accident.

Raw .txt is much more difficult. A one-byte text file with 'a' in it is a text file as much as a book in japanese unicode is. I probably can't recognise that versus any other arbitrary format, although I can probably get a high confidence guess.

Supporting arbitrary files will require some import and maintenance rejigging. I'll have to accept that I can no longer always figure out the mime of a file, and I'll have to pass the mime along from the original file extension or whatever. ALSO there are secondary issues like: at the moment, if the hydrus downloader runs into an HTML file when it expected a jpeg (e.g. some kind of fucked up 404 message that gave 200 instead of 404, which happens sometimes), I raise an 'ignored' error and say 'I think this downloader needs to be taught how to parse this document'. But when we can support HTML, what do I do then?
Do I import the HTML error page as a file? I'll have to do something to the import workflow in general to say when text/html is ok and not ok. I'm leaning towards allowing text files on hard drive import, and then disallowing it on downloader import unless the URL Class specifically specifies it, but how the hell I make that user-friendly I'm not sure yet. Anyway, sorry, I went on a bit there, but that's the basic background. It will come, but it will be a big job, so I need to clear out some other things first. I'm basically done with multiple local file services now, so I'm moving on to some server updates and janny workflow improvements for my next big job. We'll see if that takes me all the rest of this year, but I hope I can clear it out faster, and then move on to the next thing.
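For the curious, here is the magic-bytes idea from above as a toy Python sniffer. It only covers a few formats and is not my actual code, but it shows why rigid formats are easy and raw text is not:

def sniff_mime(path):
    # a toy content-based sniffer; the real thing checks many more formats
    with open(path, 'rb') as f:
        header = f.read(16)
    if header.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'image/png'
    if header.startswith(b'\xff\xd8\xff'):
        return 'image/jpeg'
    if header[:4] == b'RIFF' and header[8:12] == b'WEBP':
        return 'image/webp'
    if header[:6] in (b'GIF87a', b'GIF89a'):
        return 'image/gif'
    return None  # txt and html have no fixed signature, which is the whole problem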
>>17779 Great, let me know how things go in future! >>17780 What part would you like to 'reset'? All the data it presents is built on real-world stuff in your client, like actual import and archive times. Do you want to change your import times, or maybe clear out your deleted file record?
I had a good week. I did a mix of cleanup and improvements to UI and an important bug fix for users who have had trouble syncing to the PTR. The release should be as normal tomorrow.
When trying to do a file relationship search, is there a way to search for same-quality duplicates? I don't see any way to do that, and every time I look at the relationships of a file manually, it's always a better/worse pair. Does Hydrus just randomly assign one of the files as being better when you say that they're the same quality?
https://www.youtube.com/watch?v=6rboksqjPy4 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Linux.-.Executable.tar.gz I had a good week getting back into the swing of things. I fixed some important bugs and improved some UI. highlights All the downloader pages--gallery, watcher, urls, and simple--have a revamped status system. All the text that shows how file or gallery downloads are going is now generated in a better way, with more error states (e.g. it will tell you when your gallery stopped because it hit the file limit, or when one of the emergency pause states under the network menu has kicked in), and logic in edge cases is improved. Everything is unified now, so the texts are the same across all pages. Also, if a gallery query or watched thread is 'pending', its text now reports that it is waiting for a work slot, rather than staying blank. There _shouldn't_ be any situations now where a downloader is unpaused with work to do but has blank status. If you use the multiple local file services system, the archive/delete filter now presents more options when you are done. If the files are in more than one local file service, you can choose where you delete them from, including all applicable. This was confusing and opaque before, so I hope this makes it clearer what is happening and gives you more choice. I _believe_ I have fixed an important bug some users were having with PTR processing. There was an annoying issue about a 'definitions' file being seen as a 'content' file, or vice versa, that the automatic maintenance could not fix. I finally managed to reproduce the issue and fixed it. I have scheduled a fix in the update this week, so if you have been hit by this, please wait for one more round of file maintenance 'metadata' scans, and then unpause the PTR one more time. Essentially, I think I fixed the automatic maintenance. Let me know how you get on! full list - downloader pages: - greatly improved the status reporting for downloader pages. the way the little text updates on your file and gallery progress are generated and presented is overhauled, and texts are unified across the different downloader pages. you now get specific texts on all possible reasons the queue cannot currently process, such as the emergency pause states under the _network_ menu or specific info like hitting the file limit, and all the code involved here is much cleaner - the 'working/pending' status, when you have a whole bunch of galleries or watchers wanting to run at the same time, is now calculated more reliably, and the UI will report 'waiting for a work slot' on pending jobs. no more blank pending! - when you pause mid-job, the 'pausing - status' text is generated a little more neatly too - with luck, we'll also have fewer examples of 64KB of 503 error html spamming the UI - any critical unhandled errors during importing proper now stop that queue until a client restart and make an appropriate status text and popup (in some situations, they previously could spam every thirty seconds) - the simple downloader and urls downloader now support the 'delay work until later' error system.
actual UI for status reporting on these downloaders remains limited, however - a bunch of misc downloader page cleanup - . - archive/delete: - the final 'commit/forget/back' confirmation dialog on the archive/delete filter now lists all the possible local file domains you could delete from with separate file counts and 'commit' buttons, including 'all my files' if there are multiple, defaulting to the parent page's location at the top of the list. this lets you do a 'yes, purge all these from everywhere' delete or a 'no, just from here' delete as needed and generally makes what is going on more visible - fixed archive/delete commit for users with the 'archived file delete lock' turned on - . - misc: - fixed a bug in the parsing sanity check that makes sure bad 'last modified' timestamps are not added. some ~1970-01-01 results were slipping through. on update, all modified dates within a week of this epoch will be retroactively removed - the 'connection' panel in the options now lets you configure how many times a network request can retry connections and requests. the logic behind these values is improved, too--network jobs now count connection and request errors separately - optimised the master tag update routine when you petition tags - the Client API help for /add_tags/add_tags now clarifies that deleting a tag that does not exist _will_ make a change--it makes a deletion record - thanks to a user, the 'getting started with files' help has had a pass - I looked into memory bloat some users are seeing after media viewer use, but I couldn't reproduce it locally. I am now making a plan to finally integrate a memory profiler and add some memory debug UI so we can better see what is going on when a couple gigs suddenly appear - . - important repository processing fixes: - I've been trying to chase down a persistent processing bug some users got, where no matter what resyncs or checks they do, a content update seems to be cast as a definition update. fingers crossed, I have finally fixed it this week. it turns out there was a bug near my 'is this a definition or a content update?' check that is used for auto-repair maintenance here (long story short, ffmpeg was false-positive discovering mpegs in json). whatever the case, I have scheduled all users for a repository update file metadata check, so with luck anyone with a bad record will be fixed automatically within a few hours of background work. anyone who encounters this problem in future should be fixed by the automatic repair too. thank you very much to the patient users who sent in reports about this and worked with me to figure this out. please try processing again, and let me know if you still have any issues - I also cleaned some of the maintenance code, and made it more aggressive, so 'do a full metadata resync' is now even more uncompromising - also, the repository updates file service gets a bit of cleanup. it seems some ghost files have snuck in there over time, and today their records are corrected. the bug that let this happen in the first place is also fixed - there remains an issue where some users' clients have tried to hit the PTR with 404ing update file hashes. I am still investigating this
next week I ended up doing more cleanup this week than I expected, but I'm happy to have the downloader pages reporting better. They were a real knot before. I want to spend a little admin time next week, triaging final multiple local file services work and planning future server improvements for when that is done, and then I think I'd like to focus on more small jobs, including some github issues.
>>17786 Yes, 'same quality' actually chooses the current file as the better one, just as if you clicked 'this is better', but with a different set of merge options. The first version of the duplicate system supported multiple true 'these are the same' relationships, but it was incredibly complicated to maintain and didn't lend itself to real world workflows, so in the end I reinvented the system to have a single 'king' that stands atop a blob of duplicates. I have some diagrams here: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced I don't really like 'this is the same' ending up being a soft 'this is better', but I think it is an ok compromise for what we actually want, which is broadly to figure out the best of a group of files. If they are the same quality, then it doesn't ultimately matter much which is promoted to king, since they are the same. I may revisit this topic in a future iteration of duplicates, but I'm not sure what I really want beyond much better relationship visibility, so you can see how files are related to each other and navigate those relationships quickly. Can you say more about why you wanted to see the same-quality duplicates in this situation? Hearing that user story can help me plan workflows in future.
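If it helps, the shape of the current model is basically this. A toy sketch with made-up names, not the real database structure:

class DuplicateGroup:

    def __init__(self, first_hash):
        self.king = first_hash       # the best file of the group
        self.members = {first_hash}  # everything known to be a duplicate

    def add_duplicate(self, new_hash, new_is_better):
        # 'this is better' promotes the new file; 'same quality' leaves the
        # old king on top, which is why it reads as a soft 'this is better'
        self.members.add(new_hash)
        if new_is_better:
            self.king = new_hash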
(426.25 KB 958x538 64b0.png)

(8.03 KB 503x125 ClipboardImage.png)

>>17601 What do I do for this? I'm just trying to have my folder of 9,215 images tagged.
What installer does Hydrus use? I'm trying to set up an easy updating script with Chocolatey (since whoever maintains the winget repo is retarded).
>>17791 Figured it out, the GitHub artifacts show Inno Setup. Too bad Chocolatey's docs are half fucking fake and they don't do shit unless you give them money. This command might work, but choco's --install-arguments command doesn't work like the fuckwads claim it does. choco upgrade hydrus-network --ia='"/DIR=C:\x\Hydrus Network"'
(11.25 KB 644x68 ClipboardImage.png)

>>17792 No, actually, that command doesn't work, because the people behind chocolatey are lying fucking hoebags. Seeing this horseshit, after THEY THEMSELVES purposefully obfuscated this bullshit, is FUCKING INFURIATING.
>>17788 The main thing I wanted to do is compare the number of files that were marked as lower-quality duplicates across files from different url domains with files that aren't lower-quality duplicates (either kings, or alts, or no relationships), to see which domains tend to give me the highest ratio of files that end up being deleted later as bad dupes and which ones give me the lowest, so I know which ones I should be more adamant about downloading from and which ones I should be more hesitant about. This doesn't really work that well if same-quality duplicates can also be considered "bad dupes" by hydrus, because that means I'm getting a bunch of files in the search that shouldn't be there, since they're not actually worse duplicates, but same-quality duplicates that hydrus just treats as worse arbitrarily. Basically, I was trying to create a ranking of sites that tend to give me the highest percentage of low-quality dupes and ones that give me the lowest. I can't do that if the information that hydrus has about file relationships is inaccurate, though. It's also a bit confusing when I manually look at a file's relationships, because I always delete worse duplicates, but then I saw many files that are considered worse duplicates and thought to myself, "did I forget to delete that one?". Now this makes sense, but it still feels wrong to me somehow.
(2.77 KB 306x117 windozeerror.png)

>>17793 >2022 and still using windoze Time to dump the enemy's backdoor.
>>17790 The good catch-all solution here is to hit up services->review services and click 'refresh account' on the repository page. That forces all current errors to clear out and tries to do a basic network resync immediately. Assuming your internet connection and the server are ok again, it'll fix itself and you can upload again. >>17791 >>17792 >>17793 Yeah, Inno. There are some /silent-style commands I know you can give the installer to do it quietly, and in fact that's one reason the installer now defaults to not checking the 'open client' box on the last page, so an automatic installer a guy was making can work in the background. I'm afraid I am no expert in it though (see the sketch at the end of this post). If I can help you here, let me know what I can do. >>17794 Ah, yeah, sorry--there's no real detailed log kept or data structure made of your precise decisions. If you do always delete worse duplicates, though, then I think you can get an analogue for the data you want. Any time you have a duplicate that is still in 'my files', you know it was set as 'same quality', since it wasn't deleted. Any time a duplicate is deleted, you know you set it as 'worse'. If you did something like 'sort by modified time' (maybe with a creator tag to reduce the number of results) plus 'system:file relationships: > 0 dupe relationships', and then you switch between 'my files' and 'all known files' (you need help->advanced mode on to see this), you'll see the local worse files (you set same quality) alongside the non-local worse files (you set worse-and-delete), and you can see the difference. In future, btw, I'd like to have thumbnails know more about their duplicates so we can finally have 'sort files by duplicate status' and group them together a bit better in large file count pages. If you are trying to do this using manual database access in SQLite and want en masse statistical results, let me know. The database structure for this is a pain in the ass, and figuring out how to join it to my files vs all known files would be difficult going in blind.
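On the Inno thing, I believe the standard Inno Setup silent-install flags look something like this--quoting from memory and untested by me, so please double-check against the Inno docs:

Hydrus.Network.489.-.Windows.-.Installer.exe /VERYSILENT /SUPPRESSMSGBOXES /NORESTART /DIR="C:\Hydrus Network"

/SILENT shows a progress bar but no prompts; /VERYSILENT shows nothing at all.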
>>17795 >Unironically being that guy Buddy, you just replied to a reply about easier updating with something that would make it ten times harder. Not to mention that hilariously dated meme. >>17796 Yeah, Choco passes /verysilent IIRC, and /DIR would work, but Powershell's quote parsing is fucking indecipherable, Choco's documentation on the matter is outright wrong, and I can't 'sudo' in cmd. I'm considering writing a script to just produce update PRs for the Winget repo myself, since it's starting to seem like that would be easier, but I don't want to go through all of Github's API shit.
PySide is nearly PyPy compatible (see https://bugreports.qt.io/browse/PYSIDE-535). What work would need to be done in Hydrus to support running under PyPy?
I noticed that the API method /add_tags/search_tags can only be limited to specific tag services, not specific file services. So with the API, if I want to do some wholesome searching for "curly_hair" images in my SFW file domain and I type "cur", then the NSFW favourite "cursed_tag" will appear among the results even though no images tagged with "cursed_tag" are within the SFW file domain to be retrieved. If we could do something like "/add_tags/search_tags?file_service_name=sfw", then that would hopefully let the privacy/safety level of the available tags match that of the available files. The only other way I thought about handling this was through tag migration to a separate NSFW tag service, but that would need constant updating to make sure all the new "cursed_tags" are filtered out as they get added to "all known tags" as new images enter the collection. On the other hand, file domains only change when new files are added and removed, so the existing pool of tags within them is less vulnerable to surprises.
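To illustrate, here is roughly the request I have in mind against the Client API. The file_service_name parameter is my proposal and does not exist today, and I'm writing the other parameter names from memory, so treat this as a sketch:

import requests

API = 'http://127.0.0.1:45869'  # default Client API address
HEADERS = {'Hydrus-Client-API-Access-Key': 'your-access-key-here'}

params = {
    'search': 'cur',
    'tag_service_name': 'my tags',   # limiting by tag service works today
    'file_service_name': 'sfw',      # proposed: limit results to a file domain
}
r = requests.get(API + '/add_tags/search_tags', params=params, headers=HEADERS)
print(r.json()['tags'])  # ideally no 'cursed_tag' when the sfw domain is set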
I had a good week. I did a variety of small work and one important bug fix that should speed up media browsing for power users. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=LBzE9JMoCeE windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.Linux.-.Executable.tar.gz I had a good week working on a variety of small jobs. highlights While working with a user, I discovered I had recently messed up the initialisation of the image caches, causing them to use default values and be too small for many power users. This is fixed, so I hope you will get some smoother media viewing, especially when it comes to very large images. I tidied up some weird file service bugs and annoyances that came from the multiple local file services transition. Fixed some weird 'delete from x' entries on the thumbnail menu, stopped spamming 'all my files' in places it wasn't helpful, figured out better service ordering, just some simple workflow stuff. And I think I have fixed another PTR processing bug some users had. If you have had 'this update file was missing' errors that wouldn't fix themselves, please try again and let the automatic maintenance run one more time--it should repair your records this time. And just a fun thing--Mr Bones now has more numbers, and more neatly laid out. full list - misc: - fixed a stupid bug that meant the image caches were initialising with default values (as under _speed and memory_) until you opened and OKed the options dialog (or did some other options-refresh events). sorry for the trouble, please enjoy some smoother image browsing. - mr bones now shows more numbers, and in a neater table. it should be clearer what the percentages are for now, too - the _manage->regenerate_ thumbnail menu has additional quick maintenance commands for presence and integrity checks and regenerating data in the similar files system - wrote a new 'special duplicate' button for the edit shortcut set dialog. the list on this dialog doesn't allow duplicates (which meant the old 'duplicate' button was doing nothing), so this duplicates the current actions with 'incremented' shortcut keys. 'a' becomes 'b', 'ctrl+5' becomes 'ctrl+6', and so on. it doesn't always work, but if you want to make ten shortcuts for setting rating 1-10, this should help - fixed an issue where the thumbnail banner text and the media viewer background text were not changing size or font according to QSS stylesheet rules (issue #1173) - SIGTERM should now cause a clean program exit (previously it killed the GUI App but left some daemon threads alive for thirty seconds or more). unlike SIGINT, it will not ask you if you are sure you want to exit or if you would like to do shutdown maintenance--it just closes the client promptly - fixed a bug in last week's importer page status improvements--the hard drive import page wasn't showing all the updates it should have - brushed up some backup help - . - file services: - fixed a bug where advanced users could set 'all known files'/'all known tags' on a search dropdown.
this search domain is not supported - in the archive/delete filter, if the current location is 'all my files' and the files being deleted are only in one local file domain, the surplus 'all my files' will no longer appear at the top of the filter's commit dialog - the file services in the thumbnail select/remove menu are now sorted in the same order as the file domain button in search dropdowns - the thumbnail select/remove menus now exclude 'all my files' and 'all local files' if those choices are redundant (e.g. if you only have files in 'my files', 'all my files' will be hidden) - fixed some incorrect 'delete from x' actions appearing in thumbnail right-click menus - . - orphan files: - there's a persistent processing bug some users have where some update files are missing but they won't redownload correctly. I think I fixed that this week, so existing maintenance routines will now be able to fix it themselves after another round - fixed some issues related to deleting files from the repository updates file domain. - the 'clear orphan file records' maintenance command now fixes the 'all my files' umbrella services as well as the 'all local files' one. it also has a nicer description, does some additional file-removal cleanup, and triggers a file recount if problems are found - moved 'clear orphan files' to the 'files' maintenance menu next week Next week is a medium size job week. I want to have another go at building a larger metadata import/export pipeline. I want to start unifying tags, urls, ratings, everything into one thing that can eat up or spit out XML or JSON.
(139.62 KB 1200x1200 4ae.jpg)

>>17801 >flash hider Time to go all the way for what's ours.
The AUR is now 2 weeks out of date. Can someone flag it as such?
>>17803 The maintainer says he gets notifications from GitHub on each release, not sure how much it would affect anything. I did it for you anyway with a throwaway account.
Is there a log file of searches made? I had opened a search for an artist earlier, then closed the hydrus client. Now I can't remember what the artist name was! Arghh! Thanks
For a few versions, I haven't been able to seek in animations with the bar at the bottom (but videos work fine). If it's no bother, could this issue be looked at?
Also, is there a way to delete exact (pixel exact) duplicates, by telling hydrus to keep the one with the most tags, and delete any others? Lol, so I don't have to go through 15000 of them :p Thanks
>>17804 Thanks. The new version is up now, so maybe it made a difference?
>>17807 Seconding this.
>>17807 This, but by deleting the one with the larger file size
A couple bugs I want to report. On 488, apologies if something was fixed in the next versions. 1. When I start or stop the client API service, my entire session reloads. Every search page I have up does its search again. This causes quite the slowdown if I have a lot of search pages open. 2. After I updated from 482 to 488, I noticed that the media viewer sometimes kind of "gets stuck" on the previous image. It's noticeable if one image is much wider than the other - for just a moment, the first image will be moved so that its left side is where the left side of the second image would be. It doesn't happen very often, but it's a little annoying. It might just be because my session is too big or my laptop is old, though. Thanks, hydrusman.
>>17810 Well, it won't be larger, because I was talking about exact pixel duplicates. i.e. the same exact file.
(21.20 KB 273x500 black people.jpg)

(146.86 KB 273x500 black people.png)

>>17812 No, it only means the pixels are the same. You can't have two of the same exact file in Hydrus. Pics related are all exact pixel duplicates and have different file sizes, this can happen within the same format as well (i.e. PNG with level 1 compression vs level 9 compression).
>>17812 >exact pixel duplicates. i.e. the same exact file There is some shit tool that losslessly recompresses JPEG files; Pixiv started using it on all uploads two years ago, and Danbooru fags disliked that. PNG is lossless at the pixel level, so you can recompress it anytime. JPEG can be recompressed losslessly at the macroblock level (the data is stored as YCbCr), but software usually asks the decoder for RGB pixels, which causes slight differences when those are re-encoded. Also, this fucking website tries to strip metadata, so I always find duplicates in my downloads that had a "Paint Tool -SAI-" or "Adobe Photoshop" string removed.
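If you want to check a pair yourself, decoding both files and comparing the raw pixels is enough. A rough Pillow sketch (not what hydrus does internally, but the same idea):

from PIL import Image

def are_pixel_dupes(path_a, path_b):
    # same dimensions and identical RGBA bytes -> pixel-for-pixel duplicate
    with Image.open(path_a) as a, Image.open(path_b) as b:
        if a.size != b.size:
            return False
        return a.convert('RGBA').tobytes() == b.convert('RGBA').tobytes()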
How do I get Hydrus to display smileys in tags? Like in this username https://www.pixiv.net/en/users/16968790
>>17798 I am afraid I don't know much about PyPy, so I can't talk too cleverly. In terms of libraries, OpenCV is usually a pain, and in our case that means 'opencv-python-headless' on pip. I see on their homepage they say they support twisted and numpy, which are the other two nightmares that usually pop up. Others that do some funky things or are otherwise too integral to the program to ever replace: beautifulsoup4 Pillow python-mpv requests This page https://www.pypy.org/compat.html suggests everything works, but you don't get the JIT benefit unless you reshape your code? So maybe hydrus would work out of the box, but my weirdass code wouldn't get the benefit, perhaps? If you give this a go when PySide is ready, let me know how it goes and if I can do anything simple to help out! >>17799 Yep, thank you. I hope to add complex file location definitions to the Client API soon as part of finishing up multiple local file services. Should be within the next couple of months. >>17805 No, sorry. I don't do a huge amount of logging of your actions. You might like to try doing searches for 'system:time' and then hitting the 'last viewed' tab. Anything you viewed in the last couple days, that sort of thing, may give you a refresher. Note that you need to look at an image for a few seconds for it to be registered in this system.
>>17806 Thank you for this report. I am sorry for the trouble. Can you talk more about this? I am afraid things seem to be working fine for me, so I need more info to be able to reproduce it. Under options->media, how are 'animations' set to display? Is it the native viewer, or mpv? Is it that in both media and preview windows? When you move your mouse down to the scanbar, what happens? Is it normally about three pixels tall, and does it then jump up to about thirty tall? What happens when you click the scanbar in an animation? Does the caret block move to where you click, or does nothing happen at all? Does the animation pause when you click? How exactly does seeking not work? By animations, do you mean gifs? How about this apng I have tried to attach to this post (forgive me if 8chan munges it)? >>17807 >>17809 >>17810 >>17812 >>17813 >>17814 Now that I have rewritten the guts of the dupe filter, this is my dream for the next step of the duplicates system, and I hope to have something working well before the year is done. We can search the database for pixel dupes, so I think I can make a system to automatically resolve their pairs in certain situations. I am going to start with the easiest resolution first, which is 'A is a jpeg, B is a png'. In this case, A is always better, since B is a Clipboard.png of A that someone posted once. Once I have a workflow that does that simple decision well, I am going to hang more bells and whistles on it so it can make more decisions on things like 'yeah, I want the larger one, always' and move to wider pairs than just pixel dupes. Eventually I hope to figure out a metric of similarity, so we can say something like 'if I download a file and there is a file in the client that looks more than 97% similar, but the new file is less than 80% quality/size/resolution, ditch the new file'. The most important part of this system will be that it is optional and configurable. As this short convo shows, people have radically different ideas on what is worth keeping and what is better, so I want to let you set the automatic rules that work for you. Same with the metric of similarity. Some people will only be comfortable with pixel duplicates, some will be happy with 99.8% similar, some will want to churn through things a bit faster at 95%, or whatever.
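To give a flavour of what I mean by configurable rules, the decision logic for a pixel-duplicate pair might look something like this. Totally hypothetical sketch, names made up:

def resolve_pixel_dupe_pair(a, b):
    # a and b are (hash, mime, size_in_bytes) tuples for a confirmed pixel-dupe pair
    # rule 1: a jpeg always beats its pixel-perfect png copy
    mimes = {a[1], b[1]}
    if mimes == {'image/jpeg', 'image/png'}:
        return (a, b) if a[1] == 'image/jpeg' else (b, a)  # (keep, delete)
    # rule 2, opt-in for those who want it: always prefer the smaller file
    # if a[2] != b[2]:
    #     return (a, b) if a[2] < b[2] else (b, a)
    return None  # no rule fired; leave the pair for the human filter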
>>17811 Thank you for these reports. 1) Are you sure the searches actually refresh--like if you hit F5--or could it just be that the thumbnails are being reloaded? When you make a big service change, it triggers an options reload, and some graphical things refresh. This gets on another guy's nerves, I know, so I will make it not do this. 2) Yeah, this is some bullshit, and I don't know exactly what is going on. I had hoped a cache fix in 490 would have relieved it back to how it was a few months ago, but I'm still noticing it sometimes. You might notice things are better if you update, not sure. It mostly hits when you move from an image to a video or vice versa. There's a long-time layout problem at the core of this that I need to fix, so I also have a plan for that. I do know about the bug, and I will keep working on it, sorry for the trouble. I'm also going to have to dive into the code and figure out what I changed recently that made this worse. >>17815 Those characters are valid unicode, so they should propagate just like any other text. I don't know a huge amount about the pixiv downloader (I'm not an IRL user of the site), so maybe it is cutting those characters off somehow? EDIT: I tested it, and the tag seems to parse ok in the downloader. Do you have any trouble with it? Secret tip if you are a Windows user: Hit Win+Period and you get their new actually-cool weird-character-enterer charmap replacement. 🔱
>>17817 >Now that I rewrote the guts of the dupe filter, this is my dream for the next step of the duplicates system... Thanks! That's what I'm looking for! I know I have a TON of dupes in my database.
I want to subscribe to some artist tag on a *booru (~200 images), and I already have the bits that are good for me downloaded via a manual import dir. Is there a way to mark this subscription as up-to-date, so the images I don't need won't be retrieved?
>>17818 I am on a newly installed Linux/PopOS, so it is possible I am missing a package. I do have ttf-ancient-fonts though. In Hydrus all I see is little squares. If I copy it to a text editor I do get to see the smileys. So it looks to me like it doesn't see the font. If I copy the artist name from Pixiv and put it in Hydrus it will not show me smileys either, but it does find the artist. Version 452 btw, might be that, but it worked fine on my Apple.
Any chance for a configurable larger page pre-fetch for thumbnails? Since memory usage is trivial it'd be especially useful to have the thumbs for a full page load in the background for instant viewing, vs the 1-2 second delay when scrolling.
>>17801 Nice, I've always liked Mr. Bones. Now I'm confident in my claim that 99% of everything is shit. I've started to download lots of files from kemono.party that don't have tags and my autism won't let me archive them until they have at least 20 or so basic tags, send help Also just noticed I've been using my current database for over a year now. I recently found an old backup of a database from before then but assumed they couldn't be combined. Anyway, thanks for continuing to improve this extremely useful program. >>17817 >A is always better, since B is a Clipboard.png of A that someone posted once. This is almost always the case, but I have seen at least one PNG that was a pixel dupe of a JPG and was smaller somehow. I don't know what the image was and probably deleted them anyway, but remember doing a double take. Not sure which should be considered better in a case like that.
>>17823 99% deleted? Holy shit. >don't have tags and my autism won't let me archive them until they have at least 20 or so basic tags, send help I have an import page with ~4000 files with this exact issue... I don't think you have much to worry about if you're capable of deleting 200,000 files... But are basic tags even necessary when you've only got 1700 files? >>17818 >Are you sure the searches actually refresh--like if you hit F5--or could it be just that the thumbnails are being reloaded? When you make a big service change, it triggers an options reload, and some graphical things refresh. Yes. If I make a search, remove some thumbnails from the page, and then start/stop the client API service, "Loading..." appears at the bottom, the stop button next to the search box appears, and eventually when it finishes the thumbnails I removed are back. When I just press apply on the normal options menu, the thumbnails reload but the search isn't done again.
For some reason my Japanese IME doesn't work at all in any hydrus text boxes. Even when I'm switched to it, my keyboard just inputs normal roman characters. I'm also not able to switch to and from the IME and normal text input when in hydrus windows. It's like it gets blocked or ignored. Hydrus is the only program so far that's given me this problem, so I don't think it's the input method that's the issue. I tried that special insert mode but it still doesn't work. The input method I'm using is fcitx5 with kkc. I tried using mozc instead of kkc but that still doesn't work. I'm on the Linux version of hydrus, if that's important.
I was ill this week and am short on work time. I will spend tomorrow doing some more normal work instead of the release. 491 should be on the 13th of July. Thanks everyone!
>>17825 I noticed this as well, ibus+fcitx5. I remember it working at some point in the past. Not a huge problem for me, I alias the most common Japanese tags to English ones because it's less of a hassle to search for English text either way.
>>17827 Retard moment, I actually use fcitx5+mozc.
>>17828 Hmm. My fcitx+mozc is working well. I remember fcitx5 is still wacky; try using fcitx instead.
Is there a way to search for files that have been previously deleted and re-imported? I know that Hydrus has this information, because if I try to delete a file that I have deleted in the past with the advanced delete menu, it has "keep previous reason" available. Something like this could help clean up an inbox if you accidentally left "exclude deleted files" off on a search. Maybe not much use for the average user, but for someone like me it would be pretty handy. If there isn't, maybe something like "system:previously deleted" could be added.
When using the duplicate filter, is it possible to copy PTR tags to 'my tags' when copying from worse to better?
Could you add a number next to the rating shapes for what rating you gave a file? It's kinda difficult to determine the rating of a file on a 20-point (plus 0, so 21) rating service just by the filled-in shapes alone. In fact, I'd say that once you get past 10 shapes, it should probably be represented by just a number, but just having a number in addition to the shapes is fine with me. Another solution could be to represent 20-shape rating services as a 10-shape service that allows half-filled shapes (there's a little sketch of the math after this post). So a rating of 1 is a single half-filled shape, 2 is 1 full shape, 4 is 2 full shapes, and so on until 20 is all 10 shapes filled. The only issue with this is that at a glance, it means that empty 10 and 20 shape rating services look the same. This could be solved with the little number next to the shapes I was just talking about, or it might not even need to be solved, really, because a given number of filled shapes on a "20" shape rating service should about match the same rating as the same number of shapes filled in on a 10 shape service. By that I mean that 10/20 (5 shapes filled) is about equal to 5/10 (also 5 shapes filled). Of course there's also the solution of just splitting the shapes into 2 rows, but I have a feeling you didn't do that because it would make 1 rating service look like 2 different ones. Anyway, that was a bit of a tangent. I really just came here to ask for the little numbers to make the rating easier to read, but that other stuff could be cool too.
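Just to show the half-shape math works out, a quick sketch (made-up function, purely illustrative):

def shapes_for_rating(rating, max_rating=20, n_shapes=10):
    # map a 0..max_rating value onto n_shapes shapes with half-fill steps,
    # plus the little number that makes it readable at a glance
    half_units = rating * 2 * n_shapes // max_rating
    full, half = divmod(half_units, 2)
    empty = n_shapes - full - half
    return '#' * full + ':' * half + '.' * empty + f' ({rating}/{max_rating})'

# shapes_for_rating(1)  -> ':......... (1/20)'  (one half shape)
# shapes_for_rating(4)  -> '##........ (4/20)'  (two full shapes)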
(33.09 KB 945x218 2022-07-09_10-39-54.png)

I accidentally imported a couple of files into Hydrus that actually belonged somewhere else, managed by a separate application. I did not remove them from the Hydrus db, I just had the other application move the files to where they should be. In retrospect, I should have anticipated that Hydrus would take issue with this, and would not just silently and unquestioningly ignore these missing files. Whoops. Anyway now I have pic related. What can I do about it? How do I fix this outside of Hydrus?
>>17820 Not a simple way, but it isn't something to worry about too much. Most boorus--though not all--will offer 'md5' hashes on their site, which hydrus understands, allowing it to recognise when it already has a file and skip the download. For those sites that don't, you'll only have to do the surplus download once and then hydrus won't hit that URL again, so you are talking a couple hundred MB wasted at most, normally. If that amount of data is a problem for your internet connection, then I think you'll want to create your new subscription with very small 'first time' and 'periodic' file limits. I think the default is 100/100, so try instead setting it to 5/5, which will stop it reaching too far into the past. Come back in a month and raise it to 25/25. That's a bit hacky, so you are probably babysitting it for a bit no matter what. >>17821 Ah, yeah, if you get little squares, then your OS/Qt can't generate the right characters given your fonts. You might see a difference between the tag in your tagbox vs the tag in a thumbnail banner, like in my picture. iirc, they actually use slightly different graphics engines, so sometimes Qt can figure out a unicode character on the thumbnail banner when it can't on the taglist. Try to get some more font support for your OS, I guess? Normally any new OS can figure this stuff out. But I don't know anything about PopOS or Linux font stuff, I'm afraid. Or, if you are open to some ugly technical work, you could play around with the font settings in your QSS files in install_dir/static/qss (and options->style). Maybe one font can do it, but another can't? As a side thing, what about some more normal unicode, like these characters: 日本語 Do they show in a tag ok? I'd expect you can show those ok, but not the emoji things, since that is basically just a limited utf-8 character set in that font.
>>17823 This is cool. You are probably the most discerning user I have seen, well done! For the jpeg/png thing, I was working with a user the other day on a situation like this. It turned out the giganto jpeg actually had a fucking ton of Adobe garbage in its metadata. It was some layer/element definitions stuff that had somehow been embedded in the jpeg header. I'd still generally say the jpeg was 'better' in that case, since it was the original, but a stripped jpeg that was pixel perfect would probably be better again. I am biased here, though--I harbour a deep hatred for pngs of busy raster graphics. As always, any of these systems will be highly customisable and optional; I know feelings differ. >>17824 Damn, thanks. I'll try to work on this this week, let me know if I fix it! >>17825 >>17827 Ah, damn, thanks for letting me know. My tag input has some weird focus stuff going on, so maybe that is conflicting with IME entry on a newer version, or maybe I changed something on my end recently. Can you try two things for me? 1) Does IME work in a really generic text box, like the one at options->gui->application display name? That's a boring text box that does nothing special, so if IME doesn't work there, this may be a Qt problem, not a hydrus problem. 2) Does IME work if you hit options->search and disable the 'autocomplete ... float ...' options? You'll have to restart the client to get it to kick in, but it will embed the autocomplete dropdown box into the page. This solves several weird autocomplete problems. If this is a Qt problem, I hope that we will be testing out Qt6 in the next few months, and that will have a lot of bug fixes, so that may be the time to revisit this.
>>17830 I am not sure, but I don't think so in the UI. When it is able to repopulate the 'keep previous reason' part, that is kind of by accident, and I am not sure how rigorous the database has been about keeping those records. I think for some periods, the full delete data has been removed, including previous delete reason. You can search deleted files, using 'system:file service', or by using the file domain selector button and selecting 'multiple locations' in help->advanced mode, but that searches the actual deleted file records. That stuff is always cleared out when you re-import. I don't officially keep track of which files have been re-imported after a delete. If you wanted to hack a solution to try and infer this data, you could do it using SQLite. You'd fetch all the 'current' hash_ids and intersect with all the hash_ids that had a deletion reason record. Let me know if you want to do this for a larger technical job. Unfortunately this stuff wouldn't really be compatible with anything in the UI, though. >>17831 No, I don't think so. It just does intra-service copying atm. Maybe in the future. If you are feeling clever, you could probably do this manually by doing a search for 'system:file relationships >0 dupes' and 'system:file relationships: is not the best quality file of its duplicate group' and then going ctrl+a, and then F3, and then in the manage tags window clicking cog button->migrate tags for these files, and then you'd be able to send PTR tags to 'my files' just for them. If you try this, make a backup beforehand, just in case it goes wrong. 'migrate tags' is powerful and dangerous. >>17832 Thanks, these are good ideas. Ratings are still pretty much on version 1.0 of their UI, and I should really get around to improving them and adding more display options. In the meantime, a really quick thing I can do is just say the rating number on their tooltip, so at least there is one way to read it.
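On the SQLite idea above, the query shape would be roughly this. Table and column names here are purely illustrative--the real client.db schema is different and messier, so don't take these literally:

import sqlite3

con = sqlite3.connect('client.db')  # work on a backup copy, never the live db
reimported = con.execute('''
    SELECT hash_id FROM current_files
    INTERSECT
    SELECT hash_id FROM deletion_reasons
''').fetchall()
print(len(reimported), 'files are current but still carry an old deletion reason')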
>>17833 This dialog is not actually worried about missing files, but entire missing folders, so there may be something bigger going on here. If you go to install_dir/db/client_files, you should see 512 folders. 256 start fxx, 256 start txx. That dialog is worried that some of those are missing. Did your other application move or delete the entire folders? If it did, then a lot of files have been moved somewhere. The solution here is to move the fxx or txx folders back to hydrus's proper location and boot again. If you cannot recover the missing sub-folders, then you should create empty ones named according to what that dialog says, and then start the process of recovering your missing files. The document at install_dir/db/help my media files are broke.txt is good background reading here. If you are missing all 256 fxx folders, then you have a very serious problem. Let me know. Anyway, if you just have a couple missing files, hydrus doesn't mind too much. It'll boot ok, but if you try to load the files into thumbnails, it'll then notice and give you some polite error popups, and then you can start fixing it. As for the 'may be something bigger going on here', maybe this other application moved or deleted sub-folders, but if you know it wouldn't have, then you have a big-deal problem with your hard drive. Again, it depends on how serious the dialog you posted a screenshot of is. If you are just missing one 'txx' location, it isn't so bad, but if you are missing fifty-three different places, then you have had a serious hard drive problem. help my db is broke.txt, also in the db directory, may be some other useful background reading here, if you need to check your drive is healthy. Let me know how you get on!
>>17835 >Does IME work in a really generic text box, like the one options->gui->application display name? It doesn't. Trying to switch layouts to the ime just inputs a space, seemingly bypassing the shortcut (which for me is Super+Space, but I also tried Ctrl+Space before and that didn't work either) and just entering the raw text. If I manually switch to the japanese input outside the window, it just continues to input the raw characters. No converting to japanese characters. No underline. No suggestion box popping up. >Does IME work if you hit options->search and disable the 'autocomplete ... float ...' options? disabling those 2 options also seems to have no effect before or after restarting. >If this is a Qt problem, I hope that we will be testing out Qt6 in the next few months, and that will have a lot of bug fixes, so that may be the time to revisit this. that'd really suck but if that's the case then it's out of your hands so oh well I guess I'd have to wait.
Hey guys, I accidentally downloaded some pics, but forgot to check the grab tags box, so now they don't have any tags. Is there any way to download the tags for these files without having to redownload the pics?
>>17839 I don't think hydrus ever actually redownloads the file if you already have it in the database, just the page. In the tag import options, there should be an option to force page fetching even if hydrus recognizes the url. Turn that on temporarily and that should be what you want.
>>17840 Thanks, I'll try that. I wonder if there is a way you can just select the files, and then say "redownload these", and then set the "force page fetching", etc. to get whatever tag data, etc. you wanted without having to redownload the pic itself.
It appears that the option to disallow deleting archived files is buggy. It just silently ignores attempts to delete the file, even if that's what you really want. By that I mean stuff like the duplicate filter will say that it's going to delete the file, but then nothing happens because the file is archived. Another example would be where you select some files to send to the trash, but the ones that are archived are just silently not deleted. I feel like instead of silently doing nothing, it'd be better if it could let you know that nothing happened because the file is archived, and either cancel the whole operation or just the deletion of the archived files. Or, even better, just pop up an extra confirmation asking if you want to trash the archived files too. I like the safety of having something stopping me from accidentally deleting files I didn't want to delete, but I feel like this implementation is too simple and leads to annoying circumstances.
Hmm, I can see the files I need to regenerate tags for, I just don't know if there is any way to select them and tell hydrus to redownload the tags for them. Using system:untagged
>>17840 Thanks. That did work for a majority of the files.
>>17843 Ok, I figured out how to do this using Hydrus Companion. Search for all the files that have no tags, using System: Number of tags, 0. Select all the files, and tell hydrus to open them all in tabs on your web browser, using right click, Known URL's, Open. This will open them all in tabs on your web browser. Using Hydrus Companion, select "Send all tabs to Hydrus". This will import all the urls to Hydrus, including tags. You can now delete all the untagged images in hydrus, because hydrus now has duplicates of them that are tagged.
Was trying to use the ! operator in hydrus, but it's not working. Hit the OR* button in search; all the other operators seem to work, like &&, but ! doesn't. Neither does - or not. For instance, johnny -test or johnny !test doesn't work. Putting a space between the operator and the tag doesn't work either.
>>17823 >I've started to download lots of files from kemono.party that don't have tags and my autism won't let me archive them until they have at least 20 or so basic tags, send help Personally, I have a 10-star rating for archive. If I encounter the exact same file 11 times (they start with 0 stars) and don't get rid of it, it moves into archive; at my discretion, certain files move to archive without the 11 encounters. I find this speeds up parsing quite a bit, along with a 'delete later' button that just sets a delete rating and moves to the next image. This way knee-jerk decisions are encouraged, and I can re-parse the deletes via thumbnails before I fully delete.
>>17835 If it ever comes to a "these files are the same", I always, without any hesitation, go for the smaller file, because there is no reason not to at that point. With that said, if we do get auto-delete functions, is there any way for the program to figure out if any of the files have something hidden in them? Or even just a general check for all images. I know I have found ones with video/audio/rars hidden in them, and am kind of paranoid about that.
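For what it's worth, the crudest check I know of is looking for data appended after a jpeg's end-of-image marker, which is how a lot of those hidden rars work. A rough sketch--embedded thumbnails and unlucky byte runs can fool it, so treat a hit as 'worth a human look', not proof:

def trailing_bytes_after_jpeg(path):
    # count bytes after the last end-of-image marker (FF D9)
    with open(path, 'rb') as f:
        data = f.read()
    eoi = data.rfind(b'\xff\xd9')
    if eoi == -1:
        return None  # no EOI marker at all; not a complete jpeg
    return len(data) - (eoi + 2)  # anything much above 0 is suspicious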
>>17846 confirming that this doesn't work for me either. The advanced searching must be bugged.
>>17849 Thanks for confirming. It used to work, but something broke.
There's this issue with nested tabs where, when using the shortcut to move to the next or previous tab, if you move to a tab that has subtabs (so a page of pages), the innermost row of tabs will capture the focus and cause the shortcuts to move between tabs in that innermost row instead of at the higher level you were at before. Could you fix that behavior so that the shortcuts keep you at the level you're on when moving left and right between pages? Maybe you could also add 2 more shortcuts to move up and down a level for when you do want to move between rows.
I had a good couple of weeks. I overhauled an ancient system behind the scenes and did a heap of little jobs, fixing a bunch of bugs and improving quality of life. The release should be as normal tomorrow.
Maybe I'm missing something here, but when downloading a gallery with the "download -> url import"-type page, next pages aren't recognized / downloaded (the search log stops at the first page and states "1 successful"). The link parses fine in the parser's test parse; I can see the "next page url (priority 50)" in the results window. Is this intended / am I simply missing something / do I really need to make a GUG just to get my next page? Thank you!
>>17853 Network > Logins > Manage Logins Make sure you're logged in to the site.
>>17854 Oh wait, sorry, your talking about URL import. Not sure if login would affect that.
https://www.youtube.com/watch?v=OQEDWiM-QRI windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.Linux.-.Executable.tar.gz I had a good couple of weeks doing a behind-the-scenes overhaul and a variety of quality of life work. I was ill last week and put off the release, but I am feeling great now. metadata import/export As is often the case, an important overhaul makes few actual changes on the front end. I've been trying to do this for a while, but now the 'export/import tags in a neighbouring .txt file' routine works on completely new tech. Rather than hardcoding tags into .txt files, it now uses a modular system that I will be able to expand in future to support filtering and string processing, more metadata types like ratings and URLs, JSON and XML as the file formats, and will even allow funky migrations like converting tags to URLs. As a side benefit, Export Folders now support tags .txt export. For now, all the UI front-end looks the same, but please expect this to change in future. I'll be writing some nice unified panels and dialogs to handle the new objects as I write them. advanced user highlights The 'OR*' advanced tag input now supports system predicates! It uses the same system predicate parser as the Client API, so you can now type or copy/paste most system predicate text and get something both useful and complicated. The text that parses isn't always exactly the same as the predicate label, so check out the big list of example parseable system predicates here: https://hydrusnetwork.github.io/hydrus/developer_api.html#get_files_search_files Also, if you are a parser creator, String Processing now has a Tag Filter processing step. Let's say you can grab all the tags from somewhere but you need to filter out a handful of non-tag text like '+' and '?', or you are able to create hydrus namespaced tags and want to filter by namespace--just insert this into your string processing and it should be much easier than messing around with long regexes. full list - system predicates: - the advanced OR input, where you can type tags in complicated logical expressions, now supports system predicates! most system predicates are supported using their typical display strings. it uses the same engine as the client api, so check the examples here https://hydrusnetwork.github.io/hydrus/developer_api.html#get_files_search_files sorry for the delay here - the advanced input also runs tags better through the hydrus tag 'cleaning' process, so things like whitespace between the namespace colon and the subtag are cleaned up correctly, and invalid tags should be excluded - it also starts with the keyboard focus in the text input - and I think I fixed an issue with '!', 'not', or '-' negation prefixes not parsing - highlighted the example parseable system predicate texts in the Client API help, and added 'last viewed' to it - .
- misc: - altering your services in _manage services_ no longer causes a full page refresh for all currently open search pages - in a related thing, if you click the file or tag domain of a file search page to be the same as it just was, you no longer get a page refresh - the rating widgets now show their current rating value on their tooltips - when setting a numerical rating by a drag, it no longer matters if your mouse strays above or below the widget--it will still set - the String Processing system has a new 'String Tag Filter' processing step. this applies the normal tag filtering object to your list of strings and also performs the hydrus 'tag cleaning' process on them, making them all lowercase and trimming whitespace and so on - the sibling/parent sync is now even more polite when told to do work in 'normal' time. this has been hitting a lot of new users really hard, so it should now really trickle work during normal time, throttling down when it hits a bump to avoid stunlocking you but also responding quickly to recent changes if you are fully synced - the database repair code is now better at healing damaged fast-text-search (FTS) tables. previously, in cases of partial damage to the virtual table, the repair code would error out - fixed a bug where certain search predicate calendar dates that are acceptable in Linux but not in Windows caused Windows to fail to load the session. if you put in 1965 as a search date, it should now revert to the current time on next load etc... - the test to see if a directory is writeable-to is improved and now handles Windows's Program Files directory correctly - improved how the boot scripts handle incorrect/bad database directory paths. the error handling works better, and it figures out a fallback location for crash.log better - a new button on 'review services' now lets advanced users copy the service key to the clipboard - the migrate tags dialog now lists file repositories, ipfs services, and 'all my files' as potential file filter domains - when checking it has space for a large transaction like a vacuum, hydrus now tries to check if you are running on a ramdisk or other severely space-limited temp dir and offers more text if this is true
- updated the '4chan style thread api parser' to handle posts with multiple files, which fixes tvchan.moe and probably anything else running NPFchan - some logic testing around showing 'return to inbox' and the actual operation is fixed so it only applies to local files. in some weird advanced situations, you could previously send deleted files to inbox - . - new import/export framework: - started a new modular metadata import/export pipeline. this thing starts out today by doing the work of newline-separated tags in a .txt sidecar file and will expand to do all sorts of metadata in other formats like JSON and XML. it will also, eventually, support arbitrary cross-type conversions like tags to urls or ratings to tags - export folders now support '.txt' sidecar tag exporting! - the '.txt' sidecar tag importing in import folders or manual imports is now handled by the new pipeline - the '.txt' sidecar exporting in the manual export dialog is now handled by the new pipeline - please expect the UI around '.txt' sidecar importing and exporting to change significantly in future. you'll be selecting different metadata types to import or export, making string processing steps to alter or filter what you get, and of course being able to compile it all into more complicated filetypes - . - cleanup and refactoring: - mr bones gets two new columns to line up the numbers better - a bunch of export code got moved around. created a new module 'exporting', and moved ClientExporting.py to it, renaming to ClientExportingFiles.py - removed an old prototype for sidecar exporting and related plans for UI - the 'missing file folders on boot' dialog now points users to 'help my media files are broke.txt' - brushed up the 'help my x is broke.txt' documents in the database directory a little - fixed some surplus double backslashes in the help - a secret tiny label change/fix, let's see if anyone notices - cleaned up how the rating widgets manage and update rating state. it was ancient bad code - updated how different rating values are converted to UI text - misc cleanup of some free space checking code - fixed some bad quote characters in client api help JSON examples - improved some error handling for uploading pending content and sped up file uploads a little next week Next week is a cleanup week. I'll try and break up some more monolithic database code.
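For anyone unfamiliar with the sidecar convention: 'file.jpg' gets a 'file.jpg.txt' beside it with one tag per line. A sketch of what an exporter does with it--not the client's real code, just the idea:

import shutil
from pathlib import Path

def export_with_sidecar(src, export_dir, tags):
    out = Path(export_dir)
    out.mkdir(parents=True, exist_ok=True)
    dest = out / Path(src).name
    shutil.copy2(src, dest)  # copy the media file itself
    # write the newline-separated tag sidecar next to it
    (out / (dest.name + '.txt')).write_text('\n'.join(tags), encoding='utf-8')

export_with_sidecar('file.jpg', 'exports', ['blue eyes', 'creator:someone'])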
Nice! Thank you!!
>>17856 Welcome back OP and thanks.
I think the descriptions in the "speed and memory" options should indicate more clearly that the "in % of cache" limits are restricting based on an image's dimensions alone. Perhaps stating instead: "Maximum cache requirement (in %) of an image" - "at most a *WidthxHeight* or equivalent image".
The NOT operand seems to work again. For instance, -chair seems to block any pics with the tag chair. If you want it with other tags, using - with AND seems to work, i.e. red hair and -chair works.
Is there a way to get subscriptions to notify you when a download fails, instead of it just silently going to the next download? Maybe a message popping up with the download number and error note?
>>17838 Damn, thank you for the update. I will do some investigation to see if this really is some Qt5 issue or what the hell is going on here. >>17841 In future, yeah, select the files you want, and then right-click->known urls->copy 7 safebooru urls or whatever, and then paste those URLs into a new 'urls downloader' page that you've set up with the right 'get the page anyway' tag import options. It'll skip redownloading the actual image in all these cases, by the way. It generally tries to be as efficient as possible. When you say 'force page fetch', it only gets the html page (which has the tags). >>17842 Thanks. I think this is a good idea. I will search for the places this tech works and see if I can improve the workflow and add another yes/no or something.
>>17846 >>17849 >>17860 Sorry for the trouble and thanks for the report, I saw it late Tuesday night. I must have broken it some time ago by accident. Let me know if you have any more problems. >>17848 This is probably tricky to do automatically, since by its nature, any hidden content is interesting and something you need a human brain to figure out and understand. What I can do is add rules so you can very carefully specify what you want deleted according to your own confidence. Maybe you only start with the uncontroversial jpg/png swap, and then later, if I develop a 'system:exif has some interesting shit!' search predicate, you can swap that in to shape what an automatic decider will or won't act on, or maybe instead of deleting a file like that, it gets sent to a different file service for you to put human eyes on later. This might also be a job for the Client API, if there are good external scanning tools. Also, while people feel differently, I think this specific subject may also be a FOMO thing. I struggle with this myself a lot, but I'm trying to teach myself that with millions of files, most of the files I put through the archive/delete filter that are just 'ok' I am probably never going to see again in my whole life, so saving them may end up being mostly a moot point. There's no point sweating at the thought of losing one or two things here and there, because there is always a firehose of new great things coming tomorrow, and you can't keep up with that either. >>17851 I was talking with some users about this a few weeks ago. Unfortunately it is tricky for me to support this behaviour without rejiggering a bunch of how these actions work. I will investigate to see what else I can do. Someone did mention that if you hit shift+tab several times on a page, the focus will actually go to the page tab, and Qt can handle navigation keys like left/right better than my in-built shortcuts.
>>17853 Yep, sorry, the urls downloader page can only do one page of a gallery page. That page type doesn't have all the guts of the cleverer downloader. You can bodge what you want, I think, if you open a gallery downloader page, create a dummy query using whatever text you want, pause and kill that query in the 'search log', and then (maybe with help->advanced mode on), click the dropdown arrow on 'search log' and select 'import->from clipboard'. >>17859 Thanks, I will. >>17861 They are an automatic system, so I don't like to have them spam too much at you, lest it make 1,200 popups one day. If you are downloading from a site that regularly has connection trouble, I recommend you make a weekly job to check your 'manage subscriptions' window and look for '2F' in the 'items' column, and hit 'retry failed' there. In future, I want to have a nicer domain-based error system that can handle all this better. Maybe then subscriptions will have a nice avenue to report problems on a particular domain and even come back and try again later automatically, so you won't even notice anything was ever wrong.
Is hydrus multi-threaded? I've been deciding between an Intel i5-12400 and an i5-12600, and was wondering if the additional cores found in the 12600 will result in better performance.
>>17865 They're literally identical; one just has a higher clock speed, so obviously it will execute everything faster regardless of what program. They literally just tossed in a better graphics chip to justify reselling the same CPU but overclocked.
Could you adjust the way the "if one is archived, archive the other" option in the duplicate merge options works, so that it won't archive a file you chose to delete in the duplicate filter? Because of the way it works now, I noticed a bug: if you have the "don't allow deletion of archived files" option enabled, the duplicate filter will archive the file that you marked to delete, then fail to delete it because of that stop-archive-deletion option, and just silently move on, not deleting the file at all. I caught this after a week of doing a bunch of duplicate filtering stuff and... yeah, I had to do a lot of looking through those files again. I like both of those options, so making the merge archiving not archive files you told the filter to delete, so that the filter can properly delete them, would help.
>>17866 Ah I should have been referring to the 12600K, which has additional efficiency cores
>>17868 Strange chip. It still only has 6 real cores; the other 4 are single-thread CPUs with some other architecture and a low clock speed. No idea what their purpose is, they look like low-power phone CPUs. Even without them it's still faster regardless of multithreading, because the normal cores have a higher clock speed. Also there's 2MB of extra cache in this one, which does a lot to boost programs written by idiots that can't optimize; it makes no difference for optimized programs, though. But if it's pyshit, you know which category it belongs to.
>>17797 Just wanted to say, I tested this shit in command prompt, too, and it's not powershell's fault, Chocolatey is just a piece of shit with broken quote parsing, incorrect documentation and no interest in learning about their own mistakes. Fuck chocolatey.
(6.40 KB 264x231 ime example.png)

>>17825 >>17827 >>17838 >>17862 Hey, I am sorry to be the bearer of bad news, but I figured out how IME works on Windows today and did not have any trouble turning it on. I think therefore that the typical Qt hooks that enable IME are working ok, and I haven't fundamentally customised anything to break that. So, with the proviso that I have never used this stuff before so I'm no expert, I think this is an error related to:
A) fcitx5/kkc/mozc, or your system overall, maybe a recent update on their end that conflicts with Qt5?
B) Your shortcut to turn on IME conflicting with the hydrus shortcuts system.
C) Something I did, but that for whatever reason only affects the Linux build. Could be something esoteric like the github build environment.
If you can, please test B by turning on help->debug->report modes->shortcut report mode and then trying to turn your IME on in the tag search box. You should get some popups--is it trying to match what you type to an action? On Windows, I set the shortcut to ctrl+space, and it worked ok on clever and dumb text inputs. For C, we might be able to test if there is a difference between the built release and the source, assuming you are running the built release. If this guy is running from source >>17829 , that may explain why it is working for him, and this is indeed some weird setting where the 'Qt make IME work' library isn't being added for whatever reason in github. If you are also running from source (e.g. I believe the Arch package does this), then the problem wouldn't be with the build, though. Last ditch possibility: I am expecting to roll out test builds on Qt6, probably in a few weeks. That will have a lot of fixes over Qt5, so maybe whatever is unhappy here gets fixed magically. Let me know how you get on!
>>17865 >>17866 >>17868 >>17869 Hydrus runs on stock Python, which means it has 'threads', but the GIL only lets one of them execute Python code at a time, no matter how many cores you have. However, when heavy math occurs, stuff like thumbnail generation, I drop down to an optimised C++ library and sometimes FFMPEG as an external exe, where there is no such limit (a toy demonstration below). A CPU with more cores would see faster imports if you had, say, ten different things trying to import video at the same time, but I think it would only be 5-10% faster during that heavy period. The main bottleneck in hydrus is my shitty blocking UI code, which is my job to fix. The best thing you can generally do for hydrus performance is keep your session slender (under ten million weight in the pages menu) and run the database off an SSD, which I assume you are already planning to do. >>17867 Thanks, I'll try to figure this out. It looks like the dupe filter needs a couple of better hooks to deal with the archive-lock option, as in >>17842 .
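If you want to see that limit for yourself, here is a toy script--pure-Python CPU work gains nothing from a second thread on stock CPython (exact timings vary per machine):

import threading
import time

def count(n):
    # pure-Python CPU work; the GIL lets only one thread run this at a time
    while n:
        n -= 1

start = time.perf_counter()
count(20_000_000)
count(20_000_000)
print('sequential:', round(time.perf_counter() - start, 2), 's')

start = time.perf_counter()
threads = [threading.Thread(target=count, args=(20_000_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# roughly the same wall time as sequential, or worse, despite two threads
print('two threads:', round(time.perf_counter() - start, 2), 's')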
>>17871 >You should get some popups--is it trying to match what you type to an action? I don't get any popups. Instead, when I hit Super+Space, it just passes the space input through as a space, seeming to ignore the Super. I switched the shortcut to swap inputs to Ctrl+Space, and when I tried using that shortcut, it did give a popup that the shortcut was passing through and that it was "in a state to catch it", but nothing else happened. With the Ctrl+Space shortcut, it didn't pass a space character through to the text box. It just did nothing. I don't think this is related to the shortcut though, because even using the mouse to click the input button on the panel (I think it's called the taskbar on Windows), it still doesn't let me switch inputs when hydrus is the focused window. It's like hydrus itself is blocking the input method when it has focus. >If you are also running from source (e.g. I believe the Arch package does), then the problem wouldn't be with the build, though. I'm not running from source. I'm using the prebuilt Linux release. I'm not actually a programmer, so I don't really know how to run things from source and compile software and stuff like that. >Last ditch possibility: I am expecting to roll out test builds in Qt6, probably in a few weeks. This will have a lot of fixes on Qt5, so maybe whatever is unhappy here gets fixed magically. I'll try that out when it comes if nothing else works.
I had a good week. I did a mix of background cleanup along with quality of life and other improvements, mostly for advanced users. The release should be as normal tomorrow. >>17873 Damn. Well, at least we know it isn't something stupid like the shortcut system silently eating it. You saying >It's like hydrus itself is blocking the input method when it has focus. makes me think this might be related to some weird floating window stuff I do with the autocomplete dropdown and the popup toaster, although that doesn't fully explain why text inputs in the options dialog wouldn't work either. I have a plan to rewrite all this in future, so if this is my fault, maybe I'll accidentally fix this. >I'm not actually a programmer Sorry, yeah, I forgot to say, running from source is a pain unless you are familiar with python. Only an option if we have no other ideas and we get good info that it might actually help. Please let me know how you continue to go on here.
Why do I keep getting new results under the search term "system:num file relationships: has more than 0 duplicates" most times my subscriptions download new files, even after I've already sorted out the duplicates using the dupe finder page? In other words, there are regularly occurring dupes that the dupe filter ignores, but they are searchable with a dedicated expression. Every time this happens I have to select all the found dupes, manually dissolve their dupe groups, and only then will the dupe filter catch them. Kinda annoying to do that all the time.
https://www.youtube.com/watch?v=N9LFp_brHvE

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.Linux.-.Executable.tar.gz

I had a good week mostly cleaning code and adding some things for advanced users.

highlights

If you are in advanced mode, the file sort and collect controls now have a 'tags' button. This lets you determine which tag service that particular sort/collect applies to. If you are this tag-clever, let me know how it works for you. This tag button is the same thing that autocomplete dropdowns use, and I expect it to soon get a 'multiple location' makeover like the file button did with multiple file services.

The archived file delete lock (under options->files and trash) gets a pass this week. If you try to delete files that are currently locked, it now makes a popup with a button to see those specific files, so you can decide what to do. The duplicate filter also handles the different situations (like 'archive both files' + 'delete the worse file') better.

The duplicate filter also now shows if one or both files have an ICC profile.

The shortcut actions to 'move page selection home/left/right/end' now try to stay on the same level if you hit them several times. If you use these actions, try them out through a mix of pages of pages and you'll see how it works now. It remembers the current level within three seconds of the last move event. This was requested by several users, and there isn't a nice way to do it, so I hacked an answer--let me know what you think.

full list

- sort and collect updates:
- for big brain users, the collect control now has a tag domain button. it only shows if you are in advanced mode (issue #572)
- the sort control also has a tag domain button hidden behind advanced mode. it applies to system:num tags and namespace sorting
- the collect control now appears on all import pages
- .
- archived file delete lock:
- the duplicate processing action code now no longer archives files that are due for deletion right before that deletion. this was hitting the archive delete lock
- if archive delete lock is on and the 'other' file in the duplicate filter is archived, the option to 'this is better, delete the other' is now disabled
- if you attempt to delete a delete-locked file during normal browsing, or if an automatic system like export folders wants to but fails on some, a popup is now made with a button to show the files that were filtered out so you can review the situation and fix it if you want
- I am considering adding a dialog to say 'hey, this is locked, want to send back to inbox?' to fix these situations in a nice way, but I think this is probably a bad idea in terms of workflow, design, and my sanity given all the edge cases and potential future expansions of lock rules. maybe I'll add a simple 'delete and override lock checks' option, but a lock is a lock tbh. for now, I will focus on this better UI feedback of currently delete-locked files and make it simpler for humans to remove any locks
- .
- misc:
- using black magic, I have made it so the shortcuts for 'move left/right one page' and 'move home/end' do not dip down to the lowest level of a neighbouring page of pages for the next command. it now stays on the current tab level for three seconds after the most recent move command (a toy sketch at the end of this post). this works in testing but may be jank in some IRL situations, so if this matters to you, let me know how it works out
- fixed a bug in 'do a full metadata resync' that meant unprocessed row orphans were not being deleted, which led to lingering 1950/2000-style processed gauges that didn't actually cause any work to be done on 'process now'
- the duplicate filter now shows if one or both files have an icc profile. for now the score for this is always 0, neutral
- I think I have reduced general lag on some busy clients
- .
- code cleaning and minor fixes:
- refactored file viewing stats management to a new database module
- refactored file physical storage management to a new database module
- cleaned up an ugly bridge that made inbox/archive work and moved it all to a clean new separate database module
- improved some client file physical storage repair code, both in how it repairs and how it recovers in the current boot
- updated the yes/no dialog texts when you apply 'not related' or 'alternates' to a selection
- added a bunch of tooltips to the 'speed and memory' options panel. also clarified the example image sizes in number of pixels
- improved how my grid layout propagates tooltips from the widget to the text when the widget is compound and in its own layout
- consolidated where the delete lock test occurs to just one location for db, gui
- added infrastructure to filter and report delete-locked files. callers no longer care about specific lock rules, opening this up to future expansion
- cleaned and simplified some duplicate action processing code
- cleaned up some file collect code, optimised it a bit too
- the sort control now only changes sort type on mouse wheel events if the mouse is over that button
- renamed 'tag search context' to 'tag context' across the program, mirroring a recent change with the location context, and gave it some bells and whistles. in future, the tag context will hold multiple tag services
- wrote a new button to edit tag contexts

next week

Next week is small jobs. I have a bunch of different things piled up that I want to get to, and I'll see if I can catch up with some longer term bug reports too.
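For the curious, the three-second level memory is conceptually something like this toy sketch (names and structure are made up for illustration, not the real code):

import time

_remembered_level = None
_last_move_time = 0.0

def level_for_next_move(current_level):
    # reuse the remembered tab level if the last move was recent,
    # otherwise start from wherever keyboard focus actually is
    global _remembered_level, _last_move_time
    now = time.monotonic()
    if _remembered_level is None or now - _last_move_time >= 3.0:
        _remembered_level = current_level
    _last_move_time = now
    return _remembered_level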
Just installed this for the first time and added the public tag repo. It's taking a really long time to process the tags. Is that normal? Any way to speed it up?
>>17848 >if it ever comes to a "these files are the same" I always, without any hesitation, go for the smaller file because there is no reason not to at that point. i can think of one reason: tag importing from other sites. for example, imagine you download a file directly from pixiv. then later you download a pixel-for-pixel duplicate of the file from danbooru, which is a slightly smaller filesize but hasn't been tagged very well. in the duplicate filter, you delete the slightly larger pixiv one. then later, the pixiv file is uploaded to gelbooru, and is tagged properly. you try to import the gelbooru file, but since you've previously deleted it, it gets ignored. none of the tags from gelbooru are added to the file, since it's deleted, and the pixel-for-pixel duplicate from danbooru is still barely tagged. so in some cases it's better to keep a larger file if it has a better source that is more likely to propagate to other websites. alternatively, maybe there could be an option for downloading tags even if the file has been previously deleted, and an option for "sharing" tags between files that have been detected to be pixel-for-pixel duplicates? probably unnecessary.
Is there a way to make an import folder automatically convert video files to another format before importing? Like mkv to mp4?
>>17834 Thanks for your response. I don't have any font settings in options->style. I tried changing my system fonts but it doesn't look like Hydrus uses any of those. And as you expected, kanji and stuff shows fine (Korean too). Just smileys don't. I tried to change the font in a .qss file but that didn't do anything, probably cuz I have no idea what I'm doing there.
>>17788 >If they are the same quality, then it doesn't ultimately matter much which is promoted to king, since they are the same. But isn't this vulnerable to the "intransitivity of indifference" issue? Shouldn't hydrus keep track of same-quality relationships as long as the files are tied for king so that a noticeably worse file doesn't end up accidentally becoming king due to bad filter pairings? Like, it might not find pairs that are actually better/worse pairs because the better file got treated by hydrus as a worse dupe in a "same-quality" decision earlier in the filter, and this can lead to the wrong file becoming king. I originally wrote a much longer reply trying to explain what I mean because I feel like what I said could be confusing, but that would've probably just been more confusing. I don't know what the guy you responded to wanted, but the adjustment I'm talking about is only concerned with preserving "same quality" relationships between files that are tied for king, since that's the only time it matters. So not exactly like the old duplicate system that you got rid of. I get why that one's gone.
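To make the worry concrete, a toy example with made-up quality scores and an 'indifference threshold': every adjacent pair looks 'same quality', but the two ends of the chain are clearly distinguishable, so the final king depends on pairing order:

scores = {'A': 100, 'B': 99, 'C': 98}  # hypothetical quality scores
THRESHOLD = 1.5  # differences smaller than this look 'the same' to a human

def judge(x, y):
    diff = scores[x] - scores[y]
    if abs(diff) < THRESHOLD:
        return 'same quality'
    return f'{x} is better' if diff > 0 else f'{y} is better'

print(judge('A', 'B'))  # same quality
print(judge('B', 'C'))  # same quality
print(judge('A', 'C'))  # A is better -- but if the filter happened to show
                        # A/B then B/C and arbitrarily crowned C as king,
                        # this revealing pair may never be asked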
When I watch a thread to import it into file service Y, it doesn't import files that were deleted from file service X.
Is there an option to just throw all files down a page of pages into a new page?
Is there a way to "forget" all deleted files? So that they don't interfere with "exclude previously deleted files" anymore
>>17729 >If you really really need this record removed Nah, while I would rather have them removed, it's not a huge issue. Besides, there's a ton of files whose hashes I'd like removed, and I'm not even sure what they are anymore because I no longer have the original files. >Unfortunately I just don't have a scanning routine in place yet to categorise every possible reference to every hash_id in your database to automatically determine when it is ok to remove a hash, and then to extend that to enable a complete 'ok now delete every possible connection so we can wipe the hash' command. I know it's most likely a long way away, but do you have a rough timeline for when you would like to get started on this?
>>17875 I am sorry, I am not sure I totally understand your problem. Are you finding that the '>0 dupes' system predicate is actually getting new results after your subscriptions finish running, or is it only when you do some duplicate filtering that you get more results? If it is the latter, then this is what I would expect. If you say 'these files are dupes, this is better quality' and so on, then those are duplicate relationships. If you mean the 'potentials', those that the duplicate finder page searches through in order to populate the filter, that is searched with the same 'system:file relationships' predicate, but uses 'potential duplicates'. A downloader can definitely add new potentials, but then your work in the filter will convert those potentials into proper duplicates. Maybe you can walk me through a typical session with a concrete example and help me figure out what you are trying to do when you dissolve the groups and so on. Normally I would say that a group dissolve is a rare job, something you would do in an awkward maintenance situation where you were trying to undo a problem. >>17877 Yep, it has about ten years' worth of uploads on it, and it has to do some heavy CPU work on all that stuff. It is best left to work in the background, so if you can possibly help it, just leave it alone and it will do little bits of work here and there if you leave the client on, or bursts when you shut it down otherwise, and you'll soon see some tags appear. I'd estimate it takes a couple of months of background work to fully catch up, and thereafter five or ten minutes a day. If you are worried it is processing too slowly, a typical decent speed on an SSD is 3,000-20,000 rows/s, and on an HDD you get 100-1,000. You should not try to sync to the PTR on an HDD, it is too big these days. Let me know if you are getting too slow on an SSD. You can double-check these numbers over the long term by searching your log file at install_dir/db/client - date.log. >>17879 Not in hydrus, so you'll want to set up your own external script that converts files in a staging area and then moves the finished product into your hydrus import folder (see the sketch below this post). Bear in mind that you don't want to convert too many videos, and certainly not images, if you sync with the PTR, as conversion changes the files' hashes, which means they won't get tags on the PTR.
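As a rough illustration of that external script idea, something like this Python sketch could work--the staging and import folder paths are placeholders to adjust, and it tries a lossless remux before falling back to a re-encode:

import pathlib
import shutil
import subprocess

# hypothetical folders -- point these wherever you like
STAGING = pathlib.Path('~/convert_staging').expanduser()      # drop .mkv files here
IMPORT = pathlib.Path('~/hydrus_import_folder').expanduser()  # hydrus watches this

for mkv in STAGING.glob('*.mkv'):
    mp4 = mkv.with_suffix('.mp4')
    # try a lossless remux first (the hash changes, but the picture doesn't)
    remux = subprocess.run(['ffmpeg', '-y', '-i', str(mkv), '-c', 'copy', str(mp4)],
                           capture_output=True)
    if remux.returncode != 0:
        # codecs not mp4-friendly: fall back to a proper re-encode
        subprocess.run(['ffmpeg', '-y', '-i', str(mkv),
                        '-c:v', 'libx264', '-c:a', 'aac', str(mp4)], check=True)
    shutil.move(str(mp4), str(IMPORT / mp4.name))
    mkv.unlink()  # clear the original out of staging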
>>17880 Shame. At the moment, yeah, there are no font options in the actual style page, and everything has to go through ugly QSS. I hope to have some better options here in future. I'm hoping to roll out a Qt6 version of hydrus in the coming weeks. Please give it a go when convenient and let me know if it helps anything here. >>17881 I hadn't heard of 'intransitivity of indifference' before; that sounds like an interesting problem that might come up here from time to time. My general priority for duplicates is currently improving processing time. I don't mean to repeat myself, but I'll say my original system was very complicated, remembered every decision perfectly, and presented every file within a group against all possible combinations to try to get the genuine best version. Unfortunately, it proved way too complicated to maintain and just caused frustration from all the very small choices I was throwing at users. All this was avalanched by the similar files detector, which works pretty well and gives most users tens or hundreds of thousands of potential duplicates, even at 'exact match' distance. The problem is that we have too many pairs to go through and not enough time to get through them. So, while a set of very similar dupes may nonetheless have a clear best/worst when you compare the two ends directly, I am not sure I can justify making the system complicated enough to handle that. I presume I'd have to keep track of all the current pair- or triplet-kings and then, any time that group was compared to another file, compare that file against all the brother-kings and offer UI for the user to select which was the true king, now that they were all arrayed. It just seems like too much when what I really should be focusing on is an automatic system to detect shitty jpeg quality and png versions of jpegs and throw that in front of the user, since that's most of our problem space. But I'll keep this in mind. Let me know if you have any more thoughts on this topic. >>17882 Thanks. Yeah, I guess this is true. The 'exclude previously deleted files' test applies to any file in the trash or removed from trash, so it'll do this. How can I make this work better for you without turning off the feature entirely? Maybe in 'file import options', where it says 'exclude previously deleted files', I can add another checkbox like 'only if they were deleted from the destination file service'? Something like that?
>>17887 >I'm hoping to roll out a Qt6 version of hydrus in the coming weeks. Not the one you replied to, but will there be both Qt 5 and 6 versions out for a while, or is it going straight to Qt 6 only? If there are two versions, how will this affect the AUR?
>>17883 Not sure. There are a few admin/meta commands if you right-click on a media page or page of pages tab, like 'send pages to the right down to a new page of pages', but do you want something specifically for files, like 'vacuum up all the files inside here and put them in one new page'? What would be convenient shapes for this action, for you? Maybe if I made it work on a row, so it sucks up all the files from all the pages in a row and adds them to one page at the end of the row? Or would you rather it worked like a tree, on all the levels down inside a page of pages, and put them on a new page beside that page of pages tab? Or something else? >>17884 Yes, under review services->all local files (you might need help->advanced mode on to see this): 'clear deleted files record'. Make sure you make a backup before you try this! It may not work how you imagined, so make a backup, and if it all shits up your workflow on next boot, or if it simply breaks, roll back to the backup. >>17885 I'm chipping away at the database module refactoring. I did a bit more in last week's cleanup. Basically, my new modules have a function where they say 'hey, I am responsible for three tables, and those tables have file definitions here and here', and then I will build a maintenance routine that goes through items in the master file table and checks every table in the database to see whether each item is now an orphan, plus some maintenance UI so humans can see where they still have definitions lingering for particular files (like the deleted files table)--a rough sketch of the idea is below this post. The modules also do the modern repair work when a database boots with missing tables. So, ClientDB.py is down to 610KB, after starting at 980KB or something last year(?). I figure I probably have twelve or fifteen modules still to spin out of it, and then, when I have 100% coverage, I can start working with it and add new structure again. The whole directory is now 1.29MB just of database code...
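That module idea, very roughly sketched (hypothetical names, much simplified from the real thing):

import sqlite3

class DBModule:
    # hypothetical: each module declares the (table, column) pairs that hold hash ids
    def tables_that_use_hash_ids(self):
        raise NotImplementedError

class DeletedFilesModule(DBModule):
    def tables_that_use_hash_ids(self):
        return [('deleted_files', 'hash_id')]

def hash_id_is_orphan(cursor, hash_id, modules):
    # a master-table hash id is removable only if no module still references it
    for module in modules:
        for table, column in module.tables_that_use_hash_ids():
            found = cursor.execute(
                f'SELECT 1 FROM {table} WHERE {column} = ? LIMIT 1;',
                (hash_id,)).fetchone()
            if found is not None:
                return False
    return True

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE deleted_files ( hash_id INTEGER );')
db.execute('INSERT INTO deleted_files VALUES ( 5 );')
print(hash_id_is_orphan(db.cursor(), 5, [DeletedFilesModule()]))  # False
print(hash_id_is_orphan(db.cursor(), 6, [DeletedFilesModule()]))  # True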
>>17888 Yes, not sure how long, but I want to ease into Qt6, so we'll figure out simultaneous builds for a while and ask advanced users to try out 6 so we can hammer out problems. Maybe two months(?), I am not sure. Qt6 will break Windows 7 support, maybe some macOS stuff too, and doubtless it will be a festival of violence on the weirder Linux window managers. If the test all fucks up everywhere, I'm totally willing to put it off another year. But it does have a ton of bug fixes, so I do want to go up if possible. For some reason, the Qt guys locked all the backported Qt5 patches behind an enterprise paywall, so everyone on 5 is like two years behind. Also, fingers crossed, there won't need to be many/any actual code changes, so if the AUR guys want to stay on 5, it should be possible just by staying on their current build script, and if I need to change three lines to allow this, that's no problem at all. Anyone who wants to stay on 5 can run from source for as long as 5 holds out.
hello hydrus anons! how do i go about getting started with stuff like tagging when I have 50,000 images imported?
>>17889 >but do you want something specifically for files, like 'vacuum up all the files inside here and put them in one new page'? I meant something like "open image in a new page" but recursive for pages of pages.
I had to do a database recovery from a failing disk. I followed the instructions from the various text files to recover/regenerate my databases, and I managed to recover most of the data. I'm still seeing some problems I'm unsure how to debug, though. I'm getting a DBException on start that I can ignore for the most part, but the same exception shows up sometimes when I do a search, which stops the client from completing it. If I restrict the search it'll sometimes go away, but if I make it more generic it'll also sometimes go away (e.g. A and B will cause the exception, but neither A alone nor A, B, and C will cause it). I'm not sure if it's related, but the numbers in the PTR service are off, too. It'll say that the "client is caught up to service and can upload content," but it shows the definitions, mappings, tag parents, and tag siblings as being only ~98% complete (4820/4888 on definitions at the time of writing), even though when I click "process now" nothing happens. I'm still getting stuff from the PTR, as when the day rolls over it'll download the new update and process it as normal.

v492, linux, frozen
DBException
DataMissing: Did not find all entries for those hash ids!
Traceback (most recent call last):
  File "hydrus/core/HydrusThreading.py", line 401, in run
    callable( *args, **kwargs )
  File "hydrus/client/gui/pages/ClientGUIManagement.py", line 5344, in THREADDoQuery
    more_media_results = controller.Read( 'media_results_from_ids', sub_query_hash_ids )
  File "hydrus/core/HydrusController.py", line 684, in Read
    return self._Read( action, *args, **kwargs )
  File "hydrus/core/HydrusController.py", line 200, in _Read
    result = self.db.Read( action, *args, **kwargs )
  File "hydrus/core/HydrusDB.py", line 927, in Read
    return job.GetResult()
  File "hydrus/core/HydrusData.py", line 2057, in GetResult
    raise e
hydrus.core.HydrusExceptions.DBException: DataMissing: Did not find all entries for those hash ids!

Database Traceback (most recent call last):
  File "hydrus/core/HydrusDB.py", line 610, in _ProcessJob
    result = self._Read( action, *args, **kwargs )
  File "hydrus/client/db/ClientDB.py", line 7873, in _Read
    elif action == 'media_results_from_ids': result = self._GetMediaResults( *args, **kwargs )
  File "hydrus/client/db/ClientDB.py", line 4994, in _GetMediaResults
    missing_hash_ids_to_hashes = self.modules_hashes_local_cache.GetHashIdsToHashes( hash_ids = missing_hash_ids )
  File "hydrus/client/db/ClientDBDefinitionsCache.py", line 178, in GetHashIdsToHashes
    self._PopulateHashIdsToHashesCache( hash_ids )
  File "hydrus/client/db/ClientDBDefinitionsCache.py", line 80, in _PopulateHashIdsToHashesCache
    hash_ids_to_hashes = self.modules_hashes.GetHashIdsToHashes( hash_ids = uncached_hash_ids )
  File "hydrus/client/db/ClientDBMaster.py", line 274, in GetHashIdsToHashes
    self._PopulateHashIdsToHashesCache( hash_ids, exception_on_error = True )
  File "hydrus/client/db/ClientDBMaster.py", line 94, in _PopulateHashIdsToHashesCache
    raise HydrusExceptions.DataMissing( 'Did not find all entries for those hash ids!' )
hydrus.core.HydrusExceptions.DataMissing: Did not find all entries for those hash ids!
I have a parser that generates multiple urls with different priorities. The second url always exists but the first one is better for the files that are in the server. Is there an option for downloaders to check for alternate urls if the first one fails for some reason (e.g. 404)?
>>17875 >>17886 I'm just asking why there is a situation where the dupe filter sees no more dupes, but they can still be found by that system search expression. What can I do to avoid that? My setup is rather complicated, as I'm using Hydrus for managing albums and posts on a facebook-like website; I'm not sure I'll be able to explain all the specifics here. But basically, those special dupes happen when the site changes the URL schemes for images, so they were downloaded via an old-style URL and then Hydrus encounters them again under a different URL, and the MD5s also don't match, because of new compression methods that the site owners constantly introduce. Otherwise the images look exactly the same. I guess the different urls confuse Hydrus, but I still can't understand why the dedicated dupe filter won't see the obviously duplicated images without me dissolving their dupe groups after I find them manually using the >0 search expression from a regular "files" tab.
Getting this error when manually setting multiple images as alternates.

v492, win32, frozen
UnboundLocalError
local variable 'message' referenced before assignment
  File "hydrus\client\gui\ClientGUIShortcuts.py", line 1257, in eventFilter
    shortcut_processed = self._ProcessShortcut( shortcut )
  File "hydrus\client\gui\ClientGUIShortcuts.py", line 1197, in _ProcessShortcut
    command_processed = self._parent.ProcessApplicationCommand( command )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 2368, in ProcessApplicationCommand
    self._SetDuplicates( HC.DUPLICATE_ALTERNATE )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 1853, in _SetDuplicates
    result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = yes_label, no_label = no_label )
Is there any way to change the default position of the manage tags window? It's pretty annoying to have to move it every time I tag something, because it blocks the preview window.
could you add a new sort option for the tag list that sorts by relative prominence, as an alternative to the absolute "count" sort that we have now? by this I mean the sort would rank tags based on how commonly they appear on the current page vs how commonly they appear overall in your database. this would help make it so universally common tags like "female", for example, wouldn't always be at the top of the list; instead you'd get a list of the tags that are more uniquely common to the page you're looking at compared to in general. so a tag that appears in 1% of all your files, but 2% of the files on this page, would be sorted highly, because it appears on twice the percentage of files here than it does overall (toy sketch below). I hope something like that wouldn't be complicated to implement, because it would be really useful to me, even though I know it sounds like a minor thing.
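in case it helps, the sort I mean is just a ratio, something like this toy sketch with made-up counts:

# hypothetical counts: tag -> number of tagged files, on this page and overall
page_counts = {'female': 40, 'blue sky': 12}
global_counts = {'female': 500000, 'blue sky': 3000}
PAGE_TOTAL, GLOBAL_TOTAL = 50, 1000000

def prominence(tag):
    page_freq = page_counts[tag] / PAGE_TOTAL
    global_freq = global_counts[tag] / GLOBAL_TOTAL
    return page_freq / global_freq  # >1 means over-represented on this page

for tag in sorted(page_counts, key=prominence, reverse=True):
    print(tag, round(prominence(tag), 1))
# 'blue sky' (80.0) now sorts above 'female' (1.6), despite a lower raw count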
>>17889 >I figure I probably have twelve or fifteen modules still to spin out of it, and then, when I have 100% coverage, I can start working with it and add new structure again. Nice! So this feature isn't too far off?
How do I make Hydrus download every link in a text file? ex:

$ cat test.txt
https://danbooru.donmai.us/posts/5537292
https://danbooru.donmai.us/posts/5537294

test.txt contains almost 100k links, so pasting them into the download box is not an option.
does deleting the file log for subscriptions make hydrus try to start all over again, or is that log separate from hydrus's "memory" of what sub files it already downloaded? Is it safe to delete the log every once in a while to clear it out?
Hi devanon, with the latest update (this problem might have existed in earlier versions, but I had not noticed it before), I get an error when attempting to set images as alternates in the preview page, either through the right click menu or a keyboard shortcut. The error message is as follows:

v492, win32, frozen
UnboundLocalError
local variable 'message' referenced before assignment
  File "hydrus\client\gui\ClientGUIMenus.py", line 213, in event_callable
    callable( *args, **kwargs )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 2368, in ProcessApplicationCommand
    self._SetDuplicates( HC.DUPLICATE_ALTERNATE )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 1853, in _SetDuplicates
    result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = yes_label, no_label = no_label )
>>17900 Have you considered using curl and xargs with the `/add_urls/add_url` POST API endpoint? I'd do something like:

cat test.txt | xargs -I{} curl -XPOST -H 'Hydrus-Client-API-Access-Key: YOURAPIKEYGOESHERE' --json '{"url": "{}"}' http://127.0.0.1:45869/add_urls/add_url

The above is untested, so please test on a small subset of your file before trying this on 100k links. Also, you need curl 7.82.0 or later to use the --json option; you could also do it with --data and a few other options if needed. I would also add the optional `destination_page_name` to the JSON object to get a new page, but it'd work without.
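If your curl is too old for --json, the same thing works in Python with requests--the endpoint and header are the same as above, the page name is just an example, and again please test on a small subset first:

import requests

API_KEY = 'YOURAPIKEYGOESHERE'
ENDPOINT = 'http://127.0.0.1:45869/add_urls/add_url'

session = requests.Session()
session.headers['Hydrus-Client-API-Access-Key'] = API_KEY

with open('test.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    # one POST per url; 'destination_page_name' is optional
    r = session.post(ENDPOINT, json={'url': url, 'destination_page_name': 'txt import'})
    r.raise_for_status()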
trying to use the "set selected files as alternates" action (both through the shortcut and through the context menu) gives this error. this must be a new bug, because it didn't happen on the last version.

v492, linux, frozen
UnboundLocalError
local variable 'message' referenced before assignment
  File "hydrus/client/gui/ClientGUIShortcuts.py", line 1257, in eventFilter
    shortcut_processed = self._ProcessShortcut( shortcut )
  File "hydrus/client/gui/ClientGUIShortcuts.py", line 1197, in _ProcessShortcut
    command_processed = self._parent.ProcessApplicationCommand( command )
  File "hydrus/client/gui/pages/ClientGUIResults.py", line 2368, in ProcessApplicationCommand
    self._SetDuplicates( HC.DUPLICATE_ALTERNATE )
  File "hydrus/client/gui/pages/ClientGUIResults.py", line 1853, in _SetDuplicates
    result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = yes_label, no_label = no_label )
>>17903 Thank you very much! I had no idea Hydrus had an API.
I had a good week. I fixed some bugs, cleared out some jank, and wrote a prototype EXIF data viewer. The release should be as normal tomorrow. >>17896 >>17902 >>17904 Thanks, sorry for the trouble, this was a stupid typo and is fixed tomorrow!
>>17898 I second this, and have an additional idea. Sometimes when there are too few results to collect enough statistical data for sorting, it becomes impossible to order tags by their incidence properly (for example you might have 10 tags each having 3 images with them, no way to order those among themselves by incidence). But if you use their popularity in the whole collection in addition to current selection, then such a group can still be sorted in a meaningful way.
https://www.youtube.com/watch?v=5VygwTBgph4

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.Linux.-.Executable.tar.gz

I had a great week working on some fixes and prototype EXIF support.

EXIF

EXIF are metadata labels embedded in some media files, usually JPEG. They might say the make of camera that took a picture, or its aperture/ISO, the GPS coordinates it was taken at, how an image is rotated, or the DPI of a logo for printing purposes. There are many different possible fields. I have wanted to add support to view and even search this data for a long time, and this week we start with something simple but not user-friendly.

The media viewer now has a 'cog' icon on the top hover. On JPEGs, it lets you check for EXIF data. Most files don't have it, but if one does (usually photos or professional art exported from Photoshop etc...), it now throws up a little window showing every field.

The duplicate filter now actively scans for EXIF data and says if one or both files have it, just like the recent addition for ICC Profiles. Many websites strip EXIF data on upload, so if you have two exact dupes, the one with EXIF data is probably closer to the 'original' version.

Now I have this framework, I would like to extend it. Beyond general polish like replacing the cog icon with something nicer and only enabling it when I know there is some EXIF to show, I want to cache 'has exif data yes/no' in the database and allow you to search by that. I expect I'll also add the actual EXIF data itself to the database one day, so you'll be able to search all your pictures for iPhone 6 photos or whatever.

So, if you are interested in EXIF, please give this a go and let me know what you think (there's a little PIL sketch at the end of this post too). This feature was taking so long to happen that I decided to just spam out a rough v1.0 that I can keep improving.

full list

- EXIF:
- in the first step of 'official' EXIF support, the media viewer now has a 'cog' button on the top hover, enabled when looking at a jpeg, that will check the file for EXIF data. if found, it will throw it up on a simple new window that shows EXIF id, label, and value. this is a hacked-together prototype, not super user-friendly, but it works. let me know what you think, and please send me any files that have weird EXIF that doesn't parse right but you think should. I already discovered a file with a null character that wouldn't display in UI, that sort of thing
- GPS EXIF values are also parsed and extracted
- made it so you can double-click a row in this new window to copy an EXIF value to clipboard
- in the duplicate filter, if one or both files have exif data, this is now noted in the comparison statements, just like ICC profile! (issue #469)
- obvious future extensions here will be storing 'has exif' in the database, allowing its presence to be searchable, and enabling the cog button (or a nicer 'exif' button) only when there is known data to see. a subsequent step would be actually caching the data in the database for full EXIF search
- as a side thing, we're now set up on the hydrus end to pull TIFF EXIF, but PIL doesn't seem to offer it, so we'll have to wait for a different solution there
- .
- fixes and misc:
- fixed a problem that made saved page file sorts reset their sort order one time on update to v492. thank you to a user for noticing this and discovering the fix, and I'm very sorry for the inconvenience of changing your session and favourite search sorts. unfortunately there is no easy fix other than rolling back to a backup and jumping forward to this version
- fixed a v492 message display error when setting various duplicate relationships to three or more thumbnails at once. it was a stupid typo, sorry for the trouble! (issue #1199)
- if a page tab name elides to a 'shorter...' length, it now has its full name as the tooltip
- fixed a typo in update code error handling (issue #1192)
- the duplicate filter page now remembers if you are 'searching immediately'/'search paused' (issue #1193)
- if you are on non-Windows and export files manually or with an export folder to an NTFS or exFAT partition, this is now detected, and NTFS-invalid characters in the pattern-generated folders or filename are now replaced with underscores (issue #1194)
- 'fixed' a system predicate bug in the 'OR*' advanced predicate parser--entering a logical expression that results in a negated system tag now causes an error. previously, it would strip the 'system:' and just enter the given text as an unnamespaced tag. furthermore, that dialog now reports specific error reasons when it fails to parse. I hope to improve support for negated system tags in future--some stuff, like archive/inbox, should be easy
- I think I fixed an instance where the archive/delete filter's confirmation dialog could present 'delete from hard disk' as an option when it wasn't appropriate
- in an attempt to reduce the media-change flickering we've recently seen in the media viewer, I untangled a bunch of the canvas size/position code this week. I'm preparing a complete overhaul and neat Qt layout integration, which this starts. I _think_ I've made some things less flickery on occasion, but we'll see IRL. much more to do
- added a '--profile_mode' launch argument, which allows you to capture the performance of boot and also try out profile mode on the server (although support there is very limited atm)

next week

Next week is a medium size job week. I want to put some time into note parsing. I am not sure how far I can get, but fingers crossed I can actually get 'note import options' and a note content parser working, and we'll be able to update the existing downloaders to grab artist notes and things.
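If you want to poke at a file's EXIF outside the client, here is a minimal sketch with PIL, which hydrus uses for images ('photo.jpg' is a placeholder path):

from PIL import Image
from PIL.ExifTags import TAGS

with Image.open('photo.jpg') as im:
    exif = im.getexif()

if not exif:
    print('no EXIF data')
for tag_id, value in exif.items():
    # map numeric EXIF ids to human-readable labels where known
    label = TAGS.get(tag_id, f'unknown ({tag_id})')
    print(f'{label}: {value}')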
I know this is probably something brought up a lot, but I think hydrus would benefit greatly from having actual collection items. The argument against collections is that hydrus works on individual items, and that any valid collection can just be made by filtering for the things the files have in common, like a post id or a manually set collection tag. I don't think this is really equivalent to real collections, though. There are times when you want to consider a collection as an item in and of itself, with its own tags and a single link to all its contents. Not only that, but doing things the current way is awkward, because it forces you to adapt your sort rules whenever you want to care about collections. But more importantly, collections can be implemented without breaking those design ideas. In this system, collections are individual items. They are added as independent objects and can then have files assigned to them. A collection claiming files as its children does not consume or group the files, just references them as being children of the collection. The collection item itself would be treated as a file-like item with its own tags and info, only instead of being a real file to view, it acts as a link to all of its contents. For thumbnail purposes it could maybe let you set a certain child to act as the thumbnail. Actually "using" the collection could work a number of ways, but one way would be for it to open a page containing all of its children, with a saved search/sort rule applied to the page. Or it could open a viewer for all of its files with a custom sort. The collection being an item, rather than just something like a 'set:xxxx' tag, makes it so you can filter for and see collections whenever they match a search they should match. Any information relevant to the collection as a whole can apply to the collection item alone; you no longer need to pollute members of a set with tags about the whole thing that may not apply to individual items. As for the children, they would still be treated as just normal files, and wouldn't be modified, besides maybe being able to see what collections are claiming a file. There is no loss of information in adding them to a collection; they will still match any searches they should as individual items. I don't know about the technical implementation, but this system seems to me like it would add a proper collection feature while not disturbing hydrus's design, and it could be made easy to use with a right click > 'collect files' action. You could also have settings to hide collections or to hide collection children, for situations where you wanted to only see one and not the other.
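To sketch what I mean as a data model (entirely hypothetical, obviously not hydrus code), the collection would be its own item with its own tags, holding ordered references to children rather than owning them:

from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Collection:
    collection_id: int
    tags: Set[str] = field(default_factory=set)               # tags on the collection itself
    child_hash_ids: List[int] = field(default_factory=list)  # ordered references, not ownership
    thumbnail_hash_id: Optional[int] = None                  # a chosen child acts as the thumbnail

def collections_claiming(hash_id, collections):
    # files stay plain files; this reverse lookup shows what claims them
    return [c.collection_id for c in collections if hash_id in c.child_hash_ids]

album = Collection(1, tags={'creator:someone', 'title:beach set'},
                   child_hash_ids=[101, 102, 103], thumbnail_hash_id=101)
print(collections_claiming(102, [album]))  # [1]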
When trying to download files from 8chan.moe, I get this error message sometimes:

("read error: Error([('SSL routines', '', 'unexpected eof while reading')])",)… (Copy note to see full error)

Traceback (most recent call last):
  File "urllib3\contrib\pyopenssl.py", line 313, in recv_into
  File "OpenSSL\SSL.py", line 1897, in recv_into
  File "OpenSSL\SSL.py", line 1700, in _raise_ssl_error
  File "OpenSSL\_util.py", line 55, in exception_from_error_queue
OpenSSL.SSL.Error: [('SSL routines', '', 'unexpected eof while reading')]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 1406, in WorkOnURL
    self.DownloadAndImportRawFile( file_url, file_import_options, network_job_factory, network_job_presentation_context_factory, status_hook, file_seed_cache = file_seed_cache )
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 433, in DownloadAndImportRawFile
    network_job.WaitUntilDone()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1885, in WaitUntilDone
    raise self._error_exception
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1505, in Start
    more_to_download = self._ReadResponse( response, stream_dest )
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 512, in _ReadResponse
    for chunk in response.iter_content( chunk_size = 65536 ):
  File "requests\models.py", line 751, in generate
  File "urllib3\response.py", line 575, in stream
  File "urllib3\response.py", line 518, in read
  File "http\client.py", line 455, in read
  File "http\client.py", line 499, in readinto
  File "socket.py", line 669, in readinto
  File "urllib3\contrib\pyopenssl.py", line 332, in recv_into
ssl.SSLError: ("read error: Error([('SSL routines', '', 'unexpected eof while reading')])",)
Is there a hydrus client that allows tag editing? So far I've only seen readonly booru implementations, but I need a way to let other people help me with tagging my large collection.
>>17891 If you are willing to invest the resources (about 50GB of SSD space, some CPU), your best bet is to sync with the PTR. Once you become a power user with getting on for 100k files, your best shot at getting tags is the 1.6 billion tags from that. More info here: https://hydrusnetwork.github.io/hydrus/PTR.html It is an investment though. If you don't want to get the PTR, you should get started with the downloaders. Redownloading files you already have isn't as wasteful as you think, as the client can often skip the actual file download, or at least only has to do it one time, and then it can grab all the tags and link you up with known URLs and stuff too. Have a play around with one of your favourite creators on a booru you like, and see if it gets lots of 'already in db' results and adds some tags. Note that if your 50k files were converted at any time, like with a batch optimising/resizing program, then hydrus will have a lot more trouble getting tags for them. It relies on files' content not changing even a single byte to line up tags from shared sources. In this case, I still think it is worth getting started with downloaders and grabbing the 'canonical' versions of files, since you'll be able to de-duplicate your dupes in future. If your files are family photos or other personal stuff that isn't tagged online, you're mostly out of luck and will have to tag them yourself. Figure out a workflow and get grinding, good luck! >>17892 Thanks. I think this is doable. >>17893 Thanks. I think you are mostly good here, even though it sucks for now. Please turn on help->advanced mode, then open review services and go to the PTR tab. Hit reset processing->fill in definition gaps, and tell it to reprocess. That should fix your missing hash id problems. You might like to do the 'content gaps' too, once definitions are done. Both jobs should be fairly fast, but they'll slow down when they hit a gap to fill. That 4820/4888 problem I think I fixed recently. Hit reset downloading->do a full metadata resync on the same panel, then 'refresh account' to push it along. Should fix you up. Actually, do that before the 'reset processing' stuff.
>>17894 Unfortunately there is not. We have run into this before, I think with DA, and I want to have this tech in when I next do a big iteration on the downloader system. There is no good solution for now. >>17895 I'm afraid I think you have conflated two different 'dupe' terms in hydrus:

'potential duplicate' = hydrus has scanned your files and found two that look similar
'duplicate' = you have said that two potential dupes are indeed dupes; this permanent relationship is saved to the database

The duplicate filter searches your 'potential duplicates'; 'system:file relationships >0 dupes' searches your actual 'duplicates'. The duplicate filter converts potential duplicates into normal duplicates. If you process your potential dupes in the filter until you have 0 left, you are going to be creating some dupes. This is normal. No worries if you can't explain the specifics on your end. Let me know if I can be any more clear about anything. >>17897 Yeah, hit up options->gui. The 'frame locations' table is hellish ugly, but if you edit the values for 'manage_tags_dialog' (thumbs) and 'manage_tags_frame' (media viewer), you can make it inherit the parent's position or have a fixed position or whatever. It should be able to save its last position too.
>>17898 >>17907 Thanks. This is a really smart idea, and I like it. I am not ready to do it, as the lag and complexity of fetching this information live during sorting would make it infeasible, but in future I am expecting to update my basic tag object from the current 'string of text' to a full object that will have all sorts of metadata hung on it. At that point, I could inform all tags of their basic incidence at the point of loading and then sort live with that cached data. I'll keep this in mind, as there are probably some more optimal ways of figuring it out, and I may be able to hack in a way to do it earlier. Might be I could just store the top 500 tags in memory and delist them a bit to stop them crowding things all the time. I was working with a user a little while ago on improving the stats behind 'related' tags in the manage tags window, so I'm going to be thinking about this again soonish. >>17899 Let's say 1-4 modules every month or so, as I do one cleanup week every four, and most cleanup weeks are spent at least partly on db cleanup, so I think the database cleanup will be done early next year. Then I'll be in a position to start for real on definition recycling tech. I hate estimates though. It sucks, but most of the time they are about 30-50% of the actual time needed. This cleanup thing is a slow background grind, too, something I do in 'off' time. >>17900 >>17905 You can also copy the URLs to clipboard and import by clicking the arrow next to 'file log' on your downloader page and then going 'import new sources->from clipboard'. Or just open a 'urls' downloader page and click the paste button. The API is the way if you have a lot, though. Your 100k in one page should be ok, but it might lag some things out a bit, so you might want to click the file log arrow again and 'delete x successful from the queue' when you are done with them to keep things lean.
>>17901 Yeah, clearing the file log would cause a sub to resync I think. I'd recommend you stay away from editing it. Subs automatically clip their file logs to about 200 recent URLs regularly, and they do so cleverly so it won't mess with their check times and stuff, so you don't have to do anything to keep them clear. >>17910 Thank you. That's basically a timed-out connection I think. I'll see if I can improve the handling of this error. It can presumably be retried a couple times and should be reported better too. >>17911 Not yet. Advanced users could try to set up their own Tag Repository, but it would be a lot of work.
>>17909 I agree completely, especially the idea of the 'virtual' collection that has files in it but is its own taggable thing. There are several things holding this up, mostly shared by CBZ/CBR support. Stuff like: 'what is a multi-page file? how is it stored and searched in the database? how do we recognise it?' 'what does fixed multi-file file order look like? do we cache/track that in any way?' 'how do we show and browse a multi-page file in the media viewer? do we support bookmarks, or at least page position transitions between preview and media viewer?' 'how do we deal with the individual files inside a multi-page file? can the user tag them individually? can we show the collection as a navigable link when seeing the individual file?' Once I have those and similar tech questions answered for CBZs, I think it won't be nearly so difficult to imagine tacking a virtual CBZ system on top. I regret how bad hydrus has been at this all these years, and I really want to have two-page mangas and stuff tied together, or progressions where you have a character changing clothes or something over a series. 'File alternates', a planned extension of the file duplicate system, will also share some of this tech, particularly fixed file ordering for things like WIPs. So it'll wait for when I am working on CBZ, I think. It will be a ton of work, unfortunately, since it goes into core systems all over the place. It'll be some time, probably 2024 at the 'everything went right' earliest. 'Comic support' is in this priority list: https://github.com/hydrusnetwork/hydrus/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc
Hey guys, any mass downloader for hentai manga out there comparable to Hydrus? I have yet to find any that actually work. Thanks.
1. How do I create custom namespaces and give them their own distinct color, like 'creator:' or 'series:'? I don't want all custom namespaces to have the same color. 2. I also want to rename the 'creator:' namespace to 'artist:', is there any way to do that? 3. How do I rename a tag?
Made a pictoa parser. No gug, because the urls are a clusterfuck; it just parses the page with the simple url downloader.
Is there a way to open a collection sequentially with an external image viewer?
>>17917 Try Hakuneko
>>17913 Regarding the dupes: when I finish resolving them using the duplicate finder, I always use "this file is better and remove the other", so there shouldn't be any dupes left after I finish working there. Moreover, the dupes found by the search expression never even appeared in the duplicate finder; I'd remember seeing the same images again. They happen independently of the dupe finder. The only way to get them in there is to dissolve their groups.
>>17921 Thanks! I'll try that one.
>>>/v/658001 I was just on the edge of beginning to use hydrus, but then I was going through the beginning user guide and came across a couple of issues. >Anything that needs a filename for purposes outside of filesearching >Anything that needs to be ordered Why must hydrus replace a filename with the file's hash? Is it that technically cumbersome for it to just check the hashes directly when it needs them? If it didn't, that would solve pretty much both of my main issues with the program. Filenames could still be used effectively for the purposes other than searching that many anons enjoy, and importing a folder of sequential images and then displaying them sequentially would be as simple as tagging the whole folder with a tag specific to the set, and sorting alphanumerically by filename when searching the set tag. That would be instead of what I presume is the current method: tagging the whole folder containing the set with a set-specific tag, and then going through each image, displayed out of order since hydrus doesn't have any way to order them, and manually numbering them with the pages function? >>17617 >Hydrus sucks at organizing files that are meant to be a sequential series. I was afraid of this. It seems like there's some "pages" function everyone is talking about, yet this is still an issue.
>>17924 I've been talking with an anon in the other thread, and collections likely solve the issue of sequential images well enough for me. As for filenames, I can import files while automatically tagging them with their filenames under the namespace "filename", and I can export them to a folder and restore their filenames if I choose [filename] for the export under the Filenames option. However, neither he nor I can find an option to make all manner of exports default to using this function. What I'd ideally like is to be able to set exports to default to essentially restoring the original filenames, so when I export via drag and drop to upload files to any site that displays filenames, such as this one and most other imageboards, the filename is intact for other anons to read, whether it contains a joke, pixiv ID, or other useful info.
One of my pages is showing trashed files that aren't actually in my trash, and I can't delete these trashed files. The thumbnails appear red with the hydrus logo.
>>17926 nevermind, didn't notice it was a query page. Just had to click the query twice to refresh the page to make the deleted files disappear.
(29.52 KB 405x267 hydrus tags.JPG)

>>17924 I use this regex: (?<=\\)(.*?)(?=\\) It will take the folder structure and parse it as tags; that way I can filter by folder and sort by title tag. For sadpanda stuff I use Lanraragi
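For anyone curious what that regex actually matches, here is a quick illustration in Python (the path is a made-up example; in hydrus you would plug the pattern into the filename tagging options at import time):

import re

# lookbehind/lookahead on backslashes: grabs each folder name between
# path separators, skipping the drive letter and the filename itself
pattern = re.compile(r'(?<=\\)(.*?)(?=\\)')

path = r'C:\pics\touhou\marisa\12345.jpg'
print(pattern.findall(path))  # ['pics', 'touhou', 'marisa']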
>>17928 >sort by title tag. Neat. But I still need a way to smoothly export via drag and drop while restoring filenames, for the purpose of quickly posting the images now that Hydrus lets me quickly find them. If it needs to save them first somewhere else, in some temp folder that deletes the files as soon as the next set of drag and drop exports is attempted, so be it. I want to skip the steps of opening the export menu, entering [filename] for the export, saving in a folder, then opening that folder, then posting the images, every time I want to post anything with a filename. If I can't do this, Hydrus is pretty much a no-go for me.
>>17929 I'd love that. In file>options>gui there are options for that discord drag and drop behavior, but it never works.
>>17918 1. Under file>options>tag presentation 2. No idea, sorry 3. You could sibling it, right click tag>siblings>add siblings, or you could just select every instance of a tag and remove it while adding your replacement tag.
>>17924 >>17925 >>17929 Disregard this, I suck cocks. Hydrus great, Hydrus is merciful, All hail Hydrus! >>>/v/659663 >>17930 The dicksword drag and drop export options actually just apply to all drag and drop exports and should be renamed. If it's not working, try selecting the checkbox for the drag and drop bugfix.
I had a great week with two big changes. First, there are Qt6 test builds for advanced users to try out, and second, in prep for a soon-to-come Note Import Options, I added filetype filtering and 'use default at time of import' tech to File Import Options. The release should be as normal tomorrow.
>>17933 hey check >>17932
>>17932 Fuck, I had the filename pattern wrong, thanks anon
https://www.youtube.com/watch?v=85bvcndvEpo windows Qt5 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Windows.Qt5.-.Extract.only.zip Qt6 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Windows.Qt6.-.Extract.only.zip Qt5 exe: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Windows.Qt5.-.Installer.exe macOS Qt5 app: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.macOS.Qt5.-.App.dmg Qt6 app: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.macOS.Qt6.-.App.dmg linux Qt5 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Linux.Qt5.-.Executable.tar.gz Qt6 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Linux.Qt6.-.Executable.tar.gz I had a great week with two big changes. First, there is an optional 'Qt6' version of the program for advanced users to try out, and second, File Import Options has some important updates. Qt5 and Qt6 If you are a regular user, stick with the Qt5 versions of the release this week. It is the same as before. Hydrus is moving up to a new version of its UI library, Qt. The new version has a ton of bug fixes and generally better support for newer OS concepts like UI scaling. I am going to be putting out releases for both 5 and 6 for a month or two, testing it with advanced and then normal users, and then will switch to 6 exclusively. Everything seems to be going well, and you don't have to do anything. If you are an advanced user though, please try out the Qt6 builds. They work exactly the same as the old ones. Just to be careful, I recommend you not try them on your real database first off, and doubly so if you do not have a great backup. I am not worried about database damage, but you never know, and if there are problems, I don't want to give you inconvenience on your main install. Try a fresh extract on your desktop first to make sure it boots ok, and then delete that extract. Then, if you want to try it on your real database, try doing a 'clean install' as here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs If you don't do a clean install, you'll still have Qt5 dlls in your install folder and the client will default to the older version (for now). If you are a macOS user, you don't have the concept of 'clean install', so just run the App as normal, but make sure you have a backup of your database first. There's also no Windows Qt6 installer yet. You can check help->about to see what version of Qt is currently running. So far, the update has been remarkably smooth for me, with very few bugs. A user has been watching the situation for me and kindly provided a patch to deal with the most important syntax changes, so moving over has not been a massive pain in the neck. I've been using it IRL for a few days now and I think things are just that bit smoother and less flickery. I am particularly interested in Linux and macOS users' feedback. So far, the main limitation I know about is that Windows 7 can't run Qt6 (it is just too old), but there may be other issues in other platforms. Let me know, and we'll see if we can iron them out. I am going to keep hydrus Qt5 compatible, so anyone who needs to stay on it but wants to keep updating will have the option of running from source. 
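If you run from source and want to confirm which binding you actually ended up with, qtpy (the abstraction layer hydrus uses) will tell you; a two-line check:

from qtpy import API_NAME, QtCore

# prints e.g. 'PySide6 6.3.1' or 'PySide2 5.15.2'
print(API_NAME, QtCore.qVersion())

In the built clients, help->about shows the same information.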
file import options My other plan for this week was getting 'note parsing' working for the downloader. I laid out everything I would have to do, and the first bulky thing I have to do is get 'Note Import Options' integrated. That will be a surprising amount of work, and over some ugly areas, so I decided to take things a little slower and do some cleanup beforehand so I could do it properly. So, this week I overhauled some of how File Import Options works. There is some behind the scenes work to make all the import options work a little nicer, and also, on the front end, File Import Options now does two new things: First, File Import Options now has a filetype filter. You can say 'only allow jpegs' or 'only allow video' or whatever you like, just like the 'system:filetype' search predicate. Import Folders used to have this too, but they send those settings down to their File Import Options when they update this week. Second, File Import Options now has the idea of being 'default' like Tag Import Options do. All your existing File Import Options will stay as they are, but any new ones will be in 'default' state, meaning at the time of import, your settings under options->importing will be used instead. This makes it easier to edit your File Import Options en masse, since there is only one place to go for most changes you ever want to make. The manage subscriptions dialog now has an 'overwrite file import options' button too, if you do want to mass-set some specific File Import Options across your subs. You might like to just set them all to default this week--I think I will. This 'default' concept is going to be applied to Note Import Options too. I am still thinking about how extensive the defaults system should be for File and Note Import Options. At the moment, File Import Options still just has the 'quiet' and 'loud' defaults in the options dialog, but I could expand things so you can set a default File Import Options for particular domains as you can for Tag Import Options. I'm thinking of combining them all into one tabbed edit dialog, so I may also extract the Presentation Options out of File Import Options, since those may have a different shape of 'defaults' than File Import Options most of the time. If you care about this stuff, let me know what you think.
full list - QT6: - thanks to a user's help, we are rolling out a Qt6 test build this week. we've been running Qt5 for a few years now. 6 is mostly a very large bugfix patch, and I am hopeful this update will relieve several legacy issues related to UI scale, colour support, draw flickering, and other unusual stuff. so far, it is working for me great. I'll be putting out joint 5 and 6 builds for 4-8 weeks, to iron out any big problems, and then I'll switch over to 6 releases exclusively. if you are an advanced user, please give it a go this or next week and let me know if you run into any traceback errors about deprecated method names or completely jank layout in the less used parts of the program - the actual changes you'll see are mostly style, just slightly different font spacing, things like that. if you have a system-baked Qt5 style that hydrus magically inherits, this will no longer work, you need to get a Qt6 version of the style (although I understand this is happening already for the popular styles, so you may already have them) - users on Windows 7 and similarly old OS versions are unable to run Qt6 programs, sorry! - I intend to keep the code 5-compatible, and users who run from source can choose whichever version of Qt they prefer, as here in the help: https://hydrusnetwork.github.io/hydrus/running_from_source.html#qt - the linux Qt6 build also goes up from ubuntu 18.04 to 20.04. let me know if you have any trouble, but it feels like it is time to update this too - . - file import options overhaul: - I wanted to do note parsing this week, but when I reviewed the whole job, there wasn't enough time to do it properly. so, in prep for a cleaner introduction of 'note import options' next week, I am overhauling how the other import options do some stuff - all file import options now support filetype filtering! it uses the same control as system:filetype or in import folders, but with some improved logic. on update, existing import folder filetype settings will be copied down to the file import options - file import options now work on a similar 'default' system as tag import options. existing file import options will stay as-is, but new ones will begin in a 'use the default settings at time of import' state. those defaults are editable under _options->importing_. for now I am not adding a 'use this file import options default for this web domain' system, but it might happen in future. let's see how this all shakes out first - the file import options button now has a right-click menu like the tag import options button - the manage subscriptions panel now has an 'overwrite file import options' button to mass-set FIO - cleaned up a bunch of old file import and import options code - . - misc: - system:filetype now remembers meta filetypes better. if you select 'all video', it will now still select all video even if hydev adds support for a new video type in future. also if you select 'video + animations', it'll say that rather than listing out every possible specific-type - fixed an issue where loading a favourite search wasn't always setting 'include current/pending' values on the buttons correctly - fixed up a status display in the gallery downloader and watcher pages--if you pause an importer while it is doing work, it now says 'pausing...' as its status until any current jobs are finished.
it was giving empty text before, as if it were finished already - fixed some unusual behaviour with downloader highlighting where the first query pended to an empty page was secretly highlighted for the next session load, and fixed the 'subscription gap downloader' also doing this and not obeying the normal 'highlight new downloaders if nothing already highlighted' option - improved the error when the 'make sure this directory exists' function runs into a file with that pathname - fixed a rare selection position error, maybe Qt6 only, when clicking in the thumbnail grid as it is loading - . - boring Qt6 code cleanup: - as a side thing, I set up quick-launch environments for QtPy5, QtPy6, PySide2, and PySide6 in my IDE this week, so I can now test all these situations and jump back in time no problem in future - integrated a user's patch to bring us up to Qt6 compatibility and did a little more work to get it backwards compatible with older qtpy and Qt5 - refactored the critical Qt boot setup and monkeypatching from QtPorting to a new QtInit module - migrated the hydrus code for keyboardModifiers, event-pos, and globalPos all to the Qt6 equivalents so the monkeypatching is always going to be on older versions looking forward - fiddled with QPoint and QPointF conversions a little so I _think_ Qt5 and Qt6 are always talking about the same type - updated build scripts and requirements.txts for the new situation - updated the help a bit for the new situation. next week: Note Import Options! I'm going to focus on it. I'll see if I can merge all the Import Options together, get the note merge tech we need working and tested, and then get some actual note parsing working in the downloader so we can play around with it.
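Re: the keyboardModifiers/globalPos migration in the list above, for anyone wondering what that kind of compatibility shim looks like, a minimal sketch (not hydrus's actual code):

def event_global_point(event):
    # Qt6 deprecates QMouseEvent.globalPos() in favour of
    # globalPosition(), which returns a QPointF
    if hasattr(event, 'globalPosition'):
        return event.globalPosition().toPoint()
    # Qt5: globalPos() already returns a QPoint
    return event.globalPos()

Calling this instead of the raw event method gives you a plain QPoint on both bindings.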
>>17932 >>17934 Thanks, I will make sure to rename the option here. Sorry for the confusion around this. I think that was originally an experimental thing that got accidentally formalised. I will read through your posts again properly and talk more on Saturday, but if it helps at all, I have some FAQ discussion on my thoughts around filenames and hashes and folders for our situation as background reading here: https://hydrusnetwork.github.io/hydrus/faq.html#filenames
I updated to the Qt6 build and now the whole client is like 720p for whatever reason
>>17938 I read the relevant FAQ section before ever complaining. Pic related is why you still need filenames.
Whenever I press f3 to begin tagging an image, the window that appears is left-aligned, sitting right on top of the image preview, so I have to drag it to the right every time to view the image clearly as I tag it. Does anyone know of a way either to move where the window appears or to move the image preview to the right?
When tagging new files and trying to add tags to multiple files at once, it's easy enough to select a group of consecutive files in the inbox and tag them all together, but is there any way to select non-consecutive files as a group? I would have thought shift-clicking would accomplish this, but it just selects everything in between two files I click, same as using shift+arrow keys.
>>17936 >Tag import options This reminded me, the default tag blacklist is too damn hard to get to. As it is now, you go Network > Downloaders > Manage default tag import options (opens new window) > Tag import options (new window) > Blacklisting on ... (new window, target). I think it should be much easier to get to (say Tags > Blacklist), and having the option to (un)blacklist a tag on a selected file via right click menu would be amazing as well.
I've noticed when I'm going through tagging my files, I often forget some things I need to tag on them, things I need for every file or at least most files I'm currently tagging. Is there a way for it to display suggested namespaces that aren't yet used on the selected file(s)? For example, for pretty much every file I tag, I record whether it's completely safe for work, suggestive, or explicit under the namespace "suggestiveness". Or, for large amounts of files containing characters, things like whether a male or female is depicted, using the namespace "sex". I'd like a display, probably underneath the recent tags list, showing suggested missing namespaces I've picked out that I use fairly often. Is there something like that?
And while I'm at it, is there a way to mass edit all instances of a certain tag or namespace? I've made mistakes in my early tagging. When it's just replacing a tag with another similar tag I think is better, it's easy enough to remove it and then add the new tag, but this often involves retyping things like the namespace unnecessarily, when it would be easier to just edit the existing tag. It's a lot more cumbersome if I fuck up a namespace. For instance, I didn't realize that "character:" was an already existing special namespace that's color coded in green. I went and used the namespace "name:", and now I have to manually change every set of "name:*" tags to the proper namespace. If I could just edit all instances of the namespace directly, this would be much less tedious.
>>17945 And I now realize I'm retarded. I could have just deleted the default 'character' namespace in the tag presentation options, added the 'name' namespace, and turned it green. I think I might be too stupid to use hydrus.
How do I make the creator tags appearing over the top of every thumbnail with a creator namespace tag go away? I thought deselecting "on thumbnail top: creator - series - title" would accomplish this but it does not. >>17942 Nevermind, I am again fucking stupid. Ctrl select instead of shift select accomplishes this function.
Now that I'm making use of colored namespaces, is there any way to change the sort order by color? I'd like all displays of tags to sort my "namespaced" and "unnamespaced" tags below all the specific namespaces that I've given a color.
(91.26 KB 617x367 095026.png)

I use 125% scaling in Windows (text is just too small for my eyes on a 27" 1440p display). Previous versions of Hydrus ignored that, but the Qt6 build does not. However, thumbnails look quite ugly now, as they are scaled to 125% with no filtering. The same issue exists in the media viewer too, and probably the preview pane as well. Perhaps you could make image rendering ignore the scaling, if possible. Fix: client.exe properties > compatibility > change high dpi settings > override high dpi scaling behavior: set to System (Enhanced). But now I have to reduce the size of thumbnails to get the size I'm used to once they scale to 125%.
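For what it's worth, if you run from source, Qt exposes this as a programmatic knob too; a minimal sketch (PySide6 shown, and it must run before the app object is created; whether hydrus should set this is hydev's call):

import sys
from PySide6.QtCore import Qt
from PySide6.QtGui import QGuiApplication
from PySide6.QtWidgets import QApplication

# PassThrough keeps the OS scale factor (e.g. 1.25) as-is instead of
# rounding it to an integer; this is what exposes fractional scaling
QGuiApplication.setHighDpiScaleFactorRoundingPolicy(
    Qt.HighDpiScaleFactorRoundingPolicy.PassThrough)

app = QApplication(sys.argv)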
(319.40 KB 1451x1122 100753.png)

>>17949 But that fix completely breaks video rendering. Looks like I'm not using Qt6 for the time being.
>>9135 >Deleted Why would you do this anon? If you realized your complaint was in error, you should at least explain how in order to help others avoid the mistakes you made and better use the program.
>>17945 I've sped up fixing individual tags a bit with proper use of the copy and partial copy functions, but I still haven't found a way to alter a namespace if I want to change one.
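In the meantime, one workaround is possible over the Client API: find every file with a 'name:' tag, add the 'character:' version, and delete the old one. A rough sketch, untested, assuming the API is enabled at the default port with search and edit-tags permissions, 'my tags' as your local service, and the 2022-era 'service_names_to...' field names; back up your db before trying anything like this:

import json
import requests

API = 'http://127.0.0.1:45869'  # default Client API address
KEY = {'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE'}
SERVICE = 'my tags'

# find the files (assumes your version accepts a wildcard here)
ids = requests.get(API + '/get_files/search_files',
                   params={'tags': json.dumps(['name:*'])},
                   headers=KEY).json()['file_ids']

# fetch their tags (chunk this if you have thousands of files)
metas = requests.get(API + '/get_files/file_metadata',
                     params={'file_ids': json.dumps(ids)},
                     headers=KEY).json()['metadata']

for meta in metas:
    current = meta['service_names_to_statuses_to_tags'].get(SERVICE, {}).get('0', [])
    old = [t for t in current if t.startswith('name:')]
    if not old:
        continue
    new = ['character:' + t.split(':', 1)[1] for t in old]
    requests.post(API + '/add_tags/add_tags', headers=KEY, json={
        'hash': meta['hash'],
        # action '0' adds, '1' deletes
        'service_names_to_actions_to_tags': {SERVICE: {'0': new, '1': old}}})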
Can you make it so that if there are multiple same-quality duplicates that match a search, only one will appear on the page each time? I've got like 15 files that are same-quality duplicates of each other, and because of that they just keep showing up everywhere, much more than other files, so it's unbalanced.
>>17953 Why not just delete the dupes?
>>17918 No way to do 2 yet, but it is requested often and I plan it. Basically, the way you rename tags in hydrus is the 'siblings' system under tags->manage siblings, which I completely overhauled last year to work with more reliable logic. The next expansion of this system will be to make it accept rules and work efficiently en masse, and for namespaces in particular, for exactly the sort of renaming you want here. >>17920 Also >>17924 No. Hydrus doesn't handle multi-file collections very well yet, and it absolutely can't do 'playlists' that external programs could accept yet. In future I expect to add support here and there, and I'd like to add really nice cbr support one day, but I don't think we'll ever be as good as a program that is tuned for it like ComicRack. Feel free to experiment, but keep your manga out of hydrus for now. Same for music mp3s that you'd want to throw at foobar--hydrus isn't for that atm. >>17922 Thanks for the follow-up. This is odd, and I am not sure how it is happening. There is some transitive logic in the duplicate system, so groups can merge in unexpected ways sometimes, but if you always click 'and delete the other', then there shouldn't be any groups of n>1 in your 'my files' collection. And there's no auto-dupe resolution yet, so the only way you can set dupes is in the filter or manual thumbnail commands. Unfortunately, your symptoms point to a logical problem on my part, as if the various 'media ids' here are being assigned to the wrong files. But this would probably only be feasible if the files were assigned as dupes completely randomly, as if all the dupes you find were weird false positives, like a picture of an anime babe linked up with a picture of a train. It doesn't sound like you are getting that, right? I am now afraid that the dissolve action somehow has bugs, that it isn't relinquishing ids fully sometimes, and it hasn't been noticed since the command is used so infrequently. To get even more concrete, could you walk me through the shape of one of these problems? For instance, could this have happened in your client? - Four pictures of Asuka, ABCD, all the same but different sizes - You set A>B, B deleted - You set C>D, D deleted - A and C are now somehow set as dupes Is that the sort of thing that is happening, that dupes are being set in confusing ways but they are ultimately correct? Or is this more like it: - Run duplicate filter - This picture of Asuka > that one, delete the bad one - Look up your dupes - A picture of Rei is now duped to a picture of a bus somehow I'd like to try to figure out exactly where the logic is failing here, so any specifics you can offer would help.
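To illustrate the transitive part with a toy model (this is just the general union-find idea, not hydrus's actual schema): every 'A and B are dupes' decision unions two groups, so separately-made pairs collapse into one group the moment any cross link appears.

parent = {}

def find(x):
    parent.setdefault(x, x)
    if parent[x] != x:
        parent[x] = find(parent[x])  # path compression
    return parent[x]

def set_duplicates(a, b):
    parent[find(b)] = find(a)

set_duplicates('A', 'B')  # A>B
set_duplicates('C', 'D')  # C>D
set_duplicates('A', 'C')  # one cross link later...
print(find('D') == find('B'))  # True: all four files are now one group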
>>17955 >No way to do 2 yet, but it is requested often and I plan it. Awesome.
>>17924 >Why must hydrus replace a filename with the file's hash? >>17925 >>17932 >>17940 Thanks again. My general thing on the filename/hash question is that FAQ, and I've made the fairly firm decision to do hydrus file storage as I have because of those issues. It wouldn't be technically impossible to track external files with custom filenames, but it would need a significant amount of additional maintenance and recovery code. I'd have to deal with files being renamed and changed and all that, which I have decided is outside of my scope. I'm going to try to solve the problem of managing a million imageboard style files without filenames, so anything that needs filenames is going to have to stay out or be hobbled a bit. Sorry! I used to use ACDSee for those sorts of images on my personal machine, which worked ok and had simple tagging support. I now use ImageGlass and like how quick it loads. It works fine for manga if you don't need bookmarks and other stuff that ComicRack etc... support. I agree that losing the ability to easily upload to filename threads is a shame. The drag-and-drop filename pattern is undeveloped too. I want to rework that 'phrase' system into a proper generator object (the stuff you see under options->tag presentation for the thumbnail banners was an initial attempt at this, btw, but I never really liked how they came out). If you decide you still like hydrus and stay with it, please let me know how you like and dislike the rest of the program and the learning process in general. Keeping the help docs up to date and the UI new-user-friendly is a constant battle. >>17926 >>17927 Hmm, this generally isn't supposed to appear in any case. Normally, when a trashed file leaves the trash, all its thumbnails are removed from view. You have to do some advanced stuff to see these files normally. Could this session have been an old backup or something, loaded from before the thumbs were deleted? Could I have made it more obvious somehow that these files were permanently deleted? I know that stuff is buried in the right-click menu labels, but something more obvious? Maybe instead of the trash-can icon, I could show a more permanent 'this is not available any more' kind of icon? >>17939 Thanks. Can you talk more about this? Was the window shrunk to a tiny size, or is it more like the window is the normal size but everything inside it is blown up huge? And maybe pixelly? I've got some reports that thumbnails are scaling badly in Qt6 if you have UI scale > 100%. Do you know your UI scale on that monitor? If you are on Windows, right-click on your desktop and hit 'display settings' and then look for the UI Scale %. Also, maybe you can post a screenshot of how it looks? Spoiler if nsfw, but I don't care about the content. Blank page is also fine. You can also email it to me or DM me on discord. Win+Shift+S takes a screenshot on Windows.
>>17957 > It wouldn't be technically impossible to track external files I'm not really asking for that. The solution to the filename issue already existed within hydrus, so the core problem is resolved. >autotag all imported files with their own filename under the namespace "filename:" >Rename all drag n drop exports to whatever is tagged under the namespace "filename:" The root cause of the issue was that the drag n drop export option is labelled as an option for Discord, leading one to believe it's something that functions only for Dicksword, and that it's found under gui options. Though the latter might make sense and I'm just stupid for expecting a section dedicated to export options, or for export options to be paired with import options when import options are so much more robust.
>>17957 While I'm on the topic of the "filename:" namespace, would it be possible to have just this namespace retain case sensitivity in its tags? This is hardly a necessary feature in my eyes, but it would make exported filenames look nicer.
>>17941 Yeah, check the last response here >>17913 >>17942 >>17947 There are extra options for how shift- and ctrl-clicks do preview highlights under options->gui pages, and I'm moving them to 'thumbnails' soon. Damn, thank you for the report about that 'turn off to hide' not working. I think that's just a bug, I'll fix it. >>17948 Not yet. You can only sort tags/namespaces alphabetically right now. I'd like to add a forced sort system one day so you can make sure (creator then series) is always at the top etc...
>>17960 >Yeah, check the last response here >>17913 >>17913 >Should be able to save its last position too. Thanks. This helps immensely. >Damn, thank you for the report about that 'turn off to hide' not working. I think that's just a bug, I'll fix it. Ah, I actually did something productive for once. >I'd like to add a forced sort system one day so you can make sure (creator then series) is always at the top etc Would be fantastic and make it much easier to check for tags I've forgotten to add to images.
>>17949 >>17950 Thanks. I have had a couple of reports like this. I can reproduce the problem on my dev machine, so I will work on it. I hope to have something figured out for this week or next. Please check the later changelogs and give it another go when convenient. >>17943 Thanks. I'll keep this in mind as I rework the different import options in the near future. Some of this is tricky to make convenient UI for simply because hydrus has complex options, but I feel like the 'edit import options' dialog(s) should have some link to edit the defaults there, especially when you are set to use that default. >>17944 I can't think of anything specific here, although I like the idea of a checkbox workflow that makes sure you do the bare essentials. This sounds stupid, but could you maybe just make a search page for something like: system:archive -rating:anything OR -sex:anything OR -whatever:anything I think that says 'find anything processed that doesn't have all of these namespaces'. You can make an OR real easy these days just by clicking the 'OR' button. And to get an 'anything without this namespace' "-rating:anything", just type '-rating' and it should turn up as a special thing to select. If you are in help->advanced mode, you might also want to try the 'OR*' button, which allows very clever searches.
>>17962 >This sounds stupid, but could you maybe just make a search page for something like: >system:archive >-rating:anything OR -sex:anything OR -whatever:anything >I think that says 'find anything processed that doesn't have all of these namespaces'. This was basically my first idea for a bandaid solution, but it's an after-the-fact solution for once I've already missed a bunch of tags, rather than an at-the-time-of-original-tagging solution like the checkbox workflow you describe. Maybe in the future, once more pressing issues are taken care of, you could add that function, since it seems like a large addition that would require a good bit of work.
Session save confirmation dialogue box currently does not include the name of the session you are overwriting. Could you add the name of the session being overwritten to the text in the dialogue box? This provides a safeguard against overwriting a session that was misclicked when the user wasn't aware that they misclicked.
>>17945 >>17946 >>17952 It sounds like you have found the tags->manage tag siblings system, which replaces one tag at a time, but there's no support for namespace siblings (renaming 'creator:' to 'artist:' and so on) yet. It is requested often, though, and something I would like to do. I hope to make it the next large expansion of this system, making the current (very CPU heavy) logic work efficiently en masse and on more complicated tag replacement algebra. If you haven't seen siblings yet, check them out! Some extra help here: https://hydrusnetwork.github.io/hydrus/advanced_siblings.html >I think I might be too stupid to use hydrus. Nah, just takes practice. I over-designed all this shit and forget things about my own program all the time. >>17953 I'm not totally sure how to deal with dupes in normal searches. When I wrote the current dupe system, I was mostly concerned with getting the database structure solid, so the UI support, as you've seen, is pretty sparse. I think I may do something like collections, where you can hit a checkbox and dupes will collapse into one thing, but navigating how all that UI should actually work and display and collapse/expand while still being fundamentally useful only makes me think of how much work it would need. One option you have here, btw, although it may be annoying, is adding 'system:file relationships' ('system:is the best quality file of its group') to your query. It'll filter only for kings. Give that a go, and if you like it, let me know. Maybe I can figure out a quiet 'always add this system predicate to every search' option, like the implicit system:limit. >>17958 Great, sorry again for the confusion. >>17959 Can't do it with tags, I'm afraid, as that's all locked in for lowercase. I'm working on a 'notes' expansion right now though, and I wonder if notes is somewhere we could start storing this 'rich' text metadata. Tags are for searching, after all, not describing, so when we have much nicer access to notes with better UI, system, and Client API, maybe we'll start parsing filename stuff to there instead. >>17964 Sure, thanks!
>>17957 Regarding the 720p stuff: here's how it looks. The first two images are the Qt6 build, while the last is Qt5, which is how hydrus has always looked for me. I'm on a laptop, so my ui scale is at 125% normally. Thanks for the screenshot thing, never knew about that functionality on W10, I've still been using Win+PrtSc lol.
How come Hydrus doesn't see these two images as potential duplicates, even at search distance 6? Very odd. (nsfw)
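For reference, the 'search distance' there is (as I understand it) the Hamming distance between the files' 64-bit perceptual hashes, i.e. how many of the 64 bits differ, so distance 6 is still a fairly tight match; crops, flips, and heavy recompression can easily push the hashes further apart than that. A quick way to compare two hashes yourself (the hex values are made up for the example):

def hamming(phash_a, phash_b):
    # number of differing bits between two 64-bit hashes given as hex
    return bin(int(phash_a, 16) ^ int(phash_b, 16)).count('1')

print(hamming('e1b5c8f0a2d49377', 'e1b5c8f0a2d49340'))  # 5, within distance 6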
This has probably been asked before, but do you plan on adding jpeg-xl support soon? If you haven't heard of it, it's a new image format that intends to be a strict upgrade and replacement for jpg, png, and gif files, by combining the features of all 3 while also having a lower filesize than any of them. It's really exciting and I already have some jxl files on my computer that I'd like to import.
Random small idea to add to the todo: skipping corrupted downloads. I've only encountered this with Sankaku, but some files stall at the exact same spot in the download and Hydrus eventually restarts downloading them. These files will always stall at the same location (as far as I can tell), and I have to manually skip them. >>17968 I think the Hydrus stance on new file types is that they're added when the libraries used for displaying images support them. I think I remember this happening with webp too. I'm excited about JXL as well. I think I remember reading that eventually Hydrus might have a sort of "hooks" system, where you could, say, pass an image through Waifu2x from inside Hydrus and have the new image replace the old one. I'm assuming metadata about the original would be saved somewhere, so you could still copy the original's hash and such if you wanted. If such a system were implemented, I'd love to convert all of my JPEGs at least to JXL, since you can losslessly convert them. >>17601 Dev, let me know if I was just dreaming about the above. I think I read something like that in one of these threads, but could be making it up.
the nitter downloader doesn't grab any tags at all for me, even though it looks like it's supposed to at least grab creator and title tags, from what I saw when I took a look at the parser.
It'd be cool if you could add web domains to the duplicate filter's scoring system and give each of the domains you add a positive or negative score. I notice that certain domains typically give higher quality images and other domains often give lower quality ones. It'd be cool if we could give that info to the duplicate filter to help the A/B pairs be more accurate.
For tags I haven't thought of a good namespace for, I'm currently using the namespace "related:", so when I feel like coming up with namespaces later, I can check all these tags appearing as results for "related:*". Is this necessary autism on my part, or is there a way to display all unnamespaced tags?
>>17965 >I'm working on a 'notes' expansion right now though, and I wonder if notes is somewhere we could start storing this 'rich' text metadata. I saw this smallfont filename moments ago and creamed my pants at how nicely it enhanced the joke. I eagerly await notes storing rich text data for filename exporting.
>>17955 >dupes It's certainly not the second option: I'm not getting false positives; they're all correct duplicates that only differ in size or compression. But I don't think it's the ABCD issue either, I'm just having different pairs of images. Some are caught in the dupe filter, others only show in the search. My actions in the filter don't seem to affect what ends up in the search. I just don't get why the filter ignores the latter until I dissolve them so they can be analyzed again. I think what's happening here is as follows: as I mentioned, I made a parser using the API of a facebook-like social network site. There are albums of images, and then there is the "wall" (I don't know if it's the proper term, like the main feed of a club) where those images are posted, all from the albums. So I have a subscription that pulls new images from the albums, and another one that checks the wall to mark the images that were already posted (it's set up to add a specific tag to all images from the wall). Now, the images are the same, but the ones from the albums are older and sometimes use a different compression algo, so Hydrus often redownloads the copy from the wall instead of just assigning a new tag to an existing pic. That's okay with me, since they usually end up in the dupe finder. But sometimes they don't, and I have to check the search expression and manually dissolve them so I can use the dupe filter on them. I've yet to see any pattern in which images go where. I think that maybe I did something weird with the "associate and trust the source urls" or something like that; the parsing system I created is super complicated and I'm not even sure how it works anymore. Some images end up with 5-10 file urls that point to the "same" image, but use different parameters and compression, since the downloader runs once a day and requests a whole page of images, often visiting the same pics multiple times. But the urls shouldn't affect the dupe filter in any way, right?
(60.17 KB 527x618 1554128234016.jpg)

It seems like I fucked something up somewhere. The search tag for "system:everything" disappeared on me a while ago, and since I still see people talking about it, I can only figure I messed something up so it no longer appears. Also, up to this point I've been manually tagging thousands of my files by hand with just the tags I care about, but I'm kind of done with that, so I have hundreds of tags I'd like to normalize with the booru ones. I assume there's no way to change tags with underscores to spaces, right? I've only found the option that will display spaces, but not change the actual tag. Another thing is that I use "artist" tags in my db, and I noticed that boorus (despite having it labeled as "artist") are importing into the "creator" namespace. Is there a way to have them imported as "artist" instead, or to change all mine to "creator"?
>>17975 It automatically gets hidden if you have over 10,000 files; you can re-enable it in the settings under 'search'.
>>17976 Is 10,000 arbitrary, or is it around there when Hydrus starts struggling?
I have downloaders that import files from artists I like on kemono. Sometimes the files it downloads are zip files, so what I do with those is extract the zip files' contents to a desktop folder, then delete the zip file from hydrus, and import those extracted files at some later time from that folder. The problem with this is that the zip files had a filename that I want to save. The name is added as a tag to the zip file itself, like I have it set to do with all files, but I don't keep the zip files because they aren't really usable, so I keep the contents and delete the zip file. Is there any way to have the zip file's filename not be lost and somehow be added as a tag to the content files that I import, maybe under a namespace like "archive file name:" or something? There probably isn't any way to keep the names unless hydrus had some "auto-extract" and import feature for archive files that it could then apply some tag logic to, but it sucks that I'm just losing that info, especially since files that I import from zips downloaded through hydrus don't have any tags at all, because the downloader didn't download those files directly, only the zips that contain them. So I don't even get creator tags or title tags automatically for those.
>>17978 I've run across many pictures that other people have tagged with "zipname:original_zip_filename". You could do that as well by adding the tag while importing. It's easiest to do when you import the contents of only one zip file at a time.
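If you want to automate the whole thing, a rough sketch over the Client API (untested; assumes the API is enabled with import and edit-tags permissions, 'my tags' as the service, and a hypothetical zip folder path you would substitute):

import os
import zipfile

import requests

API = 'http://127.0.0.1:45869'  # default Client API address
KEY = {'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE'}
ZIP_DIR = r'C:\Users\you\Desktop\kemono_zips'

for name in os.listdir(ZIP_DIR):
    if not name.lower().endswith('.zip'):
        continue
    extract_dir = os.path.join(ZIP_DIR, name + '_extracted')
    with zipfile.ZipFile(os.path.join(ZIP_DIR, name)) as z:
        z.extractall(extract_dir)
    tag = 'archive filename:' + os.path.splitext(name)[0]
    for root, _, files in os.walk(extract_dir):
        for f in files:
            # import the extracted file; hydrus replies with its hash
            r = requests.post(API + '/add_files/add_file', headers=KEY,
                              json={'path': os.path.join(root, f)})
            h = r.json().get('hash')
            if h:
                requests.post(API + '/add_tags/add_tags', headers=KEY, json={
                    'hash': h,
                    'service_names_to_actions_to_tags': {
                        'my tags': {'0': [tag]}}})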
Is there a way to make that "discord" drag and drop option you guys are talking about work for groups of files over the 200mb limit and for more than 25 files at a time? It's okay if it's slower; it'll still be quicker than me dragging and dropping each file one at a time.
Any idea why my search query won't find any new content? The sankaku tag has millions of images I haven't downloaded. In the search log, it says "0 new urls found (20 of page already in)".
>>17981 >Any idea why my search query won't find any new content? The sankaku tag has millions of images I haven't downloaded. forgot to mention this is a gallery downloader page
Any chance of a downloader being made for the sankaku channel beta? The site is restricting more and more features to the beta only and it's clear that the "chan" site's days are numbered, but it's the only way I can use the downloader with hydrus companion on post pages, so a downloader for the beta would help to fix that.
>>17980 At that point just export them all at once into a temporary folder. Drag and drop export seems to be for quick small operations.
>>17983 Are you having the same problems as >>17982 as well?
>>17985 everything seems to be working fine for me.
>>17981 also forgot to mention that the gallery query in question had already found 15k files in the last 4 months before it crapped out like this.
1. How often should I update the Hydrus client? There is an update every week. 2. Will an older version's database work with a newer Hydrus client?
>>17977 I believe it is arbitrary, as the limit where Hydrus begins coughing depends on how powerful your machine is.
>>17983 There kind of already is one on the cuddlebear repo but it's shit. As far as I can tell it only grabs more URLs at once. It still has broken downloads, grabs triangles, links expire after an hour, and it requires a manually obtained API key every 48 hours. This is all Sankaku's fault, not the downloader's. Sankaku is steaming shit but people keep uploading there for some reason.
I had a great week continuing the Qt6 work and Import Options overhaul. I fixed several Qt6 bugs, including fuzzy thumbnails at >100% UI scale, and collapsed all File/Tag Import Options into one button and dialog. Advanced users will be able to play with prototype Note Import Options and note parsing. The release should be as normal tomorrow.
>>17972 On second thought, even if such action was unnecessary, making all my tags namespaced, even miscellaneous tags not yet given a proper namespace, is a good thing. Whenever I make a typo tagging something that results in a new tag, it shows up in bright blue, so I know to fix it.
>>17983 Also requesting this. >>17990 >Sankaku is steaming shit but people keep uploading there for some reason. It has content that you can't find anywhere else, so I guess there's incentive to focus uploads there. Also, Rift's right-wing articles. :)
https://www.youtube.com/watch?v=qBJRJvHYS_k windows Qt5 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v495/Hydrus.Network.495.-.Windows.Qt5.-.Extract.only.zip Qt6 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v495/Hydrus.Network.495.-.Windows.Qt6.-.Extract.only.zip Qt5 exe: https://github.com/hydrusnetwork/hydrus/releases/download/v495/Hydrus.Network.495.-.Windows.Qt5.-.Installer.exe macOS Qt5 app: https://github.com/hydrusnetwork/hydrus/releases/download/v495/Hydrus.Network.495.-.macOS.Qt5.-.App.dmg Qt6 app: https://github.com/hydrusnetwork/hydrus/releases/download/v495/Hydrus.Network.495.-.macOS.Qt6.-.App.dmg linux Qt5 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v495/Hydrus.Network.495.-.Linux.Qt5.-.Executable.tar.gz Qt6 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v495/Hydrus.Network.495.-.Linux.Qt6.-.Executable.tar.gz I had a great week working on more Qt6 support and getting Note Import Options ready for advanced users to try out. Qt6 The Qt6 launch last week went generally well. There were a couple of little typo bugs as expected, but most users reported nothing drastic. I have fixed several issues and also improved graphical quality at >100% UI scale. Qt6 handles UI scale tech much better, but that also exposed all the better where my custom UI was failing. Thumbnails at 125% were looking pretty ugly, with nearest-neighbour scaling, so I knuckled down and did my homework on how all this is supposed to work, and I think I have it fixed. Thumbnails should look ok at any UI scale in Qt6, and their banner text too. My fixes apply to Qt5 too, but as far as I can tell that only really works comprehensively for 100%/200% scale. I will try to tackle the media viewer next week. If you are an experienced user with a backup, please feel free to try Qt6 out on your real install. If both Qt5 and Qt6 are available, the client will now default to Qt6, so you shouldn't need to do a 'clean install' like last week. My test of this went fine, but if there is some odd dll conflict when you try to boot, check here on how to clear things out and either revert to Qt5-only or try Qt6-only: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs Any reports on failures here would be useful so I can write in any needed 'delete these old files' rules to the Qt6 Win installer. As a reminder, afaik Windows 7 cannot run Qt6, so don't try it out if that's you. I will switch over to Qt6 exclusively in a few weeks, at which point I'll update the help and talk more about your future options, which will be: stop updating the client; move to a newer OS; run in Win7/Qt5 from source. Note Import Options Unfortunately I couldn't fit all this in again, but I've done work I'm happy with and have parts of it ready for advanced users to play with. Fingers crossed, the first simple version of this will be completely integrated next week. The File Import Options update last week went ok. I messed something up with the Presentation Options, so highlighting a gallery or watcher with a default FIO was always showing the same thing (new or all files) instead of what the default actually said. This is fixed. I have pushed FIO and TIO further together this week. File Import Options and Tag Import Options are now merged into one button and one dialog across the program. The dialog is tabbed, so you edit both sets of options at the same time, and in future, any new Import Options will work through the same interface. 
Note Import Options is going to do that very soon. I need to figure out how NIO defaults are going to work (probably the same way TIO does, based on network domain to allow for finicky per-site options as needed) and do some more parsing stuff, and then it should all link together. If you would like to check out what NIO looks like, please hit up network->downloader components->EXPERIMENTAL: check out Note Import Options. Read the help and tooltips, and let me know if it is crazy confusing or if you think I have missed anything simple, obvious, and important. Downloader creators can also play with note parsing. It isn't linked to anything yet, but you can see how it works. It is pretty simple, just a new Content Parser type. You set the name of the note and parse the text. It'll get washed through the NIO and then applied to the file. The main remaining problem is the parsing system can't yet do multi-line results. I'd like to tackle that in the coming week(s). If you make downloaders, please have a think about what notes if any you would like to parse and what tech I can add to make that easier. However to stop myself going crazy, I have decided for this first version to not allow parsed note names.
full list - Qt6: - if available, Qt6 is now the default. specifically, if the QT_API environment variable is not set, the default is now PySide6, and if that is not available, then PySide2 (Qt5). previously, the opposite was true - fixed a bug in last week's File Import Options default update with the new 'default' FIOs always showing 'new' files on a gallery/watcher highlight. the Presentation Import Options and the check to see if the pending local file domains actually exist now correctly look up the 'default' FIOs - Qt6 has much better UI scaling support than Qt5 for zooms other than 100%/200%. many Windows users are at 125%/150%, which revealed some pretty ugly thumbnails and thumb banner text in Qt6. thank you for the reports. I did my homework and read up on how this is _supposed_ to work and I have hacked pretty thumbnails at unusual UI scales. it also redraws itself correctly when I move from a 100% screen to a different one at 125%; let me know how you get on. I'm quite pleased - the media viewer is still slightly borked at >100%. the fix will be slightly different, but I have a plan and hope to have it sorted for next week. - fixed setting a mouse scroll wheel shortcut in shortcut options in Qt6 - as a reminder, as far as I know, Windows 7 cannot run Qt6. I will be dropping the Qt5 build in a few weeks, so if you are a Windows 7 user, have a think on what you want to do--either stop updating, move hydrus to a newer OS, or run from source on Win 7/Qt5 - . - note import options and note parsing: - note parsing is ready in parts. I am rolling them out for feedback from advanced users and hope to link it all up into a working system next week! - the different 'x import options', previously file and tag import options, and this week adding 'note import options', are now edited through one combined button and dialog. this 'import options' button dynamically adjusts to deal with how many types of import options the importer has and will relabel and tooltip and right-click-menu itself appropriately - this new button and multi-edit-panel show '(is default)' status in menus and tabs for quick referral - if you want to play with note import options, check out the new EXPERIMENTAL menu option under _network->downloader components_. read the help and tooltips and let me know if I have missed anything simple, obvious, and important - I have no default system for Note Import Options set up yet, so I have not added it for real. I will do something domain-based, similar to Tag Import Options. - I did however write simple note parsing support. any Content Parser can now have a 'note' parsing type, with a note name. downloader creators, please feel free to play with this, although it isn't complicated and isn't plugged in yet. I think we should review what sites have parseable notes and plan for that rather than start implementing for real just yet. the main limitation is that the parsing system can't do multi-line results yet - I'd like to see if I can get NIO defaults going next week, and this should suddenly all lock into place. multi-line parsing may be easy or a massive pain, I'm not sure yet - . - misc: - added two new checkboxes to _options->files and trash_ to turn off the yes/no confirmation when you copy/move file across multiple local file services - the 'overwrite this session?' 
confirmation dialog now says the session name you are overwriting - fixed a bug where thumbnails were not immediately updating their banner text on changes to the summary generator objects in _options->tag presentation_ - moved the 'focus thumbnail in preview window' checkboxes from 'gui pages' options page to 'thumbnails' - updated the text and enabled status of the 'BUGFIX: discord DnD' stuff in _options->gui_ - updated the job description texts in the file maintenance dialog, improving formatting and clarifying what happens in each missing/incorrect job, and what 'remove record' means precisely (it leaves no deletion record) - fixed a bug from last week when trying to edit your default tag import options - . - boring note import options cleanup and refactoring: - moved ClientGUIImport code up to a new hydrus.client.gui.importing module, refactored it into multiple files, and merged in some other edit panels for various import gui - merged the file/tag import options buttons into one cleverer and cleaner class. changed its update callables into nicer Qt signals. wrote a new tabbed edit panel for it to work with, and replaced all old import option buttons across the program with the new system - fixed an issue where the 'import options' buttons (now merged) would allow you to set them as 'default' through the right-click menu even when the button was set to not allow defaults (this state occurs in the options dialog, when you _set_ what the defaults are) - fixed the same when you try to paste default options into the button - brushed up and completed the note import options object - wrote a 'edit note import options' panel - fixed a small thing where the 'string-to-string' list widget wasn't setting the custom 'value' column header name correctly next week More of this, I'm afraid! I regret focusing on this for so long, but the work is going well and I want both done properly. I'll see if I can get the media viewer displaying good at >100% UI scale, knock off any other Qt6 problems, and then hammer out Note Import Options defaults so we can actually start parsing real stuff.
>>17994 >Qt6 "Hydrus.Network.495.-.Linux.Qt6.-.Executable.tar.gz" fails to launch in my system. Konsole reports "Error loading libpython3.8.so.1.0" I was looking in my package manager and the version available is libpython 3.7.3-2; so it looks like my Debian GNU/Linux 10 (buster) is a bit old and not suitable for Qt6. The Qt5 version launches fine.
I'm using version 494 (Qt 5) on Manjaro Linux. I used to use the Windows version, but moved to Manjaro in 2020. Ever since I moved, everything has looked terrible when I double-click on an image to open it. Looks pixelated and blown up. Dev, do you think Qt 6 will help with this? Never had this problem back on Windows.
I use the AUR package on Linux. I tried to do a naive update (changing the PKGBUILD to grab the commit of 495 instead and running makepkg), and Hydrus broke. The DB updated fine, but after clicking on a gallery query I had made in a previous session it froze and I had to force-kill it. Downgrading to 494 gave a warning about the DB being for a future version, but it seems to work fine. Again, I'm not surprised since I forced the upgrade without considering any of the under-the-hood changes. This isn't meant to be a bug report, just letting anyone else on an Arch-based distro know that you should just wait for the maintainer of the AUR package to do it properly instead of trying to force an update yourself.
(1.88 MB 1800x1200 68901103_p4.png)

(955.96 KB 900x1200 68901103_p3.png)

(688.61 KB 900x1200 68901103_p2.png)

(632.82 KB 900x1200 68901103_p1.png)

It's been six years since I last tried Hydrus, and I don't know what features it has these days. What would be the best way, in recent versions, to connect images that are more directly related than pure, naive tagging would show, but not worth a tag of their own in something like a "pool:" namespace the way manga pages might get one? For example, just tagging these 2hus with character names and the artist still wouldn't explicitly tie them all together, but they're also not worth a dedicated tag. On most online boorus, like Danbooru, p4 would be the "parent" and the rest "children", but last time I used Hydrus I don't think such a system existed. Before I start using it again, I'm curious what the right solution for such a situation would be.
>>18000 Select all > right-click > Manage > File relationships > set relationship > set all selected as alternates. To see them: select all > right-click > Manage > File relationships > view file relationships.
>>17997
>Looks pixelated and blown up
Screenshot needed.
>>17999 The AUR package has been updated, just werks on my machine. Qt6 seems to be functioning as well.
>>17994
>File Import Options and Tag Import Options are now merged into one button and one dialog across the program.
Nice. Could you look at maybe adding the ability to change the default tag import options within that menu as well? As it is now, choosing custom tag import options shows customization options, but choosing default tag import options does not (blank window). This should fix (some of) the concerns in >>17943
>>17996 Moar info on the Qt6 version failing to launch. I found the "hydrus_crash.log":
>Traceback (most recent call last):
>File "hydrus/hydrus_client.py", line 24, in <module>
>from hydrus.client.gui import QtInit
>File "PyInstaller/loader/pyimod02_importers.py", line 493, in exec_module
>File "hydrus/client/gui/QtInit.py", line 33, in <module>
>from qtpy import QtCore as QC
>File "PyInstaller/loader/pyimod02_importers.py", line 493, in exec_module
>File "qtpy/QtCore.py", line 15, in <module>
>ModuleNotFoundError: No module named 'PyQt5'
Just had a problem where anything modifying client.mappings.db suddenly became excruciatingly slow: trying to import just one file with tags would lock the DB for ~5 minutes, PTR syncing would just lock the client indefinitely until force-killed, etc. Even tried ignoring it for ~2 days to see if it would fix itself, but the client never unfroze on its own, and PTR 'mappings' sync never progressed past the 8584/8592 point where it was stuck. Wasn't a hardware problem, SSD activity was over 100MB/s both reading and writing the entire time (which did drop the drive's estimated remaining life from about 89% to 86%, because whatever the hell it was doing wrote like 12 TB to the drive). PRAGMA integrity_check; on the db returned 'ok'. Tried a few things and what ultimately fixed it was vacuuming the affected database (database > review vacuum data > select the wonky database > vacuum). Took about an hour at ~15MB/s, and that fixed everything right up. Haven't got a clue what went wrong in the first place, but apparently the moral of the story is to defrag your databases from time to time lads, it could save your life.
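For reference, if anyone wants to do the same check and defrag by hand, it's plain sqlite with the client shut down (a sketch; adjust the filename/path to your own db folder, and note VACUUM needs free disk space roughly the size of the file):

import sqlite3

# run against a *closed* client, ideally on a copy/backup first
conn = sqlite3.connect('db/client.mappings.db', isolation_level=None)
print(conn.execute('PRAGMA integrity_check;').fetchone())  # hopefully ('ok',)
conn.execute('VACUUM;')  # rebuilds and defragments the whole file; can take a long time
conn.close()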
I had a hiatus from hydrus for a few months. I had to delete a gallery downloader query for blonde when I came back this week because it incorrectly said it was done only 15k images into the search (there's 22k images to grab on sankaku). I re-added the query, but it won't even get past 24 URLs found from sankaku now in search log. I feel like I dealt with something similar to this in the past but can't remember how to fix it. Any ideas? No settings changed since I left on my end. I tried updating and that didn't fix it either.
>>17601 When will it support downloading 4plebs threads?
I've been trying to make audio work on linux, but I haven't had any luck, any file with audio is simply muted. Any idea on what it could be? Is there any way for me to see a log or a console that outputs what's happening?
Still not sure how best to utilize collections for a series of ordered images, but it seems like I found a solution that's basically the same as how a series of sorted images is displayed in a file browser, using the original filenames and alphabetical sorting.
>Import a set of ordered images while autotagging them by the filenames in the namespace "filename:"
>Tag them all with a specific random tag under the namespace "set:", so long as it's unique to that set.
>Can pull up the full set with that tag, even if I forget it, as long as I find one image with the set tag
>Sort by namespaces -> Custom -> filename
And voila. I now have ordered sets of images, based on the alphanumeric order of the filenames they had before being added to my hydrus database. I figured this out as soon as I ran into my first set. Is this a well known function? It seems like people have been complaining about ordered files for a long time. The only problem I have with the method is that every time I want to sort by the filename namespace, I have to manually enter it after selecting "Custom". Suggestion for Hydrus dev: please make it so that you can add custom namespace sorts to the dropdown menu for sorting by namespace, so that once you've decided to sort by a particular namespace regularly, you don't have to enter it every time you want sorted images. First picture is the default sort by filesize. Second picture is the sort by the tags under the filename namespace. I don't know how this would work if you give a single file in a set multiple filename tags, but you shouldn't ever do that anyways.
>>18008 Okay, but this thread has nothing to do with that. If you're using alsa as your default sound system, try typing "alsamixer" in a console or terminal window. You should have full mixer controls. If not, then hit F6 and select a different card. If you find a card that works, edit /usr/share/alsa/alsa.conf and change the line or lines "defaults.ctl.card 0" and/or "defaults.pcm.card 0" to whatever number card you found in alsamixer, and then restart the alsa daemon. If you're using portaudio or pulseaudio as your default sound system, I can't help you.
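So, for example, if alsamixer showed the working device as card 1, the edited lines in /usr/share/alsa/alsa.conf would just be:

defaults.ctl.card 1
defaults.pcm.card 1

(card 1 here is an assumption; use whatever number worked for you in alsamixer)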
>>18010 Sorry, I didn't explain my problem correctly: my audio works fine on linux, it's the Hydrus media player that doesn't. It shows video output and images fine, but it doesn't play back sound. I checked the volume controls and it's at max volume. I was asking if there's a hydrus log file or debug console where I can see what might be happening.
>>18012 Awesome. Thanks.
I have my backup folder in a location I've chosen instead of whatever the default is (I don't know if that matters for this question). If I delete that folder then select update database backup in Hydrus, how will it handle it?
(238.28 KB 1366x768 Screenshot_20220813_025852.png)

(78.54 KB 535x658 Screenshot_20220813_030500.png)

(148.65 KB 1366x768 Screenshot_20220813_030634.png)

>>18008
>I've been trying to make audio work on linux, but I haven't had any luck, any file with audio is simply muted.
I had that problem before. The solution is to play audio and video files with MPV ---> File/Options/Media. See screenshots.
(323.04 KB 1240x535 Screenshot_20220813_034400.png)

>>18009 Take a look at my settings to gather collection/groups and display them in an ordered fashion.
>>17990 Yeah but that uses the api. I mean a scraper for the beta like we have for chan
Hello, I accidentally made a character tag a sibling of a series tag. Obviously that's wrong, and I meant to make it a parent. How the fuck do I remove it? I can't figure it out... All I get is a "CONFLICT: Will be petitioned/deleted on add" message, and then nothing happens after I click apply.
I forgot to post last week, but I have an odd issue with both of the Qt6 releases so far. Everything seems to function fine, but all windows are missing the window manager title bars. It's kind of a pain in the ass because I can't drag or close windows. I'm using fedora/gnome/wayland, which should be close enough to the ubuntu env it's built on. Qt5 does not have this issue.
>>17966 Thanks. The thumbnails looking that much larger is actually 'correct' for 125% scale, I think. One thing is that Qt5 has good support for 100% scale and 200% scale, but not the smaller increments. It couldn't figure out how to scale your thumbs to 125% (so it kept them at 100%), and afaik it mostly figured out the rest with some font-size hacking. Qt6 fixes this by scaling almost everything to any value. Some of the difference you see is also exaggerated in that Qt6 seems to have slightly higher font spacing than Qt5. Unfortunately, in Qt6, my custom widgets, particularly thumbnails and the media viewer, are scaling in an ugly way. In the 495 release I put out, I made it so thumbnails at >100% should look a lot clearer. I am going to try to make the media viewer look nice this week. At the moment it draws at 100% and then scales up out of my control using nearest neighbour, so things like the zoom percentage are wrong too. I'll just put some time into it. Please let me know how things go in future versions. Ideally hydrus should scale to 125% if your OS is UI scaled that way, although I have some mixed feelings on the whole subject, so if I ever discover a 'force 100%' mode, I'd be very happy to add a checkbox or QSS stylesheet option or whatever I need to.
>>17967 Thank you, this is a really interesting pair. I've just had a close look at how the similar files data is generated for each, and I can't exactly explain it. I think what's happening is down to bad luck: in a technical way the images are different, even though when you zoom out they look very much the same. Both are fairly high res, and if you zoom in a lot, ignoring the little promo banner one has, the really high res one is much clearer on the stippled background around the edges. Also, the lower res one is very very slightly stretched--shorter on the vertical axis and wider on the horizontal. Normally my similar files system can handle differences this small, so this is a useful example of it not working. I'll see if I can put some more time into this and figure out precisely what is going wrong and how a future 'perceptual hash' can fix it.
>>17968 >>17969 Yeah. I'd love Jpeg XL, it seems like we finally have a cool format. When Pillow or OpenCV support it, or some other easy python library appears that can wangle a jpeg xl file into an array of pixels, I'll add it! https://python-pillow.org/ (https://pillow.readthedocs.io/en/stable/reference/plugins.html) https://opencv.org/
>>17969 Thanks, I may be able to skip a corrupted download like this. I'll see if I can remember the exact point of each interrupted download, and if it happens three times in the same place, that's definitely not a coincidence. You weren't remembering wrong about the auto-converter ideas either. We'll have to be careful how we navigate the changing hashes, but once the duplicate system gets some better merge and file relationship tools, I think we'll be in a good position to do waifu2x and anything else we can think of. As Jpeg XL et al get big, I'll be waiting for an ML to appear to do HDR conversions of all our old files too, so we can finally break out of sRGB.
>>17970 I am sorry, I am not 100% up to date on the nitter parser, but I just did a test and it seemed to get the creator tag ok for me; it was just the username, it looks like. Are you sure your default tag import options (network->downloaders->manage default tag import options, and then 'file posts' or the 'nitter' url class specifically) are set up to get those tags? If you are ok going more advanced, hit up the nitter parser under network->downloader components->manage parsers, then put in a test nitter URL, download, and do a test parse--does it grab the tags there?
>>18020
>when you zoom out they look very much the same.
Does the similar image check always use the full version of each image? I would think comparing the thumbnails generated for each one would easily find dupes like this.
>>17971 That's an interesting idea. I vacillate, but I am mostly keeping the duplicate scoring system on hold for now. I really want to completely replace my current hardcoded rules with a dynamic system built by this 'metadata conditional' object I've been planning, where you'll basically be able to arbitrarily say 'if the file A has xxx metadata, and the file B has yyy, then give score zzz'. When you can set whatever metadata you like, then I never have to write a new hardcoded rule with options and stuff, I can just work on the metadata conditional to allow more clever rules there. (this object will also appear in other parts of the program, like deciding what border colour to give thumbnails and so on) >>17972 >>17992 Do what works for you, but if you meant 'can I search for unnamespaced tags', then you usually can. Stuff like 'system:number of tags' lets you specify a namespace, and if you leave it blank, it'll search unnamespaced exclusively. Most of the namespace selecting parts of the program allow this. One big exception though is searching, which I mean to 'fix' soonish. If you search for 'gun', the program will also search for files with 'series:gun' or 'creator:gun' as well as unnamespaced 'gun'. >>17974 I am glad it isn't that dupes are being assigned randomly. I am going to keep your symptoms in mind, and I would like you to keep watching and see if you can spot a specific example of something going wrong where you know that file A went to file B, and then file C was then with B or whatever the particular scenario is. I'd like to hear what happened along with what you thought happened. By default, URLs are merged in the duplicate filter, from worse to better. You can check these options with the 'edit default duplicate metadata merge options' button on the duplicates page. Normally this isn't a huge problem in terms of downloading, but if your situation is mega complicated, maybe there are some odd conflicts going on here. If a file has multiple URLs on the same domain, hydrus usually skips any quick 'ah yeah, I know this URL, I already have that file, I don't need to download it' checks since it can't be sure with the multiple URLs; it then downloads the file so it can check for real whether it has it or not. Maaaaybe if you have file import options set not to exclude previously deleted files, then a tangle of merged URLs is somehow causing some files to be reimported after delete? I am not sure. I am sorry to say your situation sounds complicated, and at this point I think I have to recommend your best solution for now is to try to simplify things as much as possible so when something goes wrong it is more easy to pick through what happened. Let me know how you get on!
>>18022 It is a bit more complicated, it basically generates a mathematical impression of the component waveform shapes of the image(s) and compares those very very quickly. If you have ever done fourier transform, it is basically the same thing: https://apiumhub.com/tech-blog-barcelona/introduction-perceptual-hashes-measuring-similarity/ The shapes being compared are 64x64, 32x32, 8x8, that sort of size. So normally minute changes at 4000x4000 scales are immaterial. I looked and the scaled versions of the images were the same in some areas and different in others (excepting the obvious difference of the corner with the banner, which shouldn't be a huge deal), and those differences were passed on to the DCT even moreso. So my guess is that there is a funky mathematical thing going on when it generates the DCT, where the blurry vs clear shapes in the background are just on the knife-edge here. I just did a bit more searching, and the files do match, but at hamming distance 10. 'speculative' in hydrus is distance 8. So the phash is failing a little. I'll do some tests and work on this when I can find some time. Maybe the solution is to scale prospective images to 1024x1024 or similar first, something the human eye can see better, and that will remove some high frequency waveforms that are interfering with the low frequency ones or something??? I am not a super expert at this, and I need to examine these more to really talk better about it. >>17977 Yeah my general thing is 'don't search for 10,000 images in one page' since things do start to chug/log out around there, and it doesn't make for a good workflow anyway. That is a LOT of images to go through, so no need to put it in one search. Once system:everything goes above that value, it loses its value as a search predicate, and I hope that most users who have been using the client that long have also learned to use real search terms by then too. There's an implicit search limit of I think default 10,000 under options->search too. If you want to try out 50k searches or whatever, you can disable it there. >>17980 >>17984 Yeah, try thumbnail right-click share->export->files at that point. The discord DnD hack has to export the files to a temp dir every time you start any file drag and drop, so it isn't appropriate for large file selections.
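Re the perceptual hash talk above: if anyone wants to play with the general idea, the textbook DCT phash recipe (not hydrus's exact code) is small enough to sketch in python with OpenCV--scale down, take the DCT, keep the low-frequency corner, threshold against the median, then count differing bits:

import cv2
import numpy as np

def phash_bits(path, hash_size=8):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (hash_size * 4, hash_size * 4), interpolation=cv2.INTER_AREA)
    dct = cv2.dct(np.float32(img))
    low = dct[:hash_size, :hash_size]  # the low-frequency 'shapes' of the image
    return (low > np.median(low)).flatten()  # 64 bits

def hamming_distance(a, b):
    return int(np.count_nonzero(a != b))  # 0 = identical hash, 8 = hydrus 'speculative'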
>>17981 >>17982 >>17985 >>17986 >>17987 Sankaku limits their search results to about 20-25 pages for non-logged-in users. You can test this in your web browser yourself by going to page 25 on a set of results and then going to the next one. I think it 404s or just gives empty. They have changed the rules and precise limits several times over the years. If you are logged in you get more, I think 1,000 files (50 pages?). As for the beta >>17983 I'm afraid I don't know much about it. I tried to help the guy making the API downloader several times, but it is geared up in a complicated way with tokenised URLs that time out and things. Hydrus's downloader just isn't geared for that sort of advanced queuing yet. Sankaku have always had a lot of bandwidth problems and set their site to be difficult to automatically access. I recommend you look for your stuff in other places as much as possible.
>>17988 1) As often as you like, depends on your schedule. I recommend people not try to update more than 10 versions in one go (e.g. installing v499 on top of an existing v489 install), so maybe once every ten weeks is a convenient soft cap? But there have been guys who haven't updated for four years, no worries. Once a week is only for enthusiasts or people waiting for a particular fix or new feature. 2) Yeah, you basically just install/extract the new version on top of the existing one, and on your first boot, the software updates everything. Check the help here for more info: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#updating And let me know if you don't understand anything. There's important information about backing up there too.
>>17996 >>18004 Damn, that's a shame. That problem with PyQt5 is it trying to load that module as the last resort, since it can't, presumably, load PySide6. Like the Win 7 users, you'll want to think about whether you want to stop updating, find a newer OS, or keep running but from source. The official switchover will be in 2-4 weeks I think. I will be updating the help and requirements.txts in the coming weeks to talk about this more.
>>17997 >>18002 Thanks, I am going to work on that specific problem this week as part of Qt6. Please give 496 a go and let me know if you are fixed. I did my homework last week and learned about how UI scaling is supposed to work in Qt, and in Qt6 it is neat. Thumbnails should look better for you in 495 already.
>>17998 Deepest lore.
>>17999 Sorry to hear about the trouble. Normally I would say I am surprised this did not work, since generally if the database updates fine, then multiplat is no problem at all. However, I don't know how the AUR works, so I can't talk too cleverly. Sounds >>18003 like maybe it got fixed? Let me know if you can't figure this out and would like some help putting your session or whatever back together. A clean install is always an ok idea, at the very least to rule out problems with dll/so conflicts: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs
>>18003 Yes, I absolutely want a nice easy link from that dialog to the actual defaults you are going to be using. It is unfortunately tricky, since by its very nature the downloader often doesn't know what defaults it will be loading until it actually has a URL to load them for, but at the very least I can let you launch the 'edit defaults' dialogs from this dialog, just so it isn't a chore to navigate all this.
>>18005 Damn, thank you for this information. I had written off vacuuming as not particularly useful except in the most busy of databases. Did you happen to do some ANALYZE work as well, or just the VACUUM? Could you also estimate how old your database is?
>>18006 Sorry, as here >>18025, Sankaku limit the number of gallery pages you can search. To get around these sorts of filters, you can sometimes add in metatags like 'id<123465' to sample a different section of the long ribbon of results you want. Not sure if sankaku provides that. Another solution is to add an additional tag and its negation: evangelion + shinji, evangelion + -shinji splits the evangelion search into two smaller parts that can be searched separately.
>>18007 Assuming 4plebs doesn't have any anti-crawler trickery or Cloudflare gates, I should think it can do it now, but it would need an advanced user to build a downloader for it. EDIT: It looks like there is full support already: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/4plebs Can't speak to how well it works, but it looks like an altered clone of the 4chan downloader. This isn't something for super new users, but if and when you get some time with hydrus, you'll learn that you can download the pngs on that page and import them under network->downloaders->import downloaders.
>>18009
>Is this a well known function?
I originally wrote this sorting system for a 'pages:x' namespace, for comics with page:3 and chapter:2 kind of tags. Several very advanced users and myself have played around with these concepts throughout development, but I have never been really happy with my implementation. It works in small controlled groups, but when you get to real world data it just gets so fuzzy and messy that it is a real headache to handle with such a utopian system. I have made the system more forgiving over time (an example of a complication of real world data, and for your spoiler, is a double-wide manga page with page:16, page:17 tags), with logical hooks and tag filtering to catch more complicated situations, but I think it will need a lot more time and thought to be more useful, especially to regular users. Ultimately I want to revisit this seriously when I get to CBR/CBZ support and the file relationships update. Hydrus is going to get 'hard' file connections one day, where pages specifically say in the database what comes next in a multi-file series, and that will need a bunch of new UI to manage it. I'm hoping that regular collections will benefit from some of that new UI and tools. Please keep playing around with the system and let me know what you think. Bear in mind I don't like it, ha ha ha, so any kind of off the wall idea is ok, but some of this tech gets so complicated that even 'little' changes would need too much work to justify. Note btw, since you are doing filenames, that this sorts numbers in a clever human fashion, so it doesn't need zero padding. It is fine with zero padding too though. 'image3.jpg' should sort before 'image20.jpg'.
>>18014 It should be fine and basically just make a new one at that location. Looking at the code, if the destination folder itself is missing, it changes its message boxes to say 'create a new one' and stuff. Let me know if you try this and it runs into trouble. It doesn't check that folder regularly, only when you say to 'update database backup', so if you need to delete it to temporarily free up space or something, no worries.
>>18018 What you want to do is open 'manage siblings', and then type one side of the sibling into the left input and hit enter so it is in the 'set tags to be replaced' box. This will load up all existing sibling pairs that contain that word. You've seen this already with the 'CONFLICT: ... on add' line. Instead of trying to 'add' any of the stuff you entered, double-click above on the row that appeared. It should become a (-) deleted-once-you-ok-the-dialog pair. Then ok the dialog, and in a few seconds it should disappear. Sorry for the confusion, I'll add an explicit delete button. >>18019 Thank you for this report. I am very sorry that I cannot help much--I am not a Linux expert, and I do not know about the particulars of the different window managers. My best guess is that Qt6 is importing some style you have 'more effectively' or maybe it is trying to import a system style but then bugs out. I assume you don't have missing title bars on any other programs, as in you don't ever want that anywhere? A user was telling me that some system Qt styles that were built for Qt5 are not built for Qt6 yet, so if you have a system Qt style, maybe it bugs out with Qt6? If you have another Qt6 program (I heard OBS updated to Qt6 recently), does it have any similar trouble? I know that Qt6 isn't supported on Ubuntu 18.04, so if your Fedora is close to that, could that be the problem? I make the Qt5 build on 18.04 but Qt6 on 20.04. Title bars are an even odder one since I really have very little control over them; it tends to be the OS that draws all that stuff.
>>18028
>that this sorts numbers in a clever human fashion, so it doesn't need zero padding.
Oh thank god. I've been doing that shit for so many years.
>>18030 Me too.
Can you select multiple files and view the alternates of all the files?
It's been two weeks and I've only tagged about 370 out of nearly 30000 files. My autism is not great enough for this, and I'm ready to start syncing with the PTR. But there's this very scary warning when I attempt to do so:
>6GB of bandwidth
>50GB of database growth
Syncing tags with the PTR should just be checking hashes against each other and then adding the PTR's tags to any files I have that are also in the PTR database, right? The guide says it downloads all existing tags to my computer to do this. I assume it downloads the hashes too, because it would only make sense that it needs those to connect to the tags. But 50GB? 54 million files? Jesus. I have enough space, just barely with some wiggle room to spare, but I'm not sure about that bandwidth, as I share internet with other people. That's only for the initial download, right?
I forgot an important question before I start using the PTR. Do tags generated by the PTR end up in downloader tags or my tags, or does a new section for PTR tags appear once you start using it?
>>18033
>Syncing tags with the PTR should just be checking hashes against each other and then adding PTR's tags to any files I have that are also in the PTR database, right?
That's what I originally thought too, but no. It adds the tags for every file to ever be in the PTR to your database, which is a lot. Millions. It does bulk up your database by a lot, but that's the tradeoff for having many (but not all, or even close in my experience) of your files tagged "automatically" so that you don't have to do it yourself. I eventually got rid of the PTR because it was just too big of a burden for the gain I got out of it. I'd only recommend it if:
>your files are mostly the kinds of things you'd find on boorus
this is the bulk of what's tagged in the ptr, mostly from scraping those very boorus for their tags
>you have an ssd
a lot of processing and updating happens constantly, and you need a fast drive or you'll just keep falling farther and farther behind unless you leave your computer on to process 24/7
>your collection is constantly growing at a decent speed
if it's not, then I'd personally just do a one-time sync. No sense constantly syncing with the ptr if you already got the tags for the files you have, and you add new files slowly enough that you could easily just add the tags for them yourself
It didn't end up being worth it for me, and most of the benefit I got from the ptr was achieved by just downloading from the boorus that many of the ptr's tags came from anyway. But if the above sounds like you, then try it out. A word of warning though: there are certain parts of the ptr that get baked into your database once you add it, and there isn't currently a way to remove them even after you get rid of the ptr. Hydev said he's working on getting everything fully removable, but I don't know when that'll be, so your db will be bigger (not by a huge amount, but still) even if you change your mind later and remove the ptr.
>>18034 the ptr has its own section, like all remote tag repos do.
It just stopped.
>Bandwidth limit exceeded
>513MB today
What? I thought it was doing first-time updates first? Have I already started getting the PTR database? And if so, only half a gig a day out of 50GB? It's going to take four months minimum just to sync, before adding on the time for new tags and hashes added in the meantime.
>>18035 A one-time sync sounds like the best option. Maybe resync at long spaced-out intervals. I'm very picky with my images, so my collection doesn't grow fast, although part of that was caused by a reluctance to save things that I would then have to dedicate time to sorting, so I'll be grabbing a lot more pre-tagged things from boorus now that I have hydrus. I only have about 30000 files over about 8 years, so that's an average of 10 files a day, and I'd wager several thousand of those at least are files particular to my computer that won't exist in the PTR. Probably at least a third are from boorus, and many more are likely on boorus despite me not sourcing them from a booru, so I can just reverse search new images from untagged sources to get a booru version with tags.
>A word of warning though: there are certain parts of the ptr that get baked into your database once you add it, and there isn't currently a way to remove them even after you get rid of the ptr. Hydev said he's working on getting everything fully removable, but I don't know when that'll be, so your db will be bigger (not by a huge amount, but still) even if you change your mind later and remove the ptr.
The current warning is 50GB as of mid 2021. How much of that is permanent? Once I get all the existing files I have as tagged as they're going to be by the PTR, I'm definitely removing it if it's that simple.
>>17994
>As a reminder, afaik Windows 7 cannot run Qt6
Have you tried any workarounds? Here's something: https://forum.qt.io/topic/133002/qt-creator-6-0-1-and-qt-6-2-2-running-on-windows-7/2
It probably doesn't matter to me, but out of curiosity, why are there "deleted mappings" among those downloaded from the PTR?
>>18037 Different anon here, supposedly the QT6 version of qBittorrent needs Windows 10+, but I was able to use it on Windows 8.1. There were some visual glitches but everything functioned.
>>18025
>Sankaku limits their search results to about 20-25 pages for non-logged in users. You can test this in your web browser yourself by going to page 25 on a set of results and then going to the next one. I think it 404s or just gives empty. They have changed the rules and precise limits several times over the years. If you are logged in you get more, I think 1,000 files (50 pages?).
Thanks for the response. Looks like hydrus is giving up after 2 pages of results though (also, I had my account logged in before and got waaay more than 50 pages' worth for this tag, since I was at 15k images before I went on hiatus). I think I'm using a non-default sankaku parser from cuddlebear or someone that bypassed an issue the default parser had. It does seem that other queries I've never even tried before from sankaku are also getting stuck after searching just 2 pages. And actually, now that I look, it looks like my other hydrus queries for sankaku are working past this page limit. I'm leaning toward this being some peculiarity with being at 15k images and then my gallery downloader getting out of sync from all the new images that were added while I was on hiatus? Just spitballing. Also if
>>18025 Ignore the second paragraph, that was supposed to be deleted since it was mistaken.
(164.23 KB 549x413 ancap.png)

>>18026
>you'll want to think about whether you want to stop updating, find a newer OS, or keep running but from source.
Reporting back: I upgraded to Debian GNU/Linux 11 (bullseye) and the Qt6 version launches fine.
(108.25 KB 928x1161 x.PNG)

>>18027
>Did you happen to do some ANALYZE work as well, or just the VACUUM?
Nope, just the vacuum.
>Could you also estimate how old your database is?
Like 4 months old, and synced to the PTR about a month ago. The weird thing was that it didn't get progressively slower over time or anything; it went from working fine one day to hitting some critical threshold of fragmentation, or something, where it had to shuffle around the entire database every time it wanted to write anything. Oh, and reading was fine, as long as no writes were going on. I did try turning on profiling shortly before vacuuming, but it doesn't seem like it captured anything interesting. The only long operations logged (other than the vacuum itself) were waits for a thread lock, probably while some previous SQL operation that didn't make it into the log, somehow, took forever. I do have a backup of the fucky DB I could roll back to if you want me to try anything specific on it and see what happens.
How many files can Hydrus handle? I have only imported 400k files so far, but it took more than 3 days with the client constantly crashing. I have 5 million (~4.5TB) files in total. My images are already tagged, so all Hydrus needs to do is import them. Also, is it possible to run it in headless mode?
Jesus, this processing sync takes forever. Only got 500MB of the PTR DB. It took a short while to jump to 30%, half a day to jump to 70% and then all night to get to 97%. Is the length of time each section of the processing sync takes increasing linearly? Will it even be possible to sync once I've gotten a good chunk more of the PTR?
>>18044 Additional information: I use the API for importing and tagging because file > import files lags and freezes when importing a directory of more than 150k images.
>>17919 Added the search page too. You need to use the url importer instead of the simple url downloader otherwise it just defaults to hydrus' parsers.
>>18029 Mr. Hydrus Dev, say an image has the tag "patchouli knowledge". How do I set a shortcut for the following action?
>open a new search page for "patchouli knowledge" by ctrl + left-clicking the tag "patchouli knowledge"
Alternatively, I would like to modify the context menu when you right-click a tag so that "open a new search page for 'tag'" is the first option shown, just like how "open link in new tab" is the first option in a browser.
How can you open the manage tags window from the media viewer with the keyboard? If you can't, then take this as a suggestion to add it, since being able to edit tags on a sequence of images without touching the mouse would be a godsend.
>>17955
>The next expansion of this system will be to make it accept rules and work efficiently en masse, and for namespaces in particular for exactly the sort of renaming you want here.
ehhh... is it okay to rename the "creator" namespace to "artist" using a SQL query? I just went inside "client.master.db", went to the "namespaces" table, and renamed the namespace with "namespace = creator" to "artist". I tried using Hydrus afterward and had no problem. Did I fuck up something? Is renaming a namespace directly inside the database the way to go?
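For reference, the whole edit from python is just this (a sketch using the table/column names described above; client closed, and as always, backup first):

import sqlite3

conn = sqlite3.connect('client.master.db')
conn.execute("UPDATE namespaces SET namespace = 'artist' WHERE namespace = 'creator';")
conn.commit()
conn.close()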
(50.64 KB 1022x576 aaaaaaaaaaaaaa.jpg)

>>18050 >I just went inside "client.master.db", went to the "namespaces" table, and renamed the namespace with "namespace = creator" to "artist". That's bold and borderline insane.
Good day Mr. Dev, I come bearing a niche bug report. I'm running a Wayland compositor on Linux, and when I try to play media with mpv, the mpv instance appears in another window instead of being embedded. Also, it causes the gui to shit the bed a little. I think the issue has to do with this: https://github.com/jaseg/python-mpv/issues/196 I came across a solution that was aimed at GTK, but the same concept apparently applies to Qt as well: https://github.com/jaseg/python-mpv/issues/222#issuecomment-1179721210 This isn't a terribly urgent issue since I don't really use the Linux machine, but when you get the chance, please take a look.
Despite being properly logged into Sankaku in Hydrus, my queries aren't returning images that require being logged in to view. Such as images tagged with contentious_content. I checked the login script using the built in testing feature and Hydrus believes that it is working properly, also the login status under "logged in now?" says yes. Not sure what's going wrong. Any ideas as to how this might be fixed?
>>18053 Which sankaku downloader are you using? The beta downloaders don't work with the standard login script
When downloading via url from sites like gelbooru, and there's Japanese text with an english translation upon mouseover, is there a way to make the translated text appear in the notes for the file downloaded? The boxes with English text are literally called "notes" in the html.
I had a great week making Qt6 work better, including making the media viewer work well at >100% UI scale, and getting the first version of note parsing finished. The release should be as normal tomorrow.
>>18049 F3. You can check and edit all shortcuts in the shortcuts settings menu from the main window.
>>18054 I tried the default "sankaku channel tag search" downloader, which is what I've been using without issues for years until recent weeks. I also tried the Cuddlebear repo's "sankaku chan beta tag search", but as you mention I quickly realized that there wasn't any login script available for the beta website built into Hydrus so I wasn't surprised when that didn't work.
https://www.youtube.com/watch?v=2jzugX2NMnk
windows
Qt5 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v496/Hydrus.Network.496.-.Windows.Qt5.-.Extract.only.zip
Qt6 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v496/Hydrus.Network.496.-.Windows.Qt6.-.Extract.only.zip
Qt5 exe: https://github.com/hydrusnetwork/hydrus/releases/download/v496/Hydrus.Network.496.-.Windows.Qt5.-.Installer.exe
macOS
Qt5 app: https://github.com/hydrusnetwork/hydrus/releases/download/v496/Hydrus.Network.496.-.macOS.Qt5.-.App.dmg
Qt6 app: https://github.com/hydrusnetwork/hydrus/releases/download/v496/Hydrus.Network.496.-.macOS.Qt6.-.App.dmg
linux
Qt5 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v496/Hydrus.Network.496.-.Linux.Qt5.-.Executable.tar.gz
Qt6 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v496/Hydrus.Network.496.-.Linux.Qt6.-.Executable.tar.gz
I had a great week. There's more Qt6 work, and the first version of note parsing is finished.
UI Scaling and Qt6
The media viewer now understands UI scale and will draw images with all available pixels and state the current zoom correctly. Previously, if you were on 150% UI scale, it would draw at the smaller 100% size and then nearest-neighbour scale up. Now it compensates for the UI scale completely, drawing at that size, so if you look at a 4k image on a 4k screen, it'll look the same no matter what your UI scale is. Some funny math went into this, particularly as I stitched the image tiles together through several different coordinate spaces, so if you are >100% UI scale and see a grey or white line flickering anywhere as you pan an image around, let me know your details.
After last week's nicer thumbnails at >100%, a user let me know that the thumbnail banner text was actually looking pretty bad at 100%. Looking at screenshots of the two situations made it obvious. I chased down what was going on, which turned out to be an annoying technical problem with no easy solutions, and figured out an ok fix. If all your monitors are 100% UI scale, the thumbnail banner text should now be antialiased. It isn't perfect (and even looks blurry with some semi-transparent background colours), so I may revisit this, and the underlying Qt issue causing the ugly font may get fixed one day anyway. The rating shapes (circles, squares, stars) are now antialiased too, and I re-did the star shape to look better.
note parsing
This is still for advanced users, but if you know the downloader system, feel free to try it out! Note parsing works now! All the downloaders have note import options, there's a defaults system, and it is all wired into the parsing system. The main remaining limitation is that the parsing system still can't handle multi-line text, but if you make a note parser and download, the note now gets added to the file. If you are a downloader maker, please try it out and let me know where it falls short. I intend to get multi-line parsing working in the coming weeks, and then I will update the default parsers for hydrus to grab artist notes.
For the defaults, I updated the 'manage default tag import options' dialog to also do note import options, so you can grab and rename notes from specific sites in different ways. The initial defaults I set are fairly simple 'get everything' options. I would like to see how these new systems shake out in the real world, especially as we deploy new downloaders to many users, and maybe I will revisit them.
full list
- note import options:
- the client now has a system to set default note import options. it works exactly the same as default tag import options and shares the same UI, now named _network->downloaders->manage default import options_. you now set tag and/or note import options for a particular domain. I don't think you'll have to touch the note defaults until this system is really going and we learn more about what we want. I have made the initial defaults get all notes, with some simple conflict resolution that won't discard any data
- all url pages, watchers, watcher pages, gallery queries, gallery downloader pages, and subscriptions now have note import options. by default, they are 'default'
- the edit subscription dialog now has a button to set note import options _en masse_
- all the behind the scenes stuff that connects and powers these systems is done. note parsing now works! advanced users, especially downloader makers, are encouraged to play around with this for real. the remaining hurdle is still multiline parsing support
- notes now have a cleaning system before they are saved. to start with this week, they are now trimmed of leading or trailing whitespace or newlines
- .
- Qt6:
- the media viewer now draws correctly on UI scaled displays. If you are at >100% UI scale, it will now render images beautifully, using all available pixels, and state the correct zoom percentage. if you look at a 4k image on a 4k screen, you now see 4k, no matter the UI scale. previously it was rendering at 100% UI scale coordinates and being nearest-neighbour scaled up
- after several sad hours banging my head against font metrics, I finally discovered the magic flag needed and have improved the font quality of the thumbnail banners when you boot the client with only 100% UI scale monitors. should be anti-aliased now, although if you have a semi-transparent banner colour it may look slightly jank for reasons I still need to investigate
- I fixed the 'don't process the click that activates a media viewer into the shortcuts system' hook for Qt6 (and it still works on Qt5). it is a little smarter now, too
- .
- misc:
- the new import options button is now an arrow-menu button. the secret right-click menu is no longer hidden. I also did some behind the scenes stuff to make it so all these arrow buttons spawn their menus on your cursor when you click, rather than hanging off the bottom-left corner of the button proper
- rating stars of all shapes are now anti-aliased
- greatly improved the shape of the 'star' rating star
- moved the 'checker options' button on watcher highlight panels down a bit. maybe it'll get integrated into other import options one day--I am still thinking about it
- archive/delete filters will not present 'delete from hard disk' as a final choice if the current domain is 'all local files'. I thought I fixed this a couple weeks ago, but there was a legacy issue
- fixed some real jank logic when setting the tag domain in autocomplete dropdown widgets. this got messed up a little with recent updates to file and tag domain searching. I reworked the signal path and fixed some weird update bugs and situations where you could seemingly set 'all known files'/'all known tags'
- .
- boring code cleanup:
- refactored all zoom code from the media viewer canvas to the media viewer container. the canvas no longer manages zoom numbers or container size
- refactored all container-position-tracking code from the media viewer canvas to the media viewer container and cleaned it
- updated the media viewer container to recognise UI scaling and adapt the stated zoom to reflect the raw pixels on screen, not the device independent coordinate system
- updated the native animation widget to recognise UI scaling, adapt its underlying renderer resolution appropriately, and draw that super-resolution frame to the canvas
- updated the static image widget to recognise UI scaling, adapt its tile coordinate system and resolution appropriately, and scatter the ethereal powder of the cleansed ancients across the QPainter in order to stitch the arbitrarily zoomed super-resolution tiles together on a sub-pixel canvas with no visible seams
- the animation and static image widgets also recognise changes in the current UI scale--if the current monitor changes or you move across monitors with differing UI scales
- updated some old pubsub update calls in the canvas code to Qt signals
- cleaned up some old const definitions in canvas code
- refactored and simplified some test methods related to the canvas container and media show actions
- cleaned up some old painter code and hacks to simpler alternatives
- cleaned a tangle of file/tag domain update code in the autocomplete dropdowns
- cleaned up some options getting/setting methods in the downloaders
next week
After three weeks at this, I am now super behind on other work. I would like to just kick out some normal bug fixes and so on, keep multiline text parsing in mind, and also turn to some long-delayed serverside admin workflow improvements. I would also like to switch the installer exe over to Qt6 next week.
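For the technically curious, the heart of the media viewer fix is the devicePixelRatio handshake Qt expects of custom painting code. A minimal sketch of the general idea (not hydrus's actual tiling code):

from qtpy import QtCore, QtGui, QtWidgets

class CrispImageWidget(QtWidgets.QWidget):

    def __init__(self, path):
        super().__init__()
        self._image = QtGui.QImage(path)

    def paintEvent(self, event):
        painter = QtGui.QPainter(self)
        dpr = self.devicePixelRatioF()  # e.g. 1.25 on a 125% monitor
        # scale to the widget's physical pixel size, not its logical size
        scaled = self._image.scaled(
            int(self.width() * dpr), int(self.height() * dpr),
            QtCore.Qt.KeepAspectRatio, QtCore.Qt.SmoothTransformation)
        # tag the result with the same ratio so Qt does not rescale it again
        scaled.setDevicePixelRatio(dpr)
        painter.drawImage(0, 0, scaled)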
Any chance of getting gui scaling as a setting that isn't linked to OS scaling? I like 125% best most of the time, but it makes Hydrus way too big for me with qt6
>>18058
>Yep, you're not the only one having issues downloading from sankaku.
Except in my case it isn't contentious content, it's just normal content you can view when logged out (I'm logged in anyways, though). I hate to ask, but I don't have enough time to learn how to make/fix a parser currently. Is there anyone who could take a peek at the sankaku one to give us an idea what might be happening? I know one of the community members made an alternative sankaku downloader a few years ago (that one also doesn't work, from my testing).
>>17957 >Hmm, this generally isn't supposed to appear in any case. Normally, when a trashed file leaves the trash, all its thumbnails are removed from view. You have to do some advanced stuff to see these files normally. Could this session have been an old backup or something, loaded from before when the thumbs were deleted? >Could I have made it more obvious somehow that these files were permanently deleted? I know that stuff is buried in the right-click menu labels, but something more obvious? Maybe instead of the trash-can icon, I could show a more permanent 'this is not available any more' kind of icon? I didn't restore from a backup if that's what you're asking. I am noticing though that my sessions aren't saving. For example, I loaded one of my saved sessions last night, added a few queries, shut hydrus down, and got back on and it's like the session rolled back to before I had added the queries. I suspect this is part of the issue.
>>18063 In the meantime, I'm going to start manually saving my sessions before shutting down to work around the issue for now.
(319.71 KB 1000x713 hello.png)

>>18059
>Note parsing works now! All the downloaders have note import options
Please don't forget those of us who only use Hydrus offline and need to import/export notes from/to the hard drive.
>>18037 >>18056 I really hope you'll look into making it compatible with W7, I don't want to stop updating but I'm not moving away from the Seven.
So I know how much of a long shot this is, but I just saw https://github.com/kimono-koans/dano ; this calculates a checksum of the internal stream; a hash of the media (and not the file) using ffmpeg. 4chan recently decided to mangle metadata on webms; using this, the file hash changes but the "media" hash remains the same, which means Hydrus could mark it as a dupe and not store it twice. I don't know if this should be re-implemented within Hydrus (because this sounds heavy AF, although you're kinda doing some heavy ffmpeg magic already with thumbnails?), but could this be an optional additional hash? I could see something like a script looping over files with dano, writing the media hash with Hydrus' API. Again, I know, long shot and all, but I really like the idea. Thanks for reading!
>>18067 Reading dano's code, I realized that it basically only does:
ffmpeg -i <file here> -codec copy -f hash -hash <sha256 or whatever> -
Could this be implemented as an optional field on file import (as I know it'd take a while and a ton of resources on huge files)?
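In python that wrapper is tiny--a sketch mirroring the command above (output looks like 'SHA256=...'):

import subprocess

def media_hash(path, algo='sha256'):
    # stream-only hash: container metadata is skipped, only codec packets count
    result = subprocess.run(
        ['ffmpeg', '-v', 'error', '-i', path, '-codec', 'copy',
         '-f', 'hash', '-hash', algo, '-'],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()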
>>18001 Is there a way to see which files have alternates without going through four tiers of menus? It'd be nice to have a UI element to say so. Maybe in the infobar at the bottom where it lists filesize, resolution, etc. of selected files, there could also be a part that says "has X alternates".
>>18060 Thanks for all your hard work devanon, but I prefer the old 'star' rating... the new one is somewhat smaller, so it can be a bit difficult to tell whether the rating was filled in if the colour doesn't contrast well with the background.
>>18060 Is the Qt6 version stable and bug-free now?
>>18026
>Thanks, I am going to work on that specific problem this week as part of Qt6. Please give 496 a go and let me know if you are fixed. I did my homework last week and learned about how UI scaling is supposed to work in Qt, and in Qt6 it is neat.
496 fixed the issue. Thanks, Dev!
The laptop I use for Hydrus is from a generation before they added hardware acceleration for SHA. Since Hydrus uses SHA256, how much of a performance increase do you think I would get from a computer with SHA hardware acceleration?
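You can measure the raw number on any machine--python's hashlib typically goes through OpenSSL, which uses the SHA extensions where the CPU has them--so a rough benchmark like this on both machines would answer it directly (a sketch):

import hashlib
import os
import time

data = os.urandom(256 * 1024 * 1024)  # 256MB of random junk

start = time.perf_counter()
hashlib.sha256(data).hexdigest()
elapsed = time.perf_counter() - start

print(f'{len(data) / elapsed / 1e6:.0f} MB/s')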
Where do we find the logs for when a gallery downloader says a domain has several serious errors? Sorry if this is a stupid question
>>18067 Damn, I wrote an entire post that was basically >>18068 before realizing someone else already said it. I think it would be a good idea, maybe use murmur3 to avoid confusion with sha256 which is already used in Hydrus. Just throwing out the checksums on videos/animation files and using the media hash instead could cause confusion and make it harder to search for the files online, so I think these media hashes should be used to detect exact duplicates of video/animation files.
>>18032 Not yet. I hope to have more file duplicate and file alternate viewing options and UI in general the next time I do a big push on this system. This is highly requested but will be a huge job, so I expect it to happen end of this year or in 2023.
>>18036 Don't worry about the 512MB rule. That's just a little thing to make sure your client processes the PTR in smaller bursts, rather than trying to do everything at once. Just let it catch up at its own pace over the next few weeks. And yeah, it'll be something like 8GB, so a couple of weeks of 512MB at a time to download. The stuff you download is compressed, and the stuff you process is duplicated to increase search speed, so it gets bigger than what you download. Bear in mind you are downloading and syncing with 10 years of updates, so don't worry if it takes a month to get there. Just let it do its thing. Warning however: we are now up to 1.6 billion mappings on the PTR, and it is probably 65/70GB of db space. If your SSD is tight on space, you might like to pause your PTR processing once it is, say, 50% through, and only resume it when you move to a newer computer/drive/whatever. The 'permanent' part that sticks in your database is about 8GB or so. I will have tech to remove this in time.
>>18037 >>18039 >>18066 I don't have a Win 7 machine to test on, so I am mostly working off some forum posts I read a while ago. If hydrus runs ok on Win 7 with some VCRedist packs or whatever, that's great. I'd love to hear from any users in this situation on how they do, and if there is a guide to get it to work, please let me know and I'll update the help. That link looks neat, so if any Win 7 anons want to try it out, please do! Another user told me 'Debian GNU/Linux 10 (buster) is too old' also. And I think "Additionally, Qt 6 has dropped support for Windows 7 & 8, macOS 10.13 & 10.14, Ubuntu 18.04 and all 32-bit operating systems." in my notes is from the official site. When I do this switchover, I will update my help to talk about it, and hopefully make it easier for users to run from source, sticking with Qt5. They should be good to keep getting hydrus updates as long as we never critically need something that is Qt6-only (I'd try to avoid this if reasonable) or if other libraries like OpenCV start also eliminating older OSes.
>>18038 There is a method to dispute a tag on the PTR, and a janitor team that processes those petitions. Those deleted tags are saved and shared to all clients so you won't upload the same bad tag over again. I'll be doing some work in this area in the coming months. It will be my new 'big job'. I want to add tag filters and things to the admin workflow so we can start shaping the PTR's tags a bit more and clear out some of the crap more efficiently. >>18040 >>18041 Thanks. It has always been a funny site, and some of the parsers were similarly loopy at times as they tried to fiddle with the rules. Let me know if I can help with anything else in future. >>18042 Awesome, thanks for letting me know. >>18043 This is very useful, thanks. Yeah, that profile looks fine. Looks like it spent a few minutes waiting for some gonzo database maintenance job to finish, but then was ok. I do not need your backup, but thank you for the offer. I will do some investigation on my end. As you say, maybe there is a critical threshold it exceeds where suddenly some swapspace isn't enough to get over the fragmentation or whatever. SQLite have been rolling out some fairly neat granulated optimisation routines in the past couple of years. I'm hoping they figure out a better incremental vacuum (they have one now, but it isn't great), so I can just do a second of work every five minutes or something and keep things optimised all the time, but we'll see. >>18044 >>18046 I know a couple of users with more than ten million files, so it is usually ok, but when you get near those sizes, some of the systems (particularly UI) get stretched and you'll get a feel for what sort of searches you shouldn't try to avoid lag etc... It is a shame to hear the API can't handle sustained imports. It sometimes falls over when it gets hammered, like it can't find the time to garbage collect and clean up after itself. The Hydrus Companion sometimes runs into the same thing when there are a billion URL status requests it needs to look up. My suggestions are: - With that many files, see if you can split them into grouped batches. I'd generally say hydrus is comfortable with anything up to, say, 25k files. That's when most lag starts to really kick in. 100k is a decent hard limit, and 500k starts to push at various limits like the max size of a page that can be saved to the database for your UI session. - Hydrus is a marathon, not a sprint. We often don't understand how big a million really is. I also often have trouble convincing new users to let the PTR sync slowly, over a month or more. But if you expect to be using hydrus in five or ten years, don't sweat a slow integration. If it takes two months to import twenty years' of files, that's a ratio to be happy with. - Make sure you like hydrus before you commit! I'd hate it if you work really hard importing everything quickly and then discover in three months that hydrus isn't for you. Please make sure you like the program with 400k files first. - If you can, put a little pause in your API downloader. Try a second of pause every ten files, and ten seconds every hundred. That may help hydrus take a breather and free up memory and other resources from all the imports. - The best tool to use for mass imports is the 'import folder' feature, under the file menu. That imports inside the client without much UI, so it won't be trying to display 50k+ thumbs in one page. Try batch feeding your import folder 50-100k files at a time. Sorry if I sound condescending with any of that, I don't mean to. 
Just some of my general experiences in getting into hydrus. Slow and steady wins the race.
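For reference, the pacing sketch I mean--this is not official hydrus code, just a hypothetical loop against the real /add_files/add_file Client API endpoint, with the access key and file list as placeholders:

import time
import requests

API = 'http://127.0.0.1:45869' # default Client API port
HEADERS = { 'Hydrus-Client-API-Access-Key' : 'YOUR_ACCESS_KEY_HERE' }

def paced_import( paths ):
    for ( i, path ) in enumerate( paths, 1 ):
        # the client reads the file from this path itself, so it must be a path the client can see
        requests.post( API + '/add_files/add_file', headers = HEADERS, json = { 'path' : path } ).raise_for_status()
        if i % 100 == 0:
            time.sleep( 10 ) # ten seconds of breather every hundred files
        elif i % 10 == 0:
            time.sleep( 1 ) # a second every ten files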
>>18045 It is complicated because there are many systems, but I think it increases in roughly (log n) time. So let's say it takes 2 time units to process something into a database with 10m items; it might take 3 with 100m and 4 with 1b. Some components are (n log n), which is obviously superlinear, but not the bulk of it. Just give it time to catch up--if you click 'process now' and watch every row, you'll drive yourself crazy. Once it is fully synced, it'll be a few minutes a day and you'll never normally notice it doing anything. If you are getting at least 1,000 rows/s on your processing time, you are great. >>18048 At the moment, the hardcoded shortcut for that is middle-clicking the tag. I don't have the shortcuts system plugged into the taglist yet, so you can't edit it, but I do want to do this so you can remap everything to whatever you like. I sneak in shortcuts updates when I have a bit of spare time, but I'm afraid I've been overwhelmed for the past six months or more, so those nice 'in the background' jobs are getting lost a bit. I'll get my pace back, hopefully in the new year. Custom menus would be great. Unfortunately they are in an even worse position than the shortcuts. They are all completely hardcoded. Maybe one day I'll completely rework them into a dynamic action system that I can generalise and template-ify and then let you edit it all. Not any time soon though, sadly. Although if you run from source and know python, you can edit my code to do whatever. I know a few users who have little hacks in place to do this sort of stuff. >>18049 >>18057 There are some other shortcuts in there too, including some neat stuff like: if you hit up/down arrow on manage tags while the tag input box is empty, it will change your current tag service tab. If you hit page up/down on the empty input on a manage tags that was launched from the media viewer, it will change the current media. >>18050 >>18051 Extremely based solution. This will 'work' for display purposes on existing tags, but any new tags you get will be assigned 'creator' again, since that's what the incoming text is, so depending on your downloader situation you may end up with doubled-up 'creator:blah' + 'artist:blah' tags and some other awkward stuff. There is a chance that you also completely fuck something important up, like some important search cache, or generally your understanding of the PTR's tags if you sync with the PTR. Namespace storage and searching isn't too complicated at the db level, so you are probably mostly fine. Main suggestion: MAKE A BACKUP BEFORE YOU DO ANYTHING LIKE THIS. Then, if it all breaks, you can just roll back with a clean conscience.
>>18052 Thank you, I did not know there was a render function in the mpv API! I use mpv windows in a very dangerous, unstable way, so if there are nicer solutions we can at least play with, and also fix your problem, that would be neat. I'll write all this down and hope I can find some time to get to it in the nearish future. It is really useful having these examples to check out. >>18055 Not yet. Now that I am finally getting note parsing working, we are thinking about hijacking the notes system to store translation boxes. I think they are stored in the boorus as JSON, so we can probably just yank that out of the page and save it to a special note name, and then maybe one week I jury-rig the boxes for hydrus too. I'm not sure, since it is pretty complicated metadata, but it is on our minds. For the time being, I need to get note parsing working nice in the first place. >>18061 I am afraid not. This is all pretty much way out of my league (it mostly works at the library+OS level), and any hack a dev tries to do usually backfires. I don't want to touch it either, since I've had rotten luck trying to make nice solutions here. The best solution is to get the hell out of the way of UI scaling and let Qt figure it out. HOWEVER: I know that Qt takes some environment variables. If you were to make a batch file or whatever to launch hydrus (there's an example .bat at the end of this post), I'm pretty sure you could try to set QT_SCALE_FACTOR, as here: https://doc.qt.io/qt-6/highdpi.html#environment-variable-reference Maybe you can set it as 1? Or maybe you'd have to set it as 0.8, since it looks multiplicative on the current scale factor? That's assuming it allows non-integer values. Anyway, I think those envs are what you want. Maybe you can just switch off high dpi support with QT_ENABLE_HIGHDPI_SCALING = 0. Let me know how you get on! >>18063 >>18064 Damn, your sessions not saving is no good. That definitely explains what is going on here. Is your default session set to 'last session', and the auto-save options set up ok under options->gui pages? Something like pic related? Now that I think of it, I should say that there is no auto-save tech for any sessions other than 'last session', if there is confusion about that. If you load a different session and make changes, you unfortunately have to save it manually to keep those changes. Sorry for the inconvenience. Downloaders and other pages that change regularly are best kept in 'last session'.
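For the envs, a hypothetical client.bat you could put next to the executable (assuming the Windows extract release, where the executable is client.exe)--try one variable at a time and see what happens:

@echo off
rem try forcing the scale factor first
set QT_SCALE_FACTOR=1
rem or, alternatively, switch off high dpi scaling entirely
rem set QT_ENABLE_HIGHDPI_SCALING=0
start "" client.exe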
>>18065 Thanks, yeah. First I need to get multiline text parsing working, and then I'll take a little breather, and then, when I have a free week, I want to attack the new import/export system I made for .txt tag import/export. Fingers crossed, I will be able to add a 'notes' object type to that and suddenly everything will click into place just like note parsing. Please keep reminding me if it looks like I forgot. >>18067 >>18068 >>18075 Damn, I knew there was something fucked going on with 4chan's hashes recently. I figured it was CloudFlare doing file optimisation again. Do you know if they are actually reporting this new hash in the place of the old file hash in their API? It would explain why hydrus is doing a bunch of 'already in db' file redownloading on 4chan watchers recently. Thank you very much for this info and the library reference and all that. If this is being used in the wild by a big site, it means it is a legit hash. I see the basic ffmpeg routine, and I sort of like it. We might even be able to use it ourselves for some things in future. I can write an extension to the database that stores these hashes and allows them for file lookups, and then add a file maintenance job that generates them for all appropriate files in the background. We've done this before several times on things like perceptual hash and pixel hash generation. This will be a little bit of work, but may be useful. I will write all this down and think about it a bit and then act on it. >>18075 I would call these a special version of sha256 and give them a different column or something in the database, so no big worries about the confusion part. 'media hash' or 'content hash' or something. I can use them in the same logic as sha256 checks though, since the chance of a collision is effectively zero. This will also help some file dupe detection, just like how image pixel dupes do (basically the same concept). >>18069 Not yet, I'm afraid. The last big push on duplicates focused on getting duplicates working logically consistently at the database level. When I do another push, hopefully start of next year, I will add alternates as a properly realised system. I'd love to have some nicer UI and general display tools at the same time. For now they are mostly a black hole.
>>18070 I think I agree, this one is too thin. When I turned on anti-aliasing for the ratings controls (which I also only feel 75% about tbh), it revealed my old star as bullshit play-dough that I eyeballed years ago. I'll see if I can do something about the coordinates to fatten it up and still have it look balanced. >>18071 Pretty much. I've nailed down the biggest bugs that people got, not that there were many, and have the UI scaling work ok now. I will recommend it for everyone, and will update the Win installer to Qt6 next week. If the winds continue to blow in our favour, I'll go Qt6-exclusive in two or three weeks. >>18072 Great, thanks for letting me know! >>18073 It isn't a big deal. Hash generation is probably only 5% of total import time (stuff like just copying the file to a temp dir and generating the thumbnail takes far longer), and then it never needs to happen again. I'd guess a couple percent boost in speed because of accelerated hash generation, maybe a few dozen milliseconds per import? I expect imports will generally work a lot faster just because of the new chip and other hardware, though! >>18074 Hmm, I am not sure if that is well logged. The main log for hydrus is under your db dir (file->open->database directory), called 'client - date.log'. Scroll to the bottom and then search up for 'traceback' or 'connection'; maybe you'll get some nice spam about connection errors or similar.
Is there an estimate of just how many people are using hydrus now, or at least the PTR? If you have these numbers, how have they changed over the years? Is adoption of Hydrus accelerating? How many decades until it overtakes wangblows explorer as the primary way to look through large collections of image files? 2? 3 tops? >>18076 >If your SSD is tight on space >SSD I wish. But even without an SSD, it's looking like it'll take about 20+ days to finish processing updates. Is it normal for progress displays to stick a lot, and then jump a dozen percent or two when I'm not looking? Sometimes they don't budge all night.
Consider the situation: >fast moving tag (let's say 3 new booru pages of images a day) >someone has uploaded an image without the tag (t=0) >the next day (t=1), my Hydrus tag search subscription updates for that tag. of course, the image doesn't get found >a few days later (t=5), someone else adds the missing tag to the uploaded image. It enters the tag search at page 15 because it's already 5 days old, despite just now showing up in the search Will Hydrus tag search subscriptions be able to find this 'new' image? Or does it just look until it sees an already-known image and stop?
>>18082 Hey, I am sorry to sound like a bore, but if you are on a mechanical HDD, please pause the PTR now. You will not be able to process it to full and keep up to date. I try to decorate most of the 'set up the PTR' help and UI text with this information, so I am sorry if you missed it. You have to be on an SSD to sync with the PTR, since once you get to 1.6 billion mappings, you need the stable low latency of an SSD to process that much information. Please pause the PTR now and come back when you next update your computer. For estimates of how many people use it, I don't know for certain. I am no longer involved in the administration, so I can't look up the total number of anon access keys to estimate either. Hydrus gets about 800-1,200 downloads a week on github, and not everyone updates every week, and there are other ways of getting it, so my estimate for total hydrus userbase is something like 3,000-5,000. I'd guess PTR users are maybe 50% of that? Not sure. Maybe 1,500-2,500, something like that. I've never calculated proper metrics, but I'd guess hydrus grows 10-50% in userbase a year. It recently had a big bump, in the past four months, from 800 to 1,200 downloads a week. The growth is wonderful, but I'm always overwhelmed with work, so it is not my top priority. >>18083 No, it will typically not find it. It keeps searching pages until the last 5-10 files of the page are all things it has seen before, and then it stops (there's a rough sketch of this at the end of this post). Most of the time, since most subs try to time their hits so they'll see 1-3 new files each time, that means only the first page is ever checked. Subscriptions don't work super great on very fast image streams like 'blue_eyes'. In general, those streams are not good for humans either, since you'll never be able to keep up with them in your archive/delete filtering. Creator searches, like 'incase', work great. Even the fastest will rarely go above 1 file a day, so you'll capture everything with a normally timed subscription (the 5-day-old tag will still be on page 1), and the quality compared to your own tastes is reliable. If you do have a subscription stream that is like this, with late entries, you can find them by: A) subbing to the same stream on different sites, where the late entries probably won't line up as much, and B) every three(?) months, doing a manual gallery download page query for the sub in question to fill in any gaps.
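In rough python terms, the subscription brake works something like this--a simplification for illustration, not the actual client code:

def should_stop_searching( page_file_urls, urls_seen_before ):
    # if the tail of the gallery page is all stuff we have seen before, assume we have caught up
    tail = page_file_urls[ -5 : ]
    return len( tail ) > 0 and all( url in urls_seen_before for url in tail )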
>>18084 >I try to decorate most of the 'set up the PTR' help and UI text with this information I could have sworn I only saw a warning about necessary bandwidth and storage space. Will it kill my HDD, take ages to run, or just flat out crash?
>>18084 >and keep up to date. Also, I don't plan on keeping up to date. Just doing a one-time sync, to get a good chunk of my existing files tagged; afterwards I was planning to delete as much of the PTR as possible to save on space.
>>18061 Yeah it looks pretty huge to me too
any way to download twitter videos?
>>18080 >Do you know if they are actually reporting this new hash in the place of the old file hash in their API? Yeah, they're "new" files, this sucks. https://desuarchive.org/g/thread/88040845/ is an archived thread full of anons complaining about this, for good reason. Another recent change is that 4chan started allowing VP9 webms to be uploaded; I don't know enough to say whether having a media hash would solve our duplicates issue. A note on SHA256 vs murmur3: as calculating the media hash is a pretty intensive operation, I actually ran a few tests with hyperfine (default 10 runs, shown below). On webms, I've been getting about 7~9x faster times with murmur3. I know you're using SHA256 everywhere though, so you may want to keep it that way for consistency's sake? Just something I thought you might still want to take into consideration; it's pretty significant.
First test is a random 3.9M webm:
Benchmark #1: ffmpeg -i 3.9M.webm -f hash -hash sha256 - 2>/dev/null
  Time (mean ± σ): 2.510 s ± 0.029 s [User: 3.774 s, System: 0.052 s]
  Range (min … max): 2.471 s … 2.567 s
  10 runs
Benchmark #2: ffmpeg -i 3.9M.webm -f hash -hash murmur3 - 2>/dev/null
  Time (mean ± σ): 324.1 ms ± 4.0 ms [User: 1.417 s, System: 0.089 s]
  Range (min … max): 319.5 ms … 331.6 ms
  10 runs
Summary
  'ffmpeg -i 3.9M.webm -f hash -hash murmur3 - 2>/dev/null' ran 7.74 ± 0.13 times faster than 'ffmpeg -i 3.9M.webm -f hash -hash sha256 - 2>/dev/null'
Second file is a pretty big 108M webm:
Benchmark #1: ffmpeg -i 108M.webm -f hash -hash sha256 - 2>/dev/null
  Time (mean ± σ): 16.456 s ± 0.187 s [User: 31.695 s, System: 0.175 s]
  Range (min … max): 16.189 s … 16.801 s
  10 runs
Benchmark #2: ffmpeg -i 108M.webm -f hash -hash murmur3 - 2>/dev/null
  Time (mean ± σ): 1.838 s ± 0.022 s [User: 12.467 s, System: 0.396 s]
  Range (min … max): 1.814 s … 1.891 s
  10 runs
Summary
  'ffmpeg -i 108M.webm -f hash -hash murmur3 - 2>/dev/null' ran 8.95 ± 0.15 times faster than 'ffmpeg -i 108M.webm -f hash -hash sha256 - 2>/dev/null'
Have you considered vertical tabs instead of the horizontal style that hydrus has now? It would make nesting a lot nicer by allowing you to see all branches of the tree instead of just the one you're on, which would help a lot with navigation, and it would mean you can always read the titles of tabs regardless of how many tabs are on a certain row. A tab style similar to the way Edge does it would be wonderful, with the tabs in a frame on the left side that can be made wider or thinner, and with a search bar on top and a pin toggle.
>>18089 Pictures and gifs work, but I just get a "Could not find a file or post URL to download!" when trying to download videos, yours included.
(141.18 KB 996x663 works.png)

>>18092 The link I posted works great for me: https://twitter.com/backtolife_2023/status/1559255030791798784 There has to be something wrong with the address you are posting.
(22.89 KB 1124x599 log.png)

>>18093 Moar. Take a look at the log.
>>18093 I'm trying that exact link, still getting the same error on v495
>>18095 Hard to say what's going on. Post screenshots of the error plus the log.
I've switched from the realtek NIC on my nas's average consumer mobo to a cheap intel x540-t2 network card, and it stopped my nas's network from hanging every time I have too many operations going on. Apparently the telltale sign you could be affected is having a network device called re0 on linux/bsd. This isn't a bug I reported here, but maybe other people can learn from it. Hydrus on its own never actually triggered it, likely because of the spaced downloads, so cheers to that.
Is the gallery downloader supposed to stop if it finds 0 new URLs on a page, or is there some setting we can change? My query is stopping way before it should, since there are tons of subsequent pages with new images to grab. Under the search log, I right-click the most recent page that was searched and tell it to try again and allow the search to continue, but it just stops again on the same exact page.
>>18078 > the hardcoded shortcut for that is middle-clicking the tag Thanks! I didn't know it existed! Is there a list of these shortcuts somewhere?
This pair showed up on a "must be pixel dupes" search.
>>18096 Literally all I'm getting. Log file is completely empty besides the usual startup stuff.
>>18088 use nitter instead
Since I'm a long ways off from being able to replace my craptop with an actual gayming PC that could handle the PTR, what's the best solution for fast tagging in the meantime, from other anons with toasters? I have about 30,000 files. At first I was tagging everything meticulously, but after spending two weeks only getting about 1% of my files tagged, I realized I didn't want to spend half my free time for the next 4 years trying to catch up with the last 8 years' worth of files on this computer. And that's with tagging going relatively fast at the beginning. I continuously slow down as I realize new tags I ought to have placed on already-tagged images, and I have to double back. Here's my more efficient strategy. >Mass tag everything by folders and parent folders to immediately duplicate the level of search functionality I previously had >Go over every image with saucenao to find the ones that exist on boorus and use the url downloader to auto-tag these images Is this my most efficient option right now? Or can I do something better? Forgive my retardation, but I did start out trying to brute force tag all my files. >>18084 >please pause the PTR now. Processing has ground to a halt at a little over 221M mappings, 2.9M tags, & 11.8M files, so I suppose I have no choice. Is there a way to check my images against this fraction of the PTR I already have? Or am I better off just deleting the PTR until I can get an SSD that will process everything immensely faster from scratch? Or was that happening by default as I processed PTR updates?
>>18103 >Is there a way to check my images against this fraction of the PTR I already have? Or am I better off just deleting the PTR until I can get an SSD that will process everything immensely faster from scratch? Or was that happening by default as I processed PTR updates? Never mind this. Upon selecting a large chunk of files at once, I see several dozen have PTR tags already. Following the PTR's tagging patterns ought to help me make my own tags much less haphazard, given their curation. I see it auto-checks any new files against the fraction of the PTR I have too. It only took about 2-3 minutes to run through 65 images. At this pace for newly imported images, even though expanding the PTR has become incredibly slow, I think it might be worth it to keep going until I have maybe another one or two hundred million mappings. I'm going to let the PTR keep syncing and check the processing speed of new images at milestones of about every 50M mappings until I feel it gets too slow.
>>18104 Actually, looking at the PTR tags I've gotten, they're not all that curated. For something with only a few thousand users, I would have thought things wouldn't be this messy already. There's lots of redundancy, like almost all the medium: namespace tags being pointless since they already exist as system: tags. Character names are often unnamespaced, and even when they are properly namespaced under character: there's redundancy where half of them just use the name, and the other half add the IP name in parentheses for disambiguation purposes. There are both "1 male", "1 female" and "1boy", "1girl" tags. And there are complete junk tags like "///", "/VV\", which don't return the files tagged with them, probably because of the slashes they use. I kind of figured this would be run more like a moderated booru. Does anybody curate the PTR tags? Or is it entirely based on users voting down bad tags, and everything not voted on gets a pass? It seems like a lot of junk gets through, considering I found a chunk of objectively bad and inconsistent tags when I only have a fraction of the PTR and I've only applied that to about 500 files. I think I'm going to go ahead and just delete the chunk of the PTR I have since things are this messy.
Since deleting the PTR takes so long, I'd suggest adding a progress bar for that.
Say I have a file on a booru with an account lock on it due to a blacklisted tag. I can see all the tags, but not the image. Same goes for Hydrus, so it can't get a hash to compare and then add the tags. However, one of the source links on the e621 page is a url Hydrus can parse, like Baraag. Is there any possible way, when running into a blacklisted image with supported source links like this, to make Hydrus grab the image from the source and the tags from the booru page? If not, is there some way I can force Hydrus to attach all tags from the url to a file of my choosing?
In the duplicate filter setup page, could you add a way to make certain parts of a search apply to both files, while others only need to match one of them? Maybe with 2 search boxes? I want to only sort through archived files, but I want at least one of the files in the pair to have been imported within the last week. There's no way to specify this that I know of.
What are my options for speeding up the loading of large files in the media viewer? I'm not quite sure what is making certain files take multiple seconds to load, as even big files in the 50+ MB range can seemingly arbitrarily take anywhere from half a second to more than 5 seconds, but in my experience files less than a MB in size never have a noticeable lag before being displayed, so it's highly correlated with file size. For reference, I already tried migrating my full database to an SSD and closing everything else in the session to minimize session weight, just in case either of those would speed up the process, but neither resulted in a perceptible difference in loading times.
Default tag searching of course treats all entered tags as one big AND statement, i.e. X AND Y AND Z tags must be mapped to a file. How do I enter OR statements to return results for X OR Y OR Z tags?
I had two similar files. I deleted one. Now I'm going through every file with saucenao to rip tags/superior versions. I found the file/hash I deleted on a booru instead of the similar one I kept. How do I undelete it so that I can get the booru tags?
I had a good week doing a variety of different work. I cleared out a bunch of little improvements and bug fixes, moved the Windows installer to Qt6, and rounded out the new note parsing system with multi-line parsing support. The release should be as normal tomorrow.
>>18111 Nevermind. I found the undelete function in the shortcuts.
https://www.youtube.com/watch?v=0KrpZMNEDOY windows Qt5 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v497/Hydrus.Network.497.-.Windows.Qt5.-.Extract.only.zip Qt6 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v497/Hydrus.Network.497.-.Windows.Qt6.-.Extract.only.zip Qt6 exe: https://github.com/hydrusnetwork/hydrus/releases/download/v497/Hydrus.Network.497.-.Windows.Qt6.-.Installer.exe macOS Qt5 app: https://github.com/hydrusnetwork/hydrus/releases/download/v497/Hydrus.Network.497.-.macOS.Qt5.-.App.dmg Qt6 app: https://github.com/hydrusnetwork/hydrus/releases/download/v497/Hydrus.Network.497.-.macOS.Qt6.-.App.dmg linux Qt5 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v497/Hydrus.Network.497.-.Linux.Qt5.-.Executable.tar.gz Qt6 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v497/Hydrus.Network.497.-.Linux.Qt6.-.Executable.tar.gz I had a good week working on a variety of different things. The Windows installer is now Qt6. You do not have to do anything special to update, just do your normal routine. If you are a Windows 7 user, it is very likely you will not be able to run it. You can use the Qt5 extract release for another week or two, but please plan to either stop updating hydrus, move to a newer OS, or consider running hydrus from source. highlights I bulked out the 'star' rating shape. The pentagram was a little thin, so I've fattened it back up while still keeping the coordinates good. If you liked the thinner star, you can now set it explicitly as a new shape choice under services->manage services. Manage tag siblings/parents now has a proper delete button, which should make removing many rows at once easy. I did some more note parsing work this week, and I updated the Hentai Foundry downloader to grab artist description notes. If you download from HF, you should see new files get notes. I would like to slowly update most of the client default downloaders with note parsing support, so let me know where all this succeeds and fails, and I'll adapt things as we go. Since we are doing more note work, I also improved the size calculations for the media viewer's note hover window. It still isn't perfect in all cases, but it'll clip the last line of text less often. New clients now start with ctrl+page up/down as 'move page selection left/right'. multi-line parsing This is only for advanced users who make downloaders. The parsing system now has basic multi-line support. Any formula on a 'notes' content parser or the formulae that do subsidiary page parser 'splitting' will now no longer remove newlines when they grab text. This makes it possible to parse a multi-paragraph artist comment and have it end up nicely formatted in the resulting file note. I have hardcoded in some additional formatting rules, too. When you select 'string' as the parsing result from a nested tree of html tags, I now insert newlines on 'p' or 'br' tags. I also strip leading and trailing whitespace from every line of a note, and I only allow two consecutive newlines to clip very large paragraph breaks (these washing rules are sketched in python below). The main remaining frustration is that the string processing system has mixed multi-line support in its testing UI. Some of it shows and converts well, but most of it collapses multi-line test content down to a single line. I have updated the Hentai Foundry parser this week to grab notes. It is ultimately pretty simple, if you want to check it out as an example. I'd like to know what is most frustrating about this.
I would like to chip away at the string processing test UI (and any rules that simply do not work well on multi-line content) in future.
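If it helps to see them concretely, the note-washing rules boil down to something like this--a sketch of the idea, not the exact client code:

import re

def wash_note( text ):
    text = text.replace( '\r\n', '\n' ).replace( '\r', '\n' ) # coerce all newlines to \n, regardless of platform
    text = '\n'.join( line.strip() for line in text.split( '\n' ) ) # trim leading/trailing whitespace from every line
    text = re.sub( r'\n{3,}', '\n\n', text ) # clip newline gaps to at most one empty line
    return text.strip()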
full list - misc: - I bulked out the 'star' rating shape a bit more, since the new pentagram, while it looked better than my old 'by-eye' star, was a bit thin. if you prefer the thinner pentagram, this is now selectable as a new shape type under manage services - the Windows installer is now Qt6 exclusively. there are no special update instructions, it should all just work™ - the 'manage tag siblings/parents' dialogs now have explicit delete buttons, which should make mass-deletes a little easier to do. some of the background code is cleaned up too, and the 'add' button is moved up to the main button row - you can now hide all sibling and/or parent text-suffix 'decorators' in the manage tags and autocomplete dropdown taglists, with four new checkboxes under _options->tags_. the right-click menus of these lists let you temporarily show/hide too, just like 'hide/show parent rows' - when you change the namespace sort in the options, the existing collect-by dropdowns now update instantly (previously, existing pages needed a client restart to see any changes) - I updated how the media viewer 'note' hover window lays out and does its 'how tall should I be?' estimate. it fits better, being exactly just tall enough in more cases, but it still seems to have trouble with multiple notes that include wrapping text - added a link to the new flatpak release (easy Linux running-from-source setup) that a user made to the install help - fixed an issue with the new 'default' file import options when you right-click a watcher/gallery download--the 'show files' menu now correctly adapts to you having a default file import options - if you are set to elide page tab names, then all pages will tooltip their names on mouseover - new clients now start with (ctrl+page up/down) as 'move page selection left/right' - . - client api: - the Client API routine that fetches file statuses for a given URL no longer double-checks 'already in db' results against your actual file system. this check is more appropriate to an actual working import process, so it now defaults off in the Client API - if you want to do this check because you are searching for missing files, you can turn it back on with the new 'doublecheck_file_system' parameter. - the client api help has been updated to reference this - the client api's Server header is now "client api/32 (497)". NOT "client api/17". it was stating the hydrus network version erroneously. it now states client api version and software version. if you are able to parse this header, it makes the '/api_version' request superfluous - the client api version is now 32 - . - multiline parsing: - the parser now supports limited multiline parsing. the main changes are hardcoded: the formulae beneath note content parsers and those that do subsidiary page parser splitting no longer remove newlines when they parse. all the parsing UI and the test panels and so on are now aware of this and set flags in all the right places, and parsed notes are now washed through the new trimming/cleaning method, and everything _seems_ to basically work. the main remaining problem is that the complicated string processing UI has mixed single/multi-line testing support.
some looks great, most gets coerced to single-line just for the previewed test results - as an example, the default hentai foundry downloader now grabs the artist description as a multi-line note - the parsing sub-system that extracts cohesive strings from complex html blocks now inserts newlines at 'p' and 'br' tags - trying to parse clean multiline notes still caused several formatting issues this week, so I have updated the automatic note-washing routine to standardise hydrus notes in several new ways that I hope will not be too disruptive to manually written notes: - the note washing routine now coerces all newline characters to 'backslash-n', regardless of platform - the note washing routine now trims each line, so no leading or trailing whitespace anywhere. I am open to changing this in future, maybe for handwritten notes where you really want an indent somewhere, but parsing from complex nested html tags is making a heap of weird extra whitespace, for which this is a clean solution - the note washing routine now trims newline gaps that are greater than two-newlines. you can split paragraphs by one empty line, but no more - there may be other issues figuring out cleanly formatted strings from nested html tags--so give it a go and let me know what you think. maybe p and br blocks should always make two newlines, so we have separated paragraphs, maybe I need to parse more blocks, like h1 and friends. any specific example html blocks would also be helpful - . - cleanup: - refactored ClientGUIParsing to its own 'parsing' module and split everything into four less tangled files - cleaned up a bunch of taglist text presentation code, mostly simplicity and clarity in prep for future updates - updated the checker options button to use a Qt signal instead of a callable next week I have more small work like this to catch up on, including github issues.
I sometimes get this error for certain gelbooru urls:
522: The server's error text was too long to display. The first part follows, while a larger chunk has been written to the log.… (Copy note to see full error)
Traceback (most recent call last):
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 1329, in WorkOnURL
    self.DownloadAndImportRawFile( file_url, file_import_options, network_job_factory, network_job_presentation_context_factory, status_hook, override_bandwidth = True, forced_referral_url = url_for_child_referral, file_seed_cache = file_seed_cache )
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 433, in DownloadAndImportRawFile
    network_job.WaitUntilDone()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1914, in WaitUntilDone
    raise self._error_exception
hydrus.core.HydrusExceptions.ServerException: 522: The server's error text was too long to display. The first part follows, while a larger chunk has been written to the log.
<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js ie6 oldie" lang="en-US"> <![endif]-->
<!--[if IE 7]> <html class="no-js ie7 oldie" lang="en-US"> <![endif]-->
<!--[if IE 8]> <html class="no-js ie8 oldie" lang="en-US"> <![endif]-->
<!--[if gt IE 8
It went away when I tried again some hours later. While I was continuously getting this error for a few urls on gelbooru, others would work just fine.
I've noticed you can't open the similar files search for videos and gifs. However, similar videos and gifs usually have the same thumbnails. Could you add a feature to compare thumbnails for gifs and videos, so you don't have to rely on a pair of animated files already being properly tagged in order to find and compare them? Saucenao can find similar gifs just fine.
Is there a quick way to permanently delete everything in trash without confirmation? Like an 'empty trash' button? Right now I have to:
1. go to the trash
2. search 'system:everything'
2.5. wait for everything to load
3. select everything
4. hit del
5. click on apply
I would like a shortcut or a menu item that simply empties the trash without having to go through all of these steps please. Thank you dev god.
>>18118 Dunno if you can, but have you tried making a shortcut for it to cut out some steps? Custom shortcuts are saving me a good bit of time and I've only added one >Ctrl+s: Show similar files with hamming 8 (speculative)
>>18119 Yeah I've looked into it, and the closest I can find is the 'delete' shortcut, which simply sends a file to trash. It's step 4 in >>18118
I'm probably just being an idiot, but trying to use the furry.booru.org GUG and booru.org downloader for a tag search downloader, I'm getting a 403 error. Any ideas?
I have a fancy mouse with extra buttons. I can't seem to bind any mouse buttons besides the following: >left click >right click >middle click >scroll up >scroll down >forward >backward How hard would it be to add more? Clicking with buttons not listed is registered by the dialog box as a click but Hydrus does not change from the last valid button. Scroll left/right is also not recognized.
I've got a significant number of same quality/PFP duplicates. Is it a fair strategy to keep the "best" files in my main file service and move the other files to a separate file service for duplicates only, to keep the latter from showing up in searches on the main service? Another thing about duplicates and metadata merging with default merge options: if tags are updated for, say, a worse duplicate, are the changes copied to the best one, even way after the file relationships were first set?
>>18123 Why would you keep a bunch of same quality duplicates?
the archive.moe downloader is broken. All file urls return a 403 error. I checked the source urls and it says something like "The owner of this website does not allow hotlinking to that resource"
>>18124 Tags and other metadata
>>18126 Not sure what metadata is so important, but why not just copy over all tags to your main image? That's what I'm doing.
>>18110 Nevermind, I found it. The getting started guide mentions that you can only see it in advanced mode. The getting started guide also has a hyperlink for it, but it leads to a 404. I feel like there should be mouseover text or something saying that advanced mode may need you to restart Hydrus. Options for advanced mode won't appear on already-open pages unless you restart the program. Does the OR function really need to be limited to advanced mode when it's already separated into regular OR and advanced OR? It seems like a pretty basic function that's practically necessary for competent searching if you have more than one tag pool, i.e. downloader tags, my tags, and PTR tags.
>>18085 Sorry for the confusion. 'take ages to run' is the problem, and 'kill your HDD' as a soft secondary, simply because it'll eventually be using it 100% for that time, which isn't healthy. Basically, once the size of the database exceeds your ram size, hard drive latency suddenly becomes very important. SSDs are about 0.1ms, HDDs about 8ms. Eighty times slower to fetch a random thing, multiplied by tens of billions of accesses, just makes it not feasible. You'd be spending an hour of heavy processing every day just to stay up to date. Since you >>18086 want to just do a one-time sync, I'd say let it rip for a couple days, probably up to 60% synced or something. Once the processing speed dips below 500 rows/s, that's when things will really be crawling. Pause it there, exit the client and make a backup, make a new local tag service (called something like 'PTR snapshot') under services->manage services, and then play around with tags->migrate tags to save tags for your local files to that new separated local tag service. >>18090 Jesus. Thank you for this info. In terms of which hash to use, since 4chan still seems to talk in MD5, I guess we are generating it for that anyway. I am partial to SHA256, and as you say that's what our infrastructure already is and it comes with stock python, so I'd probably stick to it for any hydrus-specific stuff. I'm not too worried about calculation time, since it is usually massively dwarfed by I/O loading time anyway. It is neat that murmur3 is so fast though. I guess this hashing is taking so long since it is hashing rendered pixels, which must be a huge amount of data. I did a couple of timeit tests, and I got about 56ms to SHA256 4MB of random data in python on my weak office dev machine (roughly the sketch at the end of this post). If that 4MB webm is rendering into 13 seconds of 30fps 720p or something, that really could be an absolute gonk amount of information to hash, like a gigabyte or something, and hence your 2.5s number. This might be technically unfeasible for a 3GB 4k@60Hz video. I have this all written down and will think about it. Maybe we suck it up and see how many other sites start using this format. >>18091 Try the 'notebook tab alignment' option under options->gui pages. You can put it on any side you like. I can't promise it will look nice, since I think Qt renders it in a funny way, but let me know if anything doesn't work!
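That timeit test was roughly this, if anyone wants to compare numbers on their own machine (they'll obviously vary):

import hashlib
import os
import timeit

data = os.urandom( 4 * 1024 * 1024 ) # 4MB of random data
seconds_per_hash = timeit.timeit( lambda: hashlib.sha256( data ).digest(), number = 10 ) / 10
print( '{:.1f}ms per hash'.format( seconds_per_hash * 1000 ) )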
>>18129 >Sorry for the confusion. Not your fault. I looked at it again. It's clearly stated I need an SSD, and I just missed it. I am entirely at fault and incompetent. >Once the processing speed dips below 500 rows/s, that's when things will really be crawling. It had already dipped for days. Usually it was anywhere from 10-400. I've found the quality of PTR tags not really worth the effort though, and have removed the PTR for now until that distant point in the future when I can afford a real computer.
>>18098 Yeah, a pretty general rule for hydrus downloaders is they have automatic brakes, and the main trigger for that is if they run into a whole page that has nothing in it (and in the case of subscriptions, nothing new in it). In almost all cases, a gallery page with no files signals the end of the search. Sometimes it is an empty page, sometimes it is a duplicate of the previous one, sometimes it is a pseudo-404 page. If you think that search page should have files in it, try right-clicking on the entry in the search log and hit 'open urls' to see it in your browser. Might be you have hit a limit like the sankaku thing, where they won't give you more than 500 files or something (25 pages?) unless you are logged in. >>18099 Unfortunately not. I have a multi-year project to slowly convert all the hardcoded shit to file->shortcuts, but all these hardcoded things are obviously not on there yet. You usually can't break anything with a single click, so don't be afraid to double- or middle-click anything that looks like a custom widget. The multi-column lists in the program will usually do an edit or a delete on double-click. I've usually tacked that stuff on when advanced users wanted a fast way to do something clever. >>18100 Thank you for this report. This is obviously not correct, and when I import them they do not show up as pixel dupes, so this is a more complicated problem specific to your client. Since they are 'exact' distance dupes, it sounds like either some pixel hashes got mixed up, or the search logic failed. Can you say anything more about the search you did that produced these two? Was it something simple, like 'system:everything + must be pixel dupes', or was it significantly more complicated? Now I think of it, there's a queer problem here if one of the files has duplicates of its own. If one of those duplicates is a pixel dupe, I think that can cause the match, but the 'best file' of the duplicate group is used in the filter to match files together, so you actually see a file that doesn't match the search conditions. I forget the state of this problem, but I remember it was complicated. Could you check for me if the larger, higher quality file has any other duplicates? Presumably one with 1,500x1,231 size. You should be able to see the current duplicates under thumbnail right-click->manage->file relationships->view->duplicates.
>>18101 Can you please check network->downloader components->manage url class links? Is 'twitter syndication api tweet' set to 'twitter syndication api tweet parser'? Under the 'api link review' tab, is 'twitter tweet' linking to 'twitter syndication api tweet'? Have you done any downloader importing, i.e. any new twitter downloaders? I'm wondering if one of the objects got confused here somehow, so you've got a new/old twitter parser trying to pull video. >>18103 >>18104 >>18105 I also find manual tagging really difficult and slow to do. I tag some memes to the PTR, and series/creator tags of stuff I care about that isn't tagged, but mostly I just set up some 'this is really good' and 'this is good to post in a thread as a reaction' and 'read this later' and 'tag this later' style binary 'like/dislike rating' services under services->manage services. These you can flip on/off with one click in the media viewer's top-right hover window. I set them as I do archive/delete filtering. Note you can do 'system:number of tags' and then 'system:has tags' to quickly search for files that have tags on the PTR. Set the tag domain from 'all known tags' to 'public tag repo' to focus the search down to just that tag service. 'system:has tags' should run pretty quick, too. The PTR processing works in chronological order, and it has had exponential growth, so I would guess that 221M mappings is probably showing you a snapshot of its state as of four or five years ago. We are now at 1.6 billion mappings. So, many older booru files will have tags, but anything since 2017 will probably have nothing at all. There is a curation team, and they've done great work since they took over from me, and you'll see more of that work as you process more and 'catch up' with the modern state of the PTR, but hydrus has always had messy tags simply because we rely on automatically parsed tags so much. A small problem in a parsing firehose can make a mess in a lot of different places. My 'big job' for the rest of this year is to enact some very long awaited workflow improvements for the janny team. I like to think of it as playing around with magic wands--we can grab a hundred million tags without too much trouble in hydrus-land, but wave the wand slightly imperfectly, or have someone ESL misunderstand some guidance, and suddenly you have 'source:' or 'clothing:' or shadow->shadow_the_hedgehog coming out your ass. It is a constant fight. Now I think of it, if you know what 'siblings' are, you might like to pause mappings processing under services->review services but allow parents and siblings. That'll catch you up (on a delayed basis--siblings are complicated) with a couple hundred thousand tag replacement rules that'll fix a lot of that '1boy' garbage. Let me know how you continue to get on! >>18106 Thanks. It is an unfortunately difficult problem, since 99.7% of the delay here is in one database operation that has no feedback itself. I just surrender a thread for 15 seconds to 120+ minutes, and then I get it back when it is done. I have looked into clever alternatives, but most would take me too long to develop. It just sucks to delete a big service right now, I'm afraid.
>>18132 >someone ESL misunderstands some guidance, and suddenly you have 'source:' or 'clothing:' or shadow->shadow_the_hedgehog coming out your ass. It is a constant fight. Quite understandable.
>>18107 Yeah, it should do, but only if you have seen the file before. If you did it in this order--get the file from e621, then get the file from the other booru--it'll do this on the second step:
- Grab the html page
- Check for sources; found one on e621
- Ok, I have seen this file before, it has this hash. I have it!
- Do not need to download the file, just need to apply outstanding content updates.
- Apply the tags I parsed to the file hash I already figured out.
- Mark myself 'already in db', and I'm done.
HOWEVER, having said that, I am not sure it will be happy if there is no download link at all. If the download link went to a 403 or something, that'd be fine since it would never pursue it, but if it couldn't parse anything, it would probably dump out after the 'grab html page' step. If you give this a go, let me know how it works for you. I may be able to tweak the logic to help out. I probably can't do 'pursue to the source URL at e621' just yet, since the current downloader engine has strict rules about not pursuing source URLs (it can lead to dangerous loops that it isn't ready to handle yet). >>18108 Too complicated to do right now, I'm afraid, but I know what you mean, and it would be nice to have. The master database query for similar files search is a nightmare--it has something like 7-12 table joins based on the complexity of what you ask for--so I don't want to dip into it for a little while. The bigger priority for duplicates is to let you fetch files for the filter in different orders and begin the early optional stages of automatic duplicate resolution using pixel duplicate data (basically, if you have a jpeg and a png that are pixel dupes, you'll be able to auto-choose the jpeg as better).
>>18134 >Get the file from e621. No. e621 has the account block for fringe content. I do not get the file from it, and neither can hydrus. I get it from the source linked on the e621 page with the tags, the source being Baraag. But I can't get the tags, since hydrus can't verify the e621 page as having the file. It gives "Could not find a file or post URL to download!" and ignores the url. e621 is the only booru with tags for this image though. The best I can do right now is manually copy all the tags off the page and then click the "paste newline separated tags from the clipboard" button. But those are all unnamespaced, and it grabs all the "total number of this tag on the booru" numbers from beside each tag. >it can lead to dangerous loops Could you make a hard stop at only pursuing once per url entered? Or am I misunderstanding the issue here?
>>18116 522 seems to be a Cloudflare-specific 'connection timed out' error. https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#Cloudflare I guess gelbooru, or at least the URLs on gelbooru served by a particular server, or something that Cloudflare hadn't cached and needed a live hit on, was temporarily offline. Anything 5xx tends to be 'the server had a problem'. Hydrus tries to retry problems like this, with a slow-and-slower retry timeout, usually up to five times, but it can't handle longer outages yet. I'd like to have some nicer 'try again in four hours' tech in future. >>18117 The very first version of the duplicate system worked with video thumbnails. It worked ok! Unfortunately, it was swamped by false positives, 'black frame' matches in particular, so this needs to be slightly more complicated for our purposes. The good news is I have a plan to do comprehensive video duplicate discovery. We did a limited test with the tech (basically it'll be taking frame samples throughout the video and comparing those across files), and it worked, so I feel great about the idea. It will take a lot of time to do, so it will be one of my big jobs, currently #5 in popularity here: https://github.com/hydrusnetwork/hydrus/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc It should be able to match short webms with bloated gif re-encodes. >>18118 >>18119 >>18120 Hit up services->review services, go to the 'trash' page, and click 'clear trash'. It'll happen in the background over the next few seconds. >>18122 This stuff can be tricky, since I am at the whim of what Qt will pick up. Support for more complicated commands often happens in a different way to raw key press events, but since we are moving to Qt6 right now, I will have a fresh look at the different ids I can check. Please watch the changelog for experimental support and give it a go--if it works, or doesn't, please let me know. >>18123 I'm not the biggest duplicate clearer personally (I'm still chipping away at my archive/delete queue, and expect to keep at it for years until I can train an ML with my taste), but when I do process dupes, I always delete the worse quality one. My problem is always having too much content, so the cleaner and easier I can dispose of old/bad things, the better. I want to look at a nice selection of pictures of cute elves, and when I ponder their ideal taxonomy, I tend to end up in rabbit holes that waste time and involve a lot of trudge work that doesn't involve looking at cute elf pictures. Content merge is not retroactive right now. If you add a tag to a 'bad' file, it won't get merged into the 'good' file, even if you previously set them as dupes with a content merge. The merge is one-time, at the time of the merge. HOWEVER, hydrus remembers the duplicate relationship, and it remembers all the tags, even for files you deleted, so I will be able to write retroactive merge tech in future, and we'll one day, with luck and a lot of work, have 'synchronised' merge. No need to keep the file around, if you would like to do this--hydrus remembers all the metadata and hashes, it doesn't need the actual file.
>>18135 Damn, yeah, I guess hydrus is going to throw a fit early because there's no file URL. I will examine the logic here. I feel like I may be able to rejigger the order in which it checks things and only protest about a missing file URL if it hasn't already got an 'already in db' result. >Could you make hard stop at only pursuing once per url entered? Or am I misunderstanding the issue here? Not easily. The downloader is pretty stateless for each individual download item. I'm not ready to go in that direction yet, since it only adds complexity and the underlying systems are already strained. In future I want the downloader to make a lot more clever decisions and have more state, trying different routes based on 404 and stuff, but not yet I am afraid.
>>18136 >'black frame' and similar false positives I thought as much would be an issue, but "blank" video/gif thumbnails seem like a non-issue to me. So what if I can't find dupes for the blank ones? I just won't search for dupes for those. Even if you don't find a fix for the blank first frame issue, allowing the function would still let people find dupes for every animation that doesn't have a blank first frame. >The good news is I have a plan to do comprehensive video duplicate discovery. We did a limited test with the tech (basically it'll be taking frame samples throughout the video and comparing those across files) Sounds like it might be more processing-intensive than just thumbnails, if much more accurate. >It should be able to match short webms with bloated gif re-encodes. Impressive. Not even saucenao will handle searches for video files, even though it does gifs just fine. >Content merge is not retroactive right now. If you add a tag to a 'bad' file, it won't get merged into the 'good' file even if you previously set them as dupes with a content merge. The merge is one-time at the time of the merge. Oh fuck, is this a much faster way to move tags from one image to another? I've just been selecting both dupes, opening the tag editor, highlighting all the tags on the inferior dupe, and then double-clicking to add them all.
>>18138 >Oh fuck, is this a much faster way to move tags from one image to another? Aha, it is. And it is easily set to a shortcut for 'same quality' dupes. There are also options for 'better quality' dupe and king dupe. What's the difference between these, both from each other and from the 'same quality' option? >Choose better dupe option by right clicking on the better dupe with two images selected >All tags moved to the inferior dupe and removed from the superior one What? >Deselect all >Double check with right click >Says the now-tagless dupe is set as the best quality dupe of the group I clearly don't understand this function. As far as I can tell, since I always delete lower quality dupes, I don't see a reason not to just use the same quality dupe option every time and then just delete the lower quality ones. Forget everything I just said. I need to spend some time learning how to use the duplicate processing page. Up until now, I've just been following this pattern >Import a ton of images from folder with filenames as tags >Tag the whole import at once with a tag (or a few) based on the folder it was in >go through the original folder with saucenao and import every image that exists on a booru >shortcut open potential dupes of hamming 8 on any booru import that wasn't already in db >Move tags to whichever file is superior >delete inferior file >close dupe page >repeat last 4 steps The dupe processing page seems like it will speed this up greatly, and setting file relationships to alternates seems like it will remove the need to tag any non-ordered set as a set.
>>18132 >Under the 'api link review' tab, is 'twitter tweet' linking to 'twitter syndication api tweet'? No, it's linking to nitter for some reason.
>>18136 >Please watch the changelog for experimental support and give it a go--if it works, or doesn't, please let me know. Nice, I will. Mapping Forward to keep and Back to delete in the archive filter has already been a huge time saver, it would be nice if I could also cram some duplicate filter bindings like toggle A/B and go back a pair on my mouse as well since I have the extra buttons. Then I could have my other hand completely free to further increase my research efficiency. I guess I could use completely different mouse shortcuts per context, but that seems confusing.
So, new QT6 version crashes when I try to quickly browse tabs, so that's cool. Nothing in the log about it at all, but this appeared when I started it back up.
v497, 2022/08/28 06:42:10: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x980e58.
v497, 2022/08/28 06:42:10: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x980e58.
v497, 2022/08/28 06:42:25: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x980e58.
v497, 2022/08/28 06:42:25: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x980e58.
v497, 2022/08/28 06:42:44: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x980e58.
v497, 2022/08/28 06:43:20: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x980e58.
v497, 2022/08/28 06:43:59: QWindowsContext::windowsProc: No Qt Window found for event 0x1c (WM_ACTIVATEAPP), hwnd=0x0x980e58.
Random small bug: "Close media viewer" shows up twice in the shortcut commands for the duplicate filter. Doesn't bother me, since I set that action in the all media viewers group, though.
Also, not sure if this is a Hydrus issue, but seeking media files with mpv as the viewer on Linux has been pretty buggy for a while now. Many files (especially GIFs) don't allow me to seek at all, just jumping back to the beginning, and others only have a few points that can be sought to, which varies per file. My guess is codec bullshit. Using the native Hydrus viewer works much better with regard to seeking; does using it have any downsides?
While I'm here, I would really appreciate it if the shortcuts system got an overhaul where all of the hardcoded shortcuts were exposed, so that they could be changed and/or other buttons could be bound to them. I understand there are bigger fish to fry, but there are people out there who would appreciate a shortcut update someday. Either way, keep up the great work, dev!
>>18143
>does using it have any downsides?
No audio. I set mine to use the native viewer for preview videos and gifs, which don't have sound anyway. I use mpv for videos in the media viewer, for sound. Works pretty well; I find preview audio distracting anyway.
>>18136
>Content merge is not retroactive right now. If you add a tag to a 'bad' file, it won't get merged into the 'good' file even if you previously set them as dupes with a content merge. The merge is one-time at the time of the merge. HOWEVER, hydrus remembers the duplicate relationship, and it remembers all the tags, even for files you deleted, so I will be able to write retroactive merge tech in in future, and we'll one day, with luck and a lot of work, have 'synchronised' merge. No need to keep the file around, if you would like to do this--hydrus remembers all the metadata and hashes, it doesn't need the actual file.
Excellent, that's exactly what I needed to know. Thank you for all your hard work, Dev-kun.
>>18136
>video duplicates
A quick and inaccurate version of finding video dupes would still be helpful in the meantime. Like take a single frame 5% into the video and compare that. Even if it's imperfect, I could still save a lot of space, because my db has 11257 videos at 134GB :P And I know there are lots of simple dupes, like different resolutions of the same video, and audio vs no audio.
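For what it's worth, a minimal sketch of that 'single frame at 5%' idea, using OpenCV and a simple difference hash. This is not how hydrus's own similar-files search works--the file names and the distance threshold here are made up for illustration:

import cv2

def frame_dhash( path, position = 0.05, size = 8 ):
    
    # grab one frame `position` of the way into the video
    cap = cv2.VideoCapture( path )
    total_frames = int( cap.get( cv2.CAP_PROP_FRAME_COUNT ) )
    cap.set( cv2.CAP_PROP_POS_FRAMES, int( total_frames * position ) )
    ok, frame = cap.read()
    cap.release()
    
    if not ok:
        raise ValueError( 'could not read a frame from ' + path )
    
    # difference hash: shrink to 9x8 greyscale, compare horizontal neighbours, pack 64 bits
    grey = cv2.cvtColor( frame, cv2.COLOR_BGR2GRAY )
    small = cv2.resize( grey, ( size + 1, size ) )
    bits = ( small[ :, 1: ] > small[ :, :-1 ] ).flatten()
    
    return sum( 1 << i for ( i, bit ) in enumerate( bits ) if bit )

def hamming( a, b ):
    return bin( a ^ b ).count( '1' )

# a pair is a dupe candidate when the hashes are close, much like hamming distance on thumbnails
if hamming( frame_dhash( 'a.webm' ), frame_dhash( 'b.mp4' ) ) <= 8:
    print( 'potential dupes' )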
A downloader that I made a while back recently stopped working. I managed to fix it as its site's API changed and I only had to change one parsing rule, but now downloaded files no longer get post/direct URLs added to them like they used to. What do? I can post the half-working version if necessary.
Hydrus seems a bit prone to crashing after I upgraded from v490 to v497 ("This program has stopped working"). It mostly seems to happen when I open some dialog (options, shortcuts, etc.) while there is some work ongoing (duplicate searching, importing). There doesn't seem to be a crash log generated anywhere, and the db log contains no errors for the crash. The only place where it is logged is in Windows' event viewer, and I'm not sure the content of that is going to be very useful. Is there anything I can try to see if it helps, or is there a way to tell Hydrus to gather debugging/crash logs explicitly? I think I will try a fresh re-install, at least.
>>18148 Fresh install did not help at all. Hydrus seems to be crashing haphazardly when I interact with it.
>>18121
furry.booru.org requires you to have some cloudflare cookies. you have to open the page in your browser, copy the cookies over, and then set the user-agent to be the same as your browser's. i think the hydrus companion extension can do this automatically.
>>18135
gallery-dl can download blocked files from e621, i think? if not, it can still grab metadata like tags and write them to a json file, and then it's possible to write some python and use the hydrus API to add the tags to the files.
>>18147
post it
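A rough sketch of that 'write some python' step, assuming gallery-dl's --write-metadata sidecars and the Client API on its default port. The 'service_names_to_tags' parameter name is my recollection of the API of this era--check your client's Client API help for the exact names your version wants--and the sidecar's tag layout varies per site:

import hashlib
import json
import pathlib
import requests

API = 'http://127.0.0.1:45869' # default Client API address
KEY = 'your access key here'   # make one under services->review services

media = pathlib.Path( 'file.webm' ) # the file gallery-dl downloaded and you imported
meta = json.loads( pathlib.Path( str( media ) + '.json' ).read_text() ) # its metadata sidecar

tags = meta.get( 'tags', [] )

if isinstance( tags, dict ): # some sites nest tags by category
    tags = [ tag for category in tags.values() for tag in category ]

# hydrus identifies files by sha256, so hash the file rather than pointing at a path
sha256 = hashlib.sha256( media.read_bytes() ).hexdigest()

response = requests.post(
    API + '/add_tags/add_tags',
    json = { 'hash' : sha256, 'service_names_to_tags' : { 'my tags' : tags } },
    headers = { 'Hydrus-Client-API-Access-Key' : KEY }
)

response.raise_for_status()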
(19.46 KB 681x472 hydrusnetwork.PNG)

I've put in the links, but it doesn't download the files. Does anyone know why?
(10.39 KB 391x312 hydrusnetwork2.PNG)

>>18151
When I add a download page, it gives me this error.
>>18152 go to network > pause and then uncheck the stuff that is paused
>>18153 Thanks for the help, it worked. :D
>>17908
>- fixed a problem that made saved page file sorts reset their sort order one time on update to v492. thank you to a user for noticing this and discovering the fix, and I'm very sorry for the inconvenience of changing your session and favourite search sorts. unfortunately there is no easy fix other than rolling back to a backup and jumping forward to this version
I just tried updating from v488 to v493, and this doesn't seem to be completely fixed. I had two url download pages, one of which is over a year old and another that is a couple months old. The newer one remembered its order, but the older one got reset.
If anyone else encounters this issue, I found a way to solve it. I rolled back to v488, selected all the files in the old download page, and clicked open in a new page. Then I updated to v493 again. The old download page still got its order reset, but the new page I opened with its files did not. So then I just removed all the files in the old download page and moved all the files from the new page to the old download page.
(7.25 KB 460x182 permanent booru v2.0.png)

>>18147 >>18150
>post it
I took a closer look and found that the part which associates URLs was broken by the API change as well, so I fixed that too. Here's the updated version, which seems to work in my testing. It's for the Permanent Booru.
>>18155
>I found a way to solve it
Scratch that: if I close and restart hydrus, it resets the old downloader page again. The real way to solve it is to just create a brand new downloader page, copy all the thumbnails over, and then update. Which is fine for me; I didn't really need all those old URLs to still be in there anyway.
>>18129
>Try the 'notebook tab alignment' option
That's not really what I mean. It looks like that just takes the tabs and rotates them 90 degrees, even the text. I still want the tabs to be upright, but just to the left like it is in Edge. A little search bar above them would be cool too, like Edge has it. This is probably a feature that would have to be created, but I think it would fit hydrus very well given its poweruser-heavy nature. Plus, like I said before, it would make nested tabs a lot better to work with, since you could do things like collapsing and expanding them and seeing all the different branches of parent and child tabs. It shouldn't be too much work, but I don't know how difficult pyqt (or pyside, I forgot which one you're using) is to work with, so maybe I'm asking for more than I thought. I encourage you to take a look at how Edge does it if you're not familiar with vertical tabs. I don't like Microsoft and I'm certainly not using their browser, but damn is it a killer feature, and they pretty much nailed how it should work on the first try. I hope it becomes a commonplace UI design in the future, because it's much nicer than horizontal tabs, both for navigation and for management.
Media viewer has seams again at 125% app scale
I packaged hydrus for Guix (https://guix.gnu.org/) recently. The patches were waiting to be merged for a few weeks, so it's still at version 495, but updating to new versions should be a lot quicker now that all the hard work has been done. The package is available since commit 9b8507df11, in the gnu/packages/image-viewers.scm file. Unfortunately I wasn't able to build the help files; I plan to fix that by the end of next month. I also encountered an issue with connecting to ptr servers that doesn't happen under root privileges; not sure why, but it's likely not an issue with hydrus but with guix instead. For people using hydrus from a directory:

guix shell -D hydrus-network

This command will give you an environment with all the stuff needed to run hydrus directly. Building help files is not possible yet, though.
I had a good week, finally getting around to some server updates I had been planning for a long time. Unfortunately, I didn't have time to get to anything else, so tomorrow's release is only for users who run a hydrus repository, and their janitors. The server now reports some fast counts of its content, and petition processing has a couple small improvements. The release should be as normal tomorrow.
>>18131
>Yeah, a pretty general rule for hydrus downloaders is they have automatic brakes, and the main trigger for that is if they run into a whole page of something that has nothing in it (and in the case of subscriptions, nothing new in it). In almost all cases, a gallery page with no files signals the end of the search. Sometimes it is an empty page, sometimes it is a duplicate of the previous one, sometimes it is a pseudo-404 page. If you think that search page should have files in it, try right-clicking on the entry in the search log and hit 'open urls' to see it in your browser. Might be you have hit a limit like the sankaku thing where they won't give you more than 500 files or something (25 pages?) unless you are logged in.
In this example, the problem page that my lolibooru gallery search got stuck on is this:
>https://lolibooru.moe/post?page=207&tags=long_hair+-rating%3Asafe
If I do search log > import new urls > from clipboard > only add new urls, and then go into "show search log", right-click the URL I just imported, and do "try again (and allow search to continue)", it will resume adding files and finding new pages. I'm almost certain that these aren't new URLs either, since this search only gets a few new images every few minutes, it seems. I don't know if this is relevant, but I just thought I'd add that there's no login script for the downloader that I'm using, yet it's able to access explicit images that you need to be signed in to view: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Lolibooru.moe
Anyway, my question is: why do I need to essentially recreate the gallery search in order to start searching onwards from the formerly problematic page? Is this an annoying quirk of the downloader I linked? I'll have to try this on sankaku later as well, since I had this problem over there too.
>>18162 I forgot to mention that I can't do the "try again (and allow search to continue)" trick on the original gallery query. I *have* to create a new gallery query in the manner I described above to resume finding images.
>>18149
OK, so I eventually found the report modes in the debug options, but their final messages don't get flushed to the log file before the crash, so that's not very helpful. This is pretty annoying, and a bit distressing, to have Hydrus crashing randomly and without any obvious pattern. What do I need for running from source? I'm thinking of trying to get tracebacks via the faulthandler module.
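For reference, the stdlib faulthandler idea is only a few lines. This is generic Python rather than anything hydrus-specific, and the log filename is arbitrary:

import faulthandler

# keep the file object alive; faulthandler writes straight to its fd when a fatal error hits
crash_log = open( 'crash.log', 'a' )

faulthandler.enable( file = crash_log, all_threads = True )

# optional heartbeat: dump every thread's stack every 60s until cancelled
faulthandler.dump_traceback_later( 60, repeat = True, file = crash_log )

# call faulthandler.cancel_dump_traceback_later() on a clean shutdown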
>>18164 Doh, I didn't look hard enough in the help files. I'll look into setting up a venv later.
>>18164
>What do I need for running from source?
While writing a guix package for hydrus I made a complete list of its primary dependencies:

;; stuff needed for tests
xvfb-run
python-nose
python-mock
python-httmock
;; primary python dependencies
python-beautifulsoup4
python-cbor2
python-chardet
python-cloudscraper
python-html5lib
python-lxml
python-lz4
python-numpy
opencv ; its python bindings are a drop-in replacement for opencv-python-headless
python-pillow
python-psutil
python-pylzma
python-pyopenssl
;; Since hydrus' version 494 it supports python-pyside-6 but
;; pyside-2 is still supported as a fallback.
python-pyside-2
python-pysocks
python-mpv ; needs libmpv to be available
python-pyyaml
python-qtpy
python-requests
python-send2trash
python-service-identity
python-six
python-twisted
;; these need to be in path but are not required
swftools
ffmpeg
miniupnpc
python ; <- an obvious exception, it's required

(text between ';' and a newline is a comment.)
There might be something else necessary for building help files, but I don't remember what - it's like two or three python packages at most, so it should be easy to get them from tracebacks.
https://www.youtube.com/watch?v=D5T9RByz9i8

windows
Qt5 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v498/Hydrus.Network.498.-.Windows.Qt5.-.Extract.only.zip
Qt6 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v498/Hydrus.Network.498.-.Windows.Qt6.-.Extract.only.zip
Qt6 exe: https://github.com/hydrusnetwork/hydrus/releases/download/v498/Hydrus.Network.498.-.Windows.Qt6.-.Installer.exe
macOS
Qt5 app: https://github.com/hydrusnetwork/hydrus/releases/download/v498/Hydrus.Network.498.-.macOS.Qt5.-.App.dmg
Qt6 app: https://github.com/hydrusnetwork/hydrus/releases/download/v498/Hydrus.Network.498.-.macOS.Qt6.-.App.dmg
linux
Qt5 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v498/Hydrus.Network.498.-.Linux.Qt5.-.Executable.tar.gz
Qt6 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v498/Hydrus.Network.498.-.Linux.Qt6.-.Executable.tar.gz

I had a good week. I got started on some long-delayed serverside janitor improvements, and I got so stuck in that it was all I worked on! This release has nothing important for regular users, so if you are not involved in running a hydrus server, you can skip it entirely!

server stuff

Only for server admins and janitors! Improving the janitor workflow is now my 'big job' for the rest of the year. This is the first step of what I'd like to fit into spare days over the coming weeks and months.

This week I updated the repositories so they cache counts of their various metadata. They can now quickly say 'I have 2,124,543 mappings' and so on. If you have a large server, it will take a few minutes to update as it counts everything up for the first time. Once you are booted, make sure you are in help->advanced mode, and on review services you will have a new 'service info' button. Click it and you can see all the numbers, including the current lengths of the petition queues. Anyone with an account can see these for now. If you want more privacy, I can figure something out, but tbh I think it is probably good for users to be able to see everything here.

The numbers in the petition processing page are fed by this too. No longer will it manually count up and max out at 1,000 petitions--it will deliver the actual number real fast.

Also, a sibling petition can now have both ADD and DELETE rows. This happens if the same account gives the same reason (like 'replacing a->b with a->c') to a sibling petition and a sibling pend. You now see those together, with that shared reason, and action it as one item. I suspect we'll need some more UI clientside to encourage using the same reason, but for now I have updated manage tag siblings to give the same 'reason' when you replace an existing sibling. Previously, this is where it would give an 'AUTO-CONFLICT...' style reason. Now, those things should be bundled into the same thing you see.

This stuff changes some of the hydrus network protocol. Normally, I would update the network version number, but that requires everyone to update. Since this only affects advanced users, and I expect I'll be doing more in coming weeks, I am not updating the version number. An old client will run into errors if it tries to pull petitions from a new server, but I think a new client will be able to work with an old server. In any case, if you are a server admin or janitor, please update your clients and servers at roughly the same time this week, or you'll get some harmless but annoying parse/404 errors.
As a side thing, as a server admin, if the service info numbers ever get borked, please hit 'regen service info' in your 'administrate services' menu. I've added extensive testing this week to ensure the update routines are mostly good, but I'm sure there are some complicated situations where the counting logic is dodgy. Let me know how you get on with it!
full list

almost all the changes this week are only important to server admins and janitors. regular users can skip updating this week

- overview:
- the server has important database and network updates this week. if your server has a lot of content, it has to count it all up, so it will take a short while to update. the petition protocol has also changed, so older clients will not be able to fetch new servers' petitions without an error. I think newer clients will be able to fetch older servers' ones, but it may be iffy
- I considered whether I should update the network protocol version number, which would (politely) force all users to update, but as this causes inconvenience every time I do it, and I expect to do more incremental updates here in coming weeks, and since this only affects admins and janitors, I decided not to. we are going to be in awkward flux for a little bit, so please make sure you update privileged clients and servers at roughly the same time
- .
- server petition workflow:
- the server now maintains an ongoing fast count of its various repository metadata, such as 'number of mappings' and 'number of petitions of type x'. when you fetch petition counts, no longer will it count live and max out at 1,000, it'll give you good full numbers every time, and real fast
- you can see the current numbers from the new 'service info' button on review services, which only appears in advanced mode. any user with an account key can see these numbers, which include the number of petitions in the queue. I can make this more private if you like, but for now I think it is good if advanced users can see them all
- in the petition processing page, sibling and parent petitions will now include both delete and add rows if the account and reason are the same. I'm aiming to get better 'full' coverage of a replace petition, so you can see and approve/deny both the add and the remove parts in one go. for fetching, these combined petitions count as 'delete' petitions, and won't appear in the 'add' petition queue
- when users encounter an automatic conflict resolution in the manage siblings dialog, those auto-petitioned pairs are now assigned the same reason as the original conflicting pended pairs. they _should_ show up together in the new petition processing UI
- as part of this, sibling and parent petitions are no longer filtered by namespace. you will see everything with that same account and reason in one go. let's try it out, and if it is too much, I will add filters clientside or something. since we are now starting to see add and remove together, we'll want to at least have the option to see everything
- .
- boring server stuff:
- the petition object is updated to handle multiple actions per petition, and the clientside petition UI is updated appropriately
- the server tracks 'actionable' petition counts as separate to the number of raw petition rows. some of this was happening before, but the logic is improved, including clever counting of the new petitions that include both add and delete rows
- for when my count-update logic inevitably fails, there is now a 'regen service info' entry in the 'administrate services' menu for all repositories. numbers generated will be printed to the server log
- some unusual repo upload logic is cleaned up (e.g. if a user with 'create permission' uploads a sibling or parent, any pending rows for that content will now be properly cleared)
- fixed a stupid swapped-logic bug where janitors who could only moderate siblings (and not parents) were only being given parent numbers and vice versa
- all server services now respond to a /busy check. it requires no authentication and just returns 1 or 0 depending on the current lock state
- fixed a bug where tag siblings or parents that were denied would still make a new definition record for the child/bad tag
- with all the fine number changes, fleshed out the server unit tests with more examples of submitting and altering content and then checking for numbers afterwards. now checked are: file add, file admin delete, mapping add, mapping admin delete, mapping petition, mapping petition approve+deny, parent add, parent admin delete, parent pend, parent pend approve+deny, parent petition, parent petition approve+deny
- significant refactoring of the tail end of the server content update pipeline. more things now go through logic-harmonised update methods that ensure the count is reliable
- did some misc server db and constant enum code cleanup
- .
- misc:
- to match the new change in the server, in the client, tag and rating services now store their 'num_files' service info count as the new 'num_file_hashes'. existing numbers will be converted over during update
- fixed a probably ten year old bug where 'num pending/petitioned files' had the same enum as 'num pending/petitioned mappings'. never noticed, since no service has done both those things
- if the upload pending process fails due to an unusual permission error or similar, the pending menu should now recover and update itself (previously it stayed greyed out)

next week

Back to small jobs. I had planned to do a little janitor stuff this week and then do regular work, but these number updates killed me and I ended up in a rabbit hole of unit tests making sure everything was good enough. The good news is I fixed some other long-time server issues in doing that, but the bad is I did nothing else. So, I'll do some small work and try to get to some github issues too. It would be nice if I could hammer out some final Qt6 problems so we can move to it fully in a couple weeks.
How do pixiv URLs get into Hydrus? When I take the converted touch/ajax/illust/ url and put it through the fetch test data, it doesn't have everything that I can see when I look at the same URL in my browser. Specifically, all the stuff on the translated tags seems to disappear once it goes into Hydrus. Is there something I'm missing? I desperately want Hydrus to grab the eng tags when it sees them. Even if it did grab them, I'd still have to figure out how to grab those instead of the normal ones, and I haven't done this kind of thing before. God I fucking hate Pixiv so fucking much.
>>18142 Just happened again, this time while trying to make changes to a parser. Random crashes are probably the coolest new feature unique to the QT6 version.
It'd be nice to have a stock reason for petitioning tags on files that have been indiscriminately assigned tags belonging to other files (e.g. random-ass imagesets on e-hentai; twitter roundups on pixiv)
>>18169 It looks like you need to add a header that specifies the language. My browser sends the header "Accept-Language" : "en-US,en;q=0.5" So I added that to my http headers under network > data > manage http headers for pixiv.net. And then I edited the pixiv API page parser to grab translated tags as well. Here it is. After you add it, make sure you update the url class links to connect "pixiv file page api" to "pixiv file page api parser (with translated tags)". That's under network > downloader components > manage url class links
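If you want to see the header's effect outside hydrus first, a quick sketch with requests. The illust id is made up, and the exact response layout is from my own poking at the ajax endpoint, so inspect the JSON yourself:

import requests

headers = {
    'Accept-Language' : 'en-US,en;q=0.5',  # ask pixiv for tag translations
    'User-Agent' : 'Mozilla/5.0',          # pixiv is unhappy without a browser-ish UA
    'Referer' : 'https://www.pixiv.net/'
}

# 12345678 is a made-up illust id
response = requests.get( 'https://www.pixiv.net/ajax/illust/12345678', headers = headers )
data = response.json()

# with the header set, tag entries should carry a 'translation' field where one exists
for tag in data[ 'body' ][ 'tags' ][ 'tags' ]:
    print( tag[ 'tag' ], tag.get( 'translation', {} ).get( 'en', '' ) )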
(85.42 KB 866x553 1644312933441.jpg)

ok, so i have a fairly small but still significant (~10000 pics+videos) collection of media. it's split about 80/20 between my phone and my pc respectively. i have built a home server, and i would like to move both collections there, merge them, and view them remotely. is it possible to do this with hydrus? would i just have to set up a hydrus server and then connect from the other devices?
>>18172 Thanks anon I really appreciate it
>>18165
OK, I got things running, but I had to get the latest version of opencv-python-headless instead of the version listed in the requirements; I was just getting a type error in the install script otherwise. Not sure if this will break anything, but I will be posting any useful tracebacks I get from random crashes that might occur this weekend. I also had to set 'pyqt6' instead of 'pyside6' in the env; is there any real difference?
What are the advantages of using multiple file services over just using tags to separate different kinds of files? Is it strictly a UI and privacy thing, or are there other kinds of differences, like performance or something else?
(55.91 KB 568x1080 MWIT4DKRDRpMQHYj[1].jpg)

>>18176
https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html
>This particularly matters when you are typing in search tags, since the tag you want, 'anatomy drawing guide', is going to come with thousands of others, starting 'a...', 'an...', and 'ana...' as you type. If someone is looking over your shoulder as you load up the images, you want to preserve your privacy.
>I think the best simple idea for most regular users is to try a sfw/nsfw split. Make a new 'sfw' local file domain and start adding some images to it. You might eventually plan to send all your sfw images there, or just your 'IRL' stuff like family photos, but it will be a separate area for whitelisted safe content you are definitely happy for others to glance at.
>>18177 so then it is just a privacy thing. I don't show my files to others so I'll just keep the one service then.
>>18178
There are other use cases: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html#advanced
I can also think of having split services for different types of image media, like one for 2D, one for 3DCG, and one for 3DPD. The NSFW/SFW split is just the one use case Dev-kun thinks is most applicable to most users.
>>18138 >>18139
>Oh fuck, is this a much faster way to move tags from one image to another?
So when you open the duplicate processing page, check the button at the top, 'edit default duplicate metadata merge options'. It is a pretty dense dialog that I have never really liked for being awkward, but you can set up tag copy from worst file to best. You can copy ratings, archive status, and URLs too. Notes will be added soon, and in the more distant future I'd like to give the whole thing a full pass. Some tags you always want to be copied over, some you never do. I think I'd like to make some templates for that sort of thing and have live sync, like we were talking about above, rather than one-time copy, but live sync is a whole order more complex, so I am still thinking about it all.
To get to grips with the duplicate processing page, and my current iteration of a duplicate filter, please check out this help page: https://hydrusnetwork.github.io/hydrus/duplicates.html
It sounds to me like you have the basics of the process down, but I've written a filter to make it all a bit more convenient. Let me know what you think! I've always had mixed feelings, and far more plans than I've ever had time for, but I'll keep on pushing and always hope to improve it.
>>18140
Damn. I'm pretty sure that is your problem. It sounds like some downloader objects got messed up somewhere--maybe a borked import of a new downloader, maybe something else. Your best bet is to hit up the other dialogs in that submenu. Try 'manage url classes' first: remove anything 'nitter'/'twitter'/'tweet' related and then use the 'add defaults' button to restore back to the original files I put in the client. Clear out nitter and twitter stuff, then add them back in, and see if the API link is fixed.
>>18142 >>18170
Damn, I am sorry for the trouble. Which version are you using? And what OS are you on? That 'No Qt Window found for event' is some logspam that sometimes appears in Linux terminals, usually when running from source. It is usually hidden in the built releases. It has been around for years, and I never figured out what did it (previous searches suggested it was a Qt-side thing, although I don't totally believe that), but it is generally harmless in my experience. It sounds like the Qt6 build is exposing it.
Unfortunately, crashes are often difficult to track down. If you are on Linux, crashes are often caused by simple OS .so file conflicts, and a nice solution can often be to run from source. I will be expanding the help section on running from source soon, and there are some other projects too, like the AUR package and the new flatpak, that afaik make running from source easier.
>>18143
Thank you, I will fix that duplicated 'close media viewer'. Sorry for the mpv issues. I can't do a huge amount, since I don't handle any of that code (I just give it a window to use and a file to load, and then later a seek position to go to, and everything else is out of my hands), but I can schedule a look at updating the version of mpv we use in the built client.
Yeah, and I'm sorry about the shortcuts. It has been a multi-year process to get to where we are, and while I try to fit little updates in when I have some time free, moving one more hardcoded panel to the custom system, this whole year I have been swamped with what feels like urgent work and I haven't found much time to do nice cleanup like that. I'm crossing my fingers that the new year is going to calm things down, now that I have multiple local file services and some other big bumps out of the way.
>>18148 >>18149 >>18164 >>18165
Damn, I am sorry for the trouble. You are having the same issue as >>18142 . Sounds like you are also running Linux? I build Qt5 on Ubuntu 18.04 and Qt6 on 20.04, so these crashes could also be explained by that. Looks like you found the 'running from source' help. Please let me know if that fixes your situation. If it does, then this was the problem--some sort of .so file conflict between Ubuntu 20.04 and your flavour.
I'm going to be updating the 'running from source' help for Windows 7 users who can't run Qt6. If you can let me know what you found confusing about the help, that would be great. Maybe making it easier to find in the first place?
>>18166
'mkdocs-material' is the one you need to build the docs. You can look up the mkdocs help--it is great--for more info, but if you want to play around, get it installed with pip and then run 'mkdocs serve' in the hydrus base directory. It'll run a server and serve the docs live in your browser. 'mkdocs build' creates the html files and takes -d for a target directory.
>>18181
>Sounds like you are also running Linux?
Nope, Windows 10. Please see my follow-up and question here: >>18175
In any case, Hydrus didn't crash today while it's been running from source. I'll keep running from source for a few days to see if any crashes happen again.
>I'm going to be updating the 'running from source' help for Windows 7 users who can't run Qt6. If you can let me know what you found confusing about the help, that would be great. Maybe making it easier to find in the first place?
The docs search returns 'a quick note about Linux flavours' instead of 'running from source' when searching for the latter, so I had to look more than twice to actually find the correct page.
>>18171
Sure--what would the text be for that? Something like: "tags that were applied overbroadly to a group"? Except that sounds too stuffy. Maybe: "group-spammed tags that do not apply to these specific files"? "mass copy-pasted junk"?
>>18173
You would be able to rig something together, and this stuff will become easier in the years to come, but I would not recommend the hydrus server to any new users. If you are completely new to hydrus, then please check out the help and walk through the starting steps to get a feel for the program: https://hydrusnetwork.github.io/hydrus/introduction.html
If you don't like hydrus--no worries. If you do, then start expanding your collection and learn what you can do and how you might want to access things in different ways. Don't move files from your phone just yet, since that's the difficult part. Once you know more--or if you are already familiar with the client--then you can play around with the Client API. Users have made some cool (but technically complicated) tools that let you browse your collection on your PC from your phone: https://hydrusnetwork.github.io/hydrus/client_api.html
I also know anons who have rigged some clever VNC situations with their phone. There are a lot of options, none of them super easy, but what to do depends on the specifics of your situation and preferences, and if you are new to hydrus you'll only know what you want with more experience. I would generally not recommend the hydrus server for anything except slowly sharing files between hydrus clients, and I expect it to be eclipsed in the coming years by just me writing some code so clients can talk to each other directly using the API.
>>18175
Thank you, please keep me updated. I'll note that about opencv. Sounds like it could be updated in my requirements.txt.
pyqt6 and pyside6 are two different libraries that both do the same thing--wrapping the Qt6 C++ dll/so files so python can use them. One of them is 'official', I think PyQt; one of them used to be more fully featured, I think PySide; but these days they are pretty much the same. A helper library called qtpy adds another wrapper layer so hydrus can use either. I prefer PySide since it doesn't bitch so much about small type errors, but whatever works for you. Also, if Qt6 really is your crashing problem, you can get 'PyQt5' and run that instead.
EDIT
>>18182
Damn, Win 10, that is odd. If things are still ok from source, that's great. I'll see if I can improve that bad search at all, too. Please continue to let me know how things work out.
>>18176 >>18177 >>18178 >>18179
Yeah, a place to store your family photos that you can search with your sister looking over your shoulder is a primary use case, but there are a bunch of advanced hydrus autists who have figured out some clever workflows around moving files from one location to another. Since the system is new, I am still waiting on feedback on what works best and what is just a rabbithole of over-categorisation.
The 'privacy' thing is really about partitioning things into easily separable locations. If you have 500 files that are very special and separable from the 200,000 files of other stuff, making a new file service for them can be helpful. Now this tech is in place, I expect to use it as a safe file selection system in future. For instance, if I get client-to-client comms going through the Client API, you could make a file service called 'share with dave' and give your mate Dave a key to a Client API share that only shared files from that service.
Then you copy files to that location when you find something Dave might like. Nice easy and firm way to separate things without the complexity and occasional fuzziness of tags.
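To illustrate the qtpy layer mentioned above: QT_API is the real environment variable qtpy reads, while the rest is a toy. The same import line works whichever binding is installed:

import os

os.environ.setdefault( 'QT_API', 'pyside6' ) # or 'pyqt6', 'pyqt5', 'pyside2'

from qtpy import API_NAME, QtWidgets # one import path for all the bindings

print( API_NAME ) # which binding qtpy actually loaded

app = QtWidgets.QApplication( [] )

print( type( app ).__module__ ) # e.g. PySide6.QtWidgets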
>>18174 No problem, I never thought that I could get translated tags from pixiv so I'm glad your post made me look into it.
>>18183
>Sure--what would the text be for that?
How about "cross-contamination of collection tags", "homogeneous collection tags" or "flooded tags"? "mass copy-pasted junk" also works. Thanks.
I'm trying to download some tags from sankaku, and to get around the issue of expiring links I added a '1 request per 10 minutes' limit for the api domain. However, instead of waiting 10 minutes between each page, it says "overriding bandwidth in 30 seconds", and I can't figure out why that is happening.
If I choose "forget" from the pending public tag repository upload menu, will it remove the tags I added from my files?
>>18187 It should remove all additions (uploads/pendings) and removals (petitions) that have not yet been committed, yes. Choosing 'forget' won't undo anything that has already been committed, however.
Is there any particular reason why there isn't an option to just set a selection of images as 'not related', outside of the duplicate filter? I realize I can set the images as alternates, but that's not really ideal. I suppose I can tag the images I know are not duplicates or alternates of each other with an arbitrary tag, then search for that tag in the duplicate filter page and use the buttons to show random pairs and to set 'not related'.
Is there a way to (systematically) update or re-download archived files? Consider these use-cases:
>downloaded files from a booru using a subscription, but afterwards someone has added more tags to the original on the booru, so re-downloading or updating those files will import the new tags
>downloaded hundreds of Hentai Foundry files on Hydrus v496 or earlier and deleted hundreds, but now v497 has a crawler that includes artist description notes, so re-downloading or updating those files will download the artist description notes
I understand if the answer is no, since it may be difficult to automatically resolve local changes like added tags.
>>18190
Automatically in Hydrus? Probably not, but you can manually copy out file source URLs and feed those to a URL downloader page, with import options set to get tags and to force page fetches for already-recognised URLs and hashes. There is also nothing stopping you from writing something in Python for your purposes. I have no idea how or if Hydrus keeps track of which tags are locally added for remote services (despite your tag petitions and uploads always being reflected on your side), but I don't think there are any such records for the local file services. Mind you, you should avoid hammering any sites with too many requests per second/minute/hour/day, or you could end up getting IP banned. Feel free to correct me if I'm spreading misinformation.
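If you do go the Python route, re-queuing copied source URLs through the Client API is only a few lines. A sketch assuming the default port and an access key; the 'fetch even if already seen' behaviour still comes from the import options of the page that receives the URLs:

import requests

API = 'http://127.0.0.1:45869' # default Client API address
KEY = 'your access key here'   # make one under services->review services

urls = [
    'https://example.booru/post/123', # your copied source urls go here
]

for url in urls:
    
    response = requests.post(
        API + '/add_urls/add_url',
        json = { 'url' : url },
        headers = { 'Hydrus-Client-API-Access-Key' : KEY }
    )
    
    response.raise_for_status()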
>>18191
I did this after updating the pixiv parser (>>18172): selected all the images that came from pixiv, copied all the urls, and pasted them into a simple downloader page. It can handle around 20k without problems. But there were images missing--those that were deleted during duplicate filtering. They still have the original URL, but Hydrus won't treat them as duplicates, because the original one from pixiv was deleted. I'm too lazy to re-delete and re-merge. I'm not complaining; the most important tag for me is creator:, and I overwrite it in every subscription.
A while ago I requested incremental ratings/counters in addition to the fixed numeric ratings we have now. Since I saw ratings mentioned a lot in the more recent updates, I wanted to bring this up again, since I would use this a lot. It would probably completely replace the star ratings for me.
Use cases:
- Nut counter - you came, you press the nut button, and you can make a hall-of-fame tab for the hottest pics/videos you have
- Arbitrary and incremental ratings - often your daily perception of what is 1 star and what is 5 stars varies a lot - or you notice that these new pics you found are even hotter than the 50 that already have 5 stars
As far as the UI goes, my first thought would be to have "+1" and "-1" buttons in the rating interface (or +/- any configurable number), and/or to be able to enter any arbitrary float32 number directly in the rating interface.
>>18193 I support the nut counter idea because I already do this with a tag.
>>18193 I'd also like a nut counter. Right now I'm just using an on/off one star rating for my favorites, as I found rating every image on a scale was too much of a pain. Counters (with keybindings for adding and subtracting) would be great.
I get this error sometimes when changing the sort-order on a file page:

v497, linux, frozen
AttributeError
'list' object has no attribute 'random_sort'

File "hydrus/client/media/ClientMedia.py", line 1875, in Sort
media_sort_fallback.Sort( self._location_context, self._sorted_media )
File "hydrus/client/media/ClientMedia.py", line 3478, in Sort
media_results_list.random_sort()
how do i download more than 500 files from sankakucomplex? a few weeks ago it used to be 1000 files max and i limited my searches accordingly by adding "date:<=2020-01-01" to my search query for example to reduce the result count to under 1000. now that the limit seems to be 500, this gets even more annoying if i want to have tagged images of my favourite artists. i know this is a sankaku issue and not a hydrus issue, but does anyone have tips to make this mass downloading more efficient? thanks in advance.
>>18197 Buy Sankaku gold.
I had a great week catching up on a variety of work, mostly bug fixes. I have nailed down more Qt6 issues, some sources of instability, a problem with the duplicate filter matching up pixel duplicates, and there’s also a new version of mpv for Windows users and an updated twitter downloader. The release should be as normal tomorrow.
Is there a way to escape colons in order to use them in non-namespaced tags? I tried to use the standard backslash but it just added the literal backslash and still had a namespace.
>>18200 You could use a different unicode character, like https://www.compart.com/en/unicode/U+A789
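For example, the swap is a one-liner in Python. U+A789 is 'modifier letter colon', which renders like ':' but means nothing to the tag parser:

tag = 'time 10:30'.replace( ':', '\uA789' )

print( tag ) # 'time 10꞉30', no namespace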
>>18183 Hey, I'm >>18142. Yeah, it sounds like the same thing as >>18164. I'm also running Windows 10 and not Linux, v497. I planned on waiting for v499 and trying source+Qt5 if it's still crashing. FWIW, I just had another crash a second ago, and the client was minimized in the background when it happened. It was only being interacted with via the API at the time (Hydrus Companion's URL checking).
>>18181
>Maybe making it easier to find in the first place?
Personally, I don't think it's that hard to find; it's right where you'd expect it in the normal workflow: Getting Started > Installing > From Source
>>18202
I'm >>18164 . I have been running from source for a few days now, and I haven't had any crashes at all, although it might be because the faulthandler module just quashes them. The only tracebacks I've been getting seem to be generated from mpv when playing video files, and they're pretty much identical each time. Again, Hydrus has not crashed at all, not even when these tracebacks are printed. I do think I will try the v499 .exe to see if it's more stable.

Windows fatal exception: code 0xe24c4a02

Thread 0x000043cc (most recent call first):
  File "<hydrus dir>\venv\lib\site-packages\mpv.py", line 634 in _event_generator
  File "<hydrus dir>\venv\lib\site-packages\mpv.py", line 855 in _loop
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 953 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00003570 (most recent call first):
  File "<hydrus dir>\hydrus\client\ClientFiles.py", line 2382 in MainLoopBackgroundWork
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 401 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00003738 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 367 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00004898 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 367 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00004bb8 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\client\metadata\ClientTagsHandling.py", line 551 in MainLoop
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 401 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00000a48 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\client\ClientDownloading.py", line 250 in MainLoop
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 401 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00001104 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\client\networking\ClientNetworking.py", line 480 in MainLoop
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 401 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x000016e8 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\client\ClientCaches.py", line 1126 in MainLoop
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 401 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00000efc (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\queue.py", line 180 in get
  File "<hydrus dir>\hydrus\core\HydrusDB.py", line 817 in MainLoop
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 401 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00001db0 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 671 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x00002d70 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 671 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x000024c0 (most recent call first):
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 324 in wait
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 607 in wait
  File "<hydrus dir>\hydrus\client\importing\ClientImportSubscriptions.py", line 2019 in MainLoop
  File "<hydrus dir>\hydrus\core\HydrusThreading.py", line 401 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x000037a4 (most recent call first):
  File "<hydrus dir>\venv\lib\site-packages\twisted\internet\selectreactor.py", line 38 in win32select
  File "<hydrus dir>\venv\lib\site-packages\twisted\internet\selectreactor.py", line 100 in doSelect
  File "<hydrus dir>\venv\lib\site-packages\twisted\internet\base.py", line 1328 in mainLoop
  File "<hydrus dir>\venv\lib\site-packages\twisted\internet\base.py", line 1315 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 953 in run
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 1016 in _bootstrap_inner
  File "<%localappdata%>\Programs\Python\Python310\lib\threading.py", line 973 in _bootstrap

Thread 0x000046b8 (most recent call first):
  File "<hydrus dir>\hydrus\client\ClientController.py", line 1619 in Run
  File "<hydrus dir>\hydrus\hydrus_client.py", line 234 in boot
  File "<hydrus dir>\client.py", line 14 in <module>
>>18197
Are you logged in? I am downloading all my votes right now, month by month, using the date filter, and some had over 3000 votes. Works fine. I'm using this downloader: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Sankaku
It shows up as "sankaku chan beta tag search" in hydrus. I also removed ALL bandwidth limits. It's downloading an image every second or so, and I'm yet to get throttled. The only annoying part is that for months with more results, I have to pause the search every 1000 results so the import has time to catch up.
>>18186
Otherwise the search finds all the images very quickly and the links expire by the time they are imported.
https://www.youtube.com/watch?v=OIFASfPkw9g

windows
Qt5 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v499/Hydrus.Network.499.-.Windows.Qt5.-.Extract.only.zip
Qt6 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v499/Hydrus.Network.499.-.Windows.Qt6.-.Extract.only.zip
Qt6 exe: https://github.com/hydrusnetwork/hydrus/releases/download/v499/Hydrus.Network.499.-.Windows.Qt6.-.Installer.exe
macOS
Qt5 app: https://github.com/hydrusnetwork/hydrus/releases/download/v499/Hydrus.Network.499.-.macOS.Qt5.-.App.dmg
Qt6 app: https://github.com/hydrusnetwork/hydrus/releases/download/v499/Hydrus.Network.499.-.macOS.Qt6.-.App.dmg
linux
Qt5 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v499/Hydrus.Network.499.-.Linux.Qt5.-.Executable.tar.gz
Qt6 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v499/Hydrus.Network.499.-.Linux.Qt6.-.Executable.tar.gz

I had a great week catching up on bug reports.

highlights

The changelog is quite long this week. It is mostly smaller bug fixes that aren't interesting to read.

Some users have reported crashes in Qt6, and I got one crash IRL myself, which is very rare for me. I am not totally sure what has been causing these, but I fixed some suspects this week, so let me know if things are better.

I am also updating mpv for Windows users this week. It is a big jump, about a year's worth of updates, and I feel like it is a little smoother. It is also more stable and supports weirder files.

And thanks to a user's work, there is an expansion to the twitter downloader. You can now download a twitter user's likes, and from twitter lists, and--if you can find any--twitter collections.

If you are a big 'duplicates' user, there is a subtle change in the potential duplicates search this week. Check the changelog for full details, but I think I've fixed the 'I set "must be pixel dupes" but I saw a pair that wasn't' issue.
full list

- mpv:
- updated the mpv version for Windows. this is more complicated than it sounds and has been fraught with difficulty at times, so I do not try it often, but the situation seems to be much better now. today we are updating about twelve months. I may be imagining it, but things seem a bit smoother. a variety of weird file support should be better--an old transparent apng that I know crashed older mpv no longer causes a crash--and there's some acceleration now for very new CPU chipsets. I've also insisted on precise seeking (rather than keyframe seeking, which some users may have defaulted to). mpv-1.dll is now mpv-2.dll
- I don't have an easy Linux testbed any more, so I would be interested in a Linux 'running from source' user trying out a similar update and letting me know how it goes. try getting the latest libmpv1 and then update python-mpv to 1.0.1 on pip. your 'mpv api version' in _help->about_ should now be 2.0. this new python-mpv seems to have several compatibility improvements, which is what has plagued us before here
- mpv on macOS is still a frustrating question mark, but if this works on Linux, it may open another door. who knows, maybe the new version doesn't crash instantly on load
- .
- search change for potential duplicates:
- this is subtle and complicated, so if you are a casual user of duplicates, don't worry about it. duplicates page = better now
- for those who are more invested in dupes, I have altered the main potential duplicate search query. when the filter prepares some potential dupes to compare, or you load up some random thumbs in the page, or simply when the duplicates processing page presents counts, this all now only tests kings. previously, it could compare any member of a duplicate group to any other, and it would nominate kings as group representatives, but this led to some odd situations where if you said 'must be pixel dupes', you could get two low quality pixel dupes offering their better king(s) up for actual comparison, giving you a comparison that was not a pixel dupe. same for the general searching of potentials, where if you search for 'bad quality', any bad quality file you set as a dupe but didn't delete could get matched (including in 'both match' mode), and offer a 'nicer' king as tribute that didn't have the tag. now, it only searches kings. kings match searches, and it is those kings that must match pixel dupe rules. this also means that kings will always be available on the current file domain, and no fallback king-nomination-from-filtered-members routine is needed any more
- the knock-on effect here is minimal, but in general all database work in the duplicate filter should be a little faster, and some of your numbers may be a few counts smaller, typically after discounting weird edge case split-up duplicate groups that aren't real/common enough to really worry about. if you use a waterfall of multiple local file services to process your files, you might see significantly smaller counts due to kings not always being in the same file domain as their bad members, so you may want to try 'all my files' or just see how it goes--might be far less confusing, now you are only given unambiguous kings. anyway, in general, I think no big differences here for most users except better precision in searching!
- but let me know how you get on IRL!
- .
- misc:
- thanks to a user's hard work, the default twitter downloader gets some upgrades this week: you can now download from twitter lists, a twitter user's likes, and twitter collections (which are curated lists of tweets). the downloaders still get a lot of 'ignored' results for text-only tweets, but this adds some neat tools to the toolbox
- thanks to a user, the Client API now reports brief caching information and should boost Hydrus Companion performance (issue #605)
- the simple shortcut list in the edit shortcut action dialog now no longer shows any duplicates (such as 'close media viewer' in the dupes window)
- added a new default reason for tag petitions, 'clearing mass-pasted junk'. 'not applicable' is now 'not applicable/incorrect'
- in the petition processing page, the content boxes now specifically say ADD or DELETE to reinforce what you are doing and to differentiate the two boxes when you have a mixed petition
- in the petition processing page, the content boxes now grow and shrink in height, up to a max of 20 rows, depending on how much stuff is in them. I _think_ I have pixel perfect heights here, so let me know if yours are wrong!
- the 'service info' rows in review services are now presented in a nicer order
- updated the header/title formatting across the help documentation. when you search for a page title, it should now show up in results (e.g. you type 'running from source', you get that nicely at the top, not a confusing sub-header of that article). the section links are also all now capitalised
- misc refactoring
- .
- bunch of fixes:
- fixed a weird and possibly crash-inducing scrolling bug in the tag list some users had in Qt6
- fixed a typo error in file lookup scripts from when I added multi-line support to the parsing system (issue #1221)
- fixed some bad labels in 'speed and memory' that talked about 'MB' when the widget allowed setting different units. also, I updated the 'video buffer' option on that page to a full 'bytes value' widget too (issue #1223)
- the 'bytes value' widget, where you can set '100 MB' and similar, now gives the 'unit' dropdown a little more minimum width. it was getting a little thin on some styles and not showing the full text in the dropdown menu (issue #1222)
- fixed a bug in similar-shape-search-tree-rebalancing maintenance in the rare case that the queue of branches in need of regeneration becomes out of sync with the main tree (issue #1219)
- fixed a bug in the archive/delete filter where clicks that were making actions would start borked drag-and-drop panning states if you dragged before releasing the click. it would cause warped media movement if you then clicked on hover window greyspace
- fixed the 'this was a cloudflare problem' scanner for the new 1.2.64 version of cloudscraper
- updated the popupmanager's positioning update code to use a nicer event filter and gave its position calculation code a quick pass. it might fix some popup toaster position bugs, not sure
- fixed a weird menu creation bug involving a QStandardItem appearing in the menu actions
- fixed a similar weird QStandardItem bug in the media viewer canvas code
- fixed an error that could appear on force-emptied pages that receive sort signals

next week

We are due a cleanup week, and I think I need one. Just some simple background work for a bit so I can rest, and I'll fit in some other small work like this week. As usual, I don't have anything exciting planned for the anniversary of v500.
I was thinking about making that the week for the exclusive switch to Qt6, but with some users still getting crashes, I am less keen. Thanks everyone!
(1.05 MB 1660x2185 (you).jpg)

>>18207 >the anniversary of v500 Impressive. Thank you so much anon.
is there a way to make gallery searches import files from oldest to newest, like how subscriptions do, instead of newest to oldest?
>>18208 I wanna lick that pony pussy.
(11.20 KB 542x227 sad error.png)

im trying to create a hydrus server on a remote machine (ubuntu). i have the server running on port 45870, and an nginx reverse proxy redirecting http://hydrus.mydomain.com:80 and https://hydrus.mydomain.com:443 to http://127.0.0.1:45870. whenever i try to create an administrative service on my windows client, the client will hang indefinitely if i use port 443, and if i use port 80 i get pic rel. its definitely not a DNS resolution problem. any ideas?
(13.26 KB 582x432 sad site.png)

>>18211 heres the nginx site. i also tried changing the proxy_pass line to https but to no avail.
>>18183 >>18204 >>18202 here. 499 crashed after an hour.
```
v499, 2022/09/08 23:38:32: hydrus client started
v499, 2022/09/08 23:38:35: booting controller…
v499, 2022/09/08 23:38:35: booting db…
v499, 2022/09/08 23:38:35: checking database
v499, 2022/09/08 23:38:37: updating db to v498
v499, 2022/09/08 23:38:37: updated db to v498
v499, 2022/09/08 23:38:39: updating db to v499
v499, 2022/09/08 23:38:46: updated db to v499
v499, 2022/09/08 23:38:46: preparing db caches
v499, 2022/09/08 23:38:46: initialising managers
v499, 2022/09/08 23:38:50: booting gui…
v499, 2022/09/08 23:38:50: starting services…
v499, 2022/09/08 23:38:50: The client has updated to version 499!
v499, 2022/09/08 23:38:51: Running "client api" on port 45869.
v499, 2022/09/08 23:38:51: services started
v499, 2022/09/08 23:40:03: Physically deleted 3 files and 3 thumbnails from file storage.
v499, 2022/09/09 00:12:38: public tag repository sync: processing updates
v499, 2022/09/09 00:13:05: public tag repository processed 47,746 definitions at 1,791 rows/s
v499, 2022/09/09 00:26:56: public tag repository processed 550,998 content rows at 663 rows/s
v499, 2022/09/09 01:10:46: database maintenance - analyzing done!
v499, 2022/09/09 01:40:05: Physically deleted 5 files and 5 thumbnails from file storage.
```
>>18213 Okay, no idea how to do code blocks, but whatever, that's the whole log before the crash.
Upgraded from v497 QT5 -> v499 QT5. Upgrade process was to copy files from the zip, overwriting all files in the hydrus destination folder. Video files do not play in the client now; they all say to open externally, and I get a message saying 'MPV is not available'. Rolled back to v497 QT5 and all appears well again.
>>18207 Oh shit, mpv update. I'm on Linux, haven't done extensive testing but searching in videos and GIFs seems to be working much better. My about page shows 1.109 for the mpv api though.
Maybe it's just my installation, but I can't seem to add any new PTR siblings to "tagme:artist" or seemingly any of the tag's other siblings. Hydrus hangs immediately upon clicking the 'add' button in the tag sibling dialogue.
WHEN DO WE GET A DUMB ANDROID CLIENT TO CONNECT TO A SMART SERVER ON THE SAME NETWORK
When will there be an option to automatically delete exact duplicates (without review)?
Will there ever be a tool that automatically tags images you add from your pc by looking them up on somewhere like r34 or sankakucomplex? (So like a bot that searches for the original post and just takes the tags from there)
so I was having trouble booting after an update. tried a clean install, but I copied over the db folder. not seeing anything in my client now. I overwrote my mappings, didn't I
>>18221 to add: I booted up the extracted v499 client and then copied it over to update. that's probably what fucked me, I imagine. no backups too, RIP
>>18220 Most boorus support searching by MD5, you can construct the API search link and feed that to hydrus. I do it to import everything downloaded by grabber
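To illustrate that workflow (hedged: the danbooru-style endpoint and 'md5:' metatag here are assumptions--check your target booru's own API docs), a lookup URL might be built like this:

```python
# Hedged sketch: build a booru MD5 search URL for a local file.
# The danbooru-style posts.json endpoint and 'md5:' metatag are assumptions;
# other boorus use similar but not identical query formats.
import hashlib

def md5_search_url(path):
    with open(path, 'rb') as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return 'https://danbooru.donmai.us/posts.json?tags=md5:' + digest

# the resulting URL can then be fed to a hydrus url import page
print(md5_search_url('some_file_from_grabber.jpg'))
```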
(32.71 KB 795x594 5fe.jpg)

>>18222 Press F to express your condolences. F
>>18206 Now it crashes upon launching the dupe filter. Win7, Qt5 ZIP install. Where do I see the logs, if any are written?
Hey, everyone. My dad died last night. I don't know how any of the timetable is going to work out yet, but I am not going to be able to keep up with my hydrus schedule for the time being. Please expect v500 and my general responses to be a week or two late. I will post to all my normal channels when I am back to regular schedule.
>>18226 Sorry to hear about that. Take whatever time you need and don't worry about hydrus. We'll be here when you're ready to come back.
>>18226 Very sorry for your loss. Family comes first in difficult times like these. Take all the time you need.
>>18226 Oh shit that's terrible! Well just keep us up to date whenever you can but yeah family first man
>>18226 Sorry to hear. F to pay respects. Take whatever time you need. We're not going anywhere.
>>18226 ah shit sorry for your loss
>>18226 good thing i read this before bumping my issue sorry to hear that, fren
Under windows 10, 499QT6 seems to have introduced instabilities, which are fixed upon moving the database back to 498QT6. Images downloaded in 499 say 'No media to display' before crashing, though the images are present on disk and the thumbs are correct. It displays images downloaded in previous versions, but crashes almost instantly. File integrity/presence checks report no problems. All file prompts crash it (import file/folder, migrate db directory selection), even when 'use Qt file/directory selection dialogs' is enabled. Nothing is logged for these crashes
>>18234 Maybe it's triggered by continuing a gallery download started in 498, as the 'No media to display' message isn't shown for images downloaded in 499 from a new downloader. they still crash shortly after opening them though
>>18225 logs are in the db folder. "client - [year]-[month].log"
>>17886 >I'd estimate it takes a couple months of background work to fully catch up Woah fuck this
>>18181 In regard to the "Running from source" docs, venv/bin/activate seems to be a no-go. I don't know if it's out of date or version specific or what, but for my python 3.9.6 install, it's venv/Scripts/activate. And sorry to hear about your dad, dev.
>>18239 Hi, I'm an idiot. Disregard my fuckery. Except for the last line. My condolences.
>>18237 Yeah unless your file collection is both huge and constantly growing, the ptr isn't really worth it imo. I used to sync with it but I stopped around last year.
>>18213 here. I'm trying to run from source, but can't get it to work.
```
Traceback (most recent call last):
  File "C:\x\Hydrus Network\hydrus\hydrus_client.py", line 19, in <module>
    from hydrus.core import HydrusBoot
  File "C:\x\Hydrus Network\hydrus\core\HydrusBoot.py", line 3, in <module>
    from hydrus.core import HydrusConstants as HC
  File "C:\x\Hydrus Network\hydrus\core\HydrusConstants.py", line 5, in <module>
    import yaml
ModuleNotFoundError: No module named 'yaml'
```
I have PyYAML, but... IDFK.
>>18226 Sorry to hear man, fuck. Take care of yourself.
>>18226 my condolences
>>18226 Oh man, that really sucks. Please take as much time as you need to grieve!
>>18226 Sorry for your loss.
(264.38 KB 1920x1080 F.jpg)

>>18226 Damn, sorry to hear. Take all the time you need. >>18241 I understand why the PTR is the way it is, but I think that most anons wouldn't mind connecting to a central server which just spat out PTR tags for a given hash.
>>18226 Oh man. I'm so sorry to hear that, Dev. Stay strong, brother.
>>18243 Ignore this, I just wasn't thinking. Accidentally calling the py file directly. My condolences again, dev. Stay strong.
Thanks everyone for the kind comments. There have been dozens of things to do, but the bulk of the immediate work is now done. My Dad's death was completely unexpected, so this has been an awful shock, far earlier than I ever thought, but we had a great relationship, and I have no big regrets hanging over me. For hydrus, I am missing work and keen to get back to it when my schedule is free again. More than anything right now, I am simply busy and tired. I won't rush things, but I'll have some empty days coming up over the next week and would like to catch up with my messages at least. Rather than put out an anemic release on the 21st, I'll work that day instead, let it all roll over, and be back to normal schedule on Saturday the 24th, with a proper full-changelog v500 out on the 28th. Thanks again!
>>18252 Nice to hear that you were good with your pops. Take all the time you need, lots of things to take care of when a family member passes. All the best to you and yours.
>>18252 No rush OP. Take your time.
>>18193 >>18194 >>18195 I also implemented a hacked nut counter feature; I added a separate 1-10 rating called "finish", increment it by 1 every time, and have a page that has a filter of: >system:rating for finish > 0/10 Though I sort by creator-series-title-etc in order to properly sort comic pages, I wonder if there's a way to sort by that and sort by nut rating at the same time.

>>18079 This is EMBARRASSING but I cannot for the life of me figure out how to run Hydrus from a batch file (or shortcut) with a specific environment variable (99% of google results tell you how to change the env variable in Qt Creator). I could just go into Windows settings and add the environment variable right there, and it works just fine, but it would be nice for me to know how to set it app-wide instead. btw, you were right, the correct env var is: >QT_ENABLE_HIGHDPI_SCALING=0 I mention this because I also am having a lot of UI >100% scaling issues as well, but mostly in that the main window doesn't look right at 125% (media viewer looks fine!). Well, actually, I say "issues" but it's more like "it works correctly, I just don't like how it looks." At 125% the main window/media browser only shows about seven chunky thumbnails instead of the normal nine (since the thumbnails are bigger, it shows fewer at a time), and the rest of the GUI is big and chunky (funny enough, while I cannot read Chrome pages at 100%, which is why I bumped up to 125%, Hydrus at 100% looks completely fine on my monitor). Anyway, I was able to get it back to how it looked in Qt5 with the env variable, and the media viewer still looks okay with the HIGHDPI_SCALING var set to 0, so I guess that this all works for me. I wonder if it would be worth adding to a help page or manual for future users. ps: hydrusdev your program has really improved my life, I'm so appreciative of your work.
>>18255 >the main window/media browser only shows about seven chunky thumbnails instead of the normal nine Sorry, should've specified I meant that it shows seven thumbnails per row at 125% with HIGHDPI_SCALING turned on, vs 9 per row with it turned off.
All the wheels are now spinning and I feel ok about the coming week. I'm less upset than I thought I would be. It just sucks, a lot. I've also got free time and energy now, so I'd like to be productive so I can feel better on that end too. I'll try to catch up with the whole thread, and since it is now bumplocked, I'll start the process to get it archived at >>>/hydrus/ and make a new one.

I figure the hydrus user-base runs on the younger side, so most of you haven't lost a parent yet. The actual process of it has not been nearly as awful as I feared. I'd recommend you talk to your parents now, while they are alive, about what sort of funeral they eventually want, and make sure everyone in the family is on the same page. You are going to be picking a casket, choosing music and flowers, making decisions on embalming and cremation and tissue donation, whether anyone is going to speak, if there will be a service and what sort--all the nitty-gritty 'someone has to decide, and now it is your turn' stuff. You can even plan your own, to relieve your children of the problem, which we are probably going to do for my Mum once things have settled down. Our family also sorted out wills a decade or so ago, and I can't recommend it highly enough. Spend a few hundred bucks now to save yourself a nightmare later. Anyway, if you are terrified of your parents dying, let me tell you it doesn't have to be the worst thing--it could be one of the most common experiences our ancestors had--but you have to be pragmatic and maybe have some difficult conversations now.

>>18185 I fit this into v499, I expect you already saw if you do this often enough. I went with 'clearing mass-pasted junk'.

>>18186 The downloader forces some minimum and maximum time deltas in order to keep things working. If I remember correctly, for galleries' file search, it won't wait more than 30 seconds or so between page turns, because if it obeyed normal bandwidth rules it could be four hours between successive fetches, and if a lot of uploading happened in that four hours, the downloader would lose its place and stop, since it would see stuff it fetched in a previous page. For standard booru page fetches, it also forces the raw file download a few seconds after the html/json page fetch because some sites have 'tokenised' file download links that time out after a minute or two. That said, check the options under options->downloading. Under the 'gallery downloader' and 'subscriptions' headings, you can set some global force-waits. Maybe if you bump those up to two minutes or similar, things will still work ok, but you'll get your slower access? Let me know if that works for you. I haven't tested such slow access, so there may still be a hardcoded 30 second rule coming in or similar. If you click the little 'cog' icon when the job is waiting on bandwidth, it should tell you exactly what domains it is waiting on, which may help investigating what is and isn't holding things up. If we can pin down what doesn't work for you, I can expose these hardcoded waits in more options etc... and see if we can get you working. I also expect, one day, to export almost all the 'connection' and 'downloading' options to a domain manager that will track separate options for different domains, much like how the bandwidth rules work, so one day you'll be able to set a long delay for sank but a shorter default for all other sites.
>>18189 'Not related' is a positive, permanent relationship, not a dismissal, and if I remember right, the way it actually applies in the database is fairly inefficient. I think it may require a connection for every pair combination in the group, so for a group of ten files it makes 10!/(2!*8!) = 45 new database records (a quick check of that math follows below). Since it is advanced and awkward, I don't think I expose it much to the normal user.

In general, the 'not related' relationship's main use is to say that a potential duplicate, as you see in the duplicate filter, was a false positive. False positives tend to be fairly rare. Are you finding you are getting a lot of them? Can you say what search distance you have searched? Could these be being found at, say, hamming distance of 10 or more?

If you would like to blat a command that is more of a dismissal, like 'of these 10 files, if there are any potential duplicate relationships, then set those as false positive', then perhaps I should add that. Maybe that more human idea should be the general default of how I present this idea to the user, and I'll handle the bullshit technical efficiency stuff in the background.
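As a quick check of that pair-combination figure (one record per unordered pair in the group, i.e. n choose 2):

```python
# The 'not related' record-count math from above: n!/(2!*(n-2)!) pairs.
from math import comb

for n in (2, 5, 10, 20):
    print(n, 'files ->', comb(n, 2), 'records')
# 10 files -> 45 records, 20 files -> 190: growth is quadratic
```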
>>18190 >>18191 >>18192 I am going to figure out a nice technical solution to this in the future. The idea of going back to check for new metadata has long been a request, and I've been hesitant to do a quick bad job because, as said, I don't want to hammer sites or let someone who doesn't know what they are doing redownload 2 million html pages every month just to capture 5 new tags. As well as the rudeness of bandwidth, it also simply takes a whack of CPU to parse a document, and it adds up, so I don't want to add an equivalent CPU load to your existing downloader pages on top of everything. However, with modified date parsing done and note parsing now coming in, almost all of us long-time users want to do at least one slender re-fetch of pretty much everything.

I am thinking of doing something like the file maintenance queue under database->file maintenance. You'll be able to set up any basic refetch rules and set which files to do (e.g. 'for every file I have a known url for, hit that url and update metadata' or 'for all my HF files imported before 2018, do it'), and then it will trickle that work out over weeks and months. The scale of some of these things is pretty crazy though--one file every ten seconds is 8,600 files a day, only 260k per month, when running 24/7. I've probably got 2.5 million downloaded files in my client, so that's most of a year to do one sync at that sort of decent pace. As always, I think we'll be wise to think of this work as a marathon, not a sprint, and recognise that we often can't keep up with our inboxes anyway, so the actual speed here isn't the largest issue. I sure don't process 8,600 files a day from my inbox, so I'm in less of a hurry.

Just to address >>18190, hydrus doesn't remember which tag came from which site, nor when tags were added to your client. Also yes, if this is important to you, I'm pretty sure you can wangle something with the Client API: search your files, pull their known URLs, then trickle-queue those URLs into a special downloader page set to force a page fetch even if the file is already in db (a sketch of this idea follows below). The force-fetch option is in 'tag import options' in help->advanced mode, if you want to play around with it, but be careful--it is powerful and dangerous, so don't set it as default.

>>18193 Thanks, I remember talking to you about this before. I have limited support for this now, but not ideal support. Under file->shortcuts and the 'media actions' set, you can set a 'numerical rating increment/decrement command' for a shortcut, which would let you fill up a fixed number of stars/bubbles. Twenty stars in a row would probably look pretty stupid though, and it would obviously be capped at twenty. It might last you for the meantime though, and would be something you could copy across to a new system when it was ready. I like this idea a lot, and I am in a better position to work on new rating styles now. I will write it down and put it in my medium jobs queue.
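On the Client API suggestion in the post above, a bare-bones sketch might look like the following. The endpoint names come from the Client API help; the access key, search tag, page name, and ten-second pacing are placeholder assumptions to tune yourself:

```python
# Hedged sketch: search files, pull their known urls, and trickle them back
# through a downloader page via the Client API. Key/tag/page/pacing are
# placeholders; endpoints per the Client API documentation.
import json
import time

import requests

API = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE'}

# find the files you care about
file_ids = requests.get(
    API + '/get_files/search_files',
    params={'tags': json.dumps(['creator:example'])},
    headers=HEADERS).json()['file_ids']

for file_id in file_ids:
    # pull each file's known urls
    metadata = requests.get(
        API + '/get_files/file_metadata',
        params={'file_ids': json.dumps([file_id])},
        headers=HEADERS).json()['metadata'][0]
    for url in metadata.get('known_urls', []):
        # queue the url into a downloader page set to force page fetches
        requests.post(API + '/add_urls/add_url', headers=HEADERS,
                      json={'url': url, 'destination_page_name': 'refetch'})
        time.sleep(10)  # roughly the one-file-per-ten-seconds pace from above
```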
>>18196 Thanks, I saw this just before 499 and I think I fixed it in that. Let me know if you still have any trouble!

>>18200 Try double colon. It'll swallow the first and consider you are typing a tag with an empty namespace. "::)" for a smiley face--I think some combination like that should work. I just tested the paste button in 'manage tags', and that'll accept ':)' and convert it to '::)' for you too, preserving what you actually pasted.

>>18202 >>18204 >>18213 >>18243 >>18251 Thank you for these updates. I am increasingly confident that there is a dll problem in the build. It is some part or combination of Qt, PyInstaller, and/or mpv. When I did my normal custom build of 499, I couldn't load the updated mpv for ten seconds without crashing, but the github build was fine. Most people seem to have no trouble with it either, but then some like you are getting similarly quick crashes, yet running from source is also fine. This suggests it isn't my code causing the crash, at least not directly, but some borked dll call somewhere. I don't have any excellent ideas, but since you have figured out how to run from source, please keep doing so for the time being. I always have to suggest that to Linux users, who have even worse build compatibility problems.

Unfortunately, because of the nature of python, getting crash debugging information is tricky. At my level, I never know about the crash and can't catch it or report it to the log--the process exits immediately. It looks like you were able to recover some useful crashdump info in >>18204 . Almost all the workers are waiting there, but as you noticed, it looks like mpv was working on events. It is a shame that 499 gives you trouble too, since that is the new mpv-2.dll, and I had hoped it would be more stable. I'll keep fighting on this. I have to do some mickey mouse shit in order to get mpv to work at all and not crash immediately, so I will explore around there. Maybe I am allowing mpv's event processing to touch UI elements when it shouldn't. Please keep me updated, your work here has been helpful.

>>18202 If you would like to try running from source, it takes a little bit of work, but the help is here: https://hydrusnetwork.github.io/hydrus/running_from_source.html . I'd be happy to help, too. A simpler thing to try in the meantime might be swapping out the 'mpv-2.dll' in your install dir for the 'mpv-1.dll' from a recent older version (v497 etc...). You probably have both in your install folder now, so try renaming mpv-2.dll to mpv-2.dll.old. Then boot the client and check help->about. It should say an older mpv api version. Fingers crossed, that crashes less for you. Otherwise, please try v500 and let me know how you get on. I want to get this all sorted if I can.

>>18209 Check the booru/site itself. Boorus tend to have some metatags you can insert into the query to do some neat stuff. Like this: https://danbooru.donmai.us/wiki_pages/help%3Acheatsheet If you ctrl+f 'order' in that, there are several options. If you added 'order:id' to your search, I think that'll do oldest to newest. Most boorus run on similar or forked engines, so parts of that cheat sheet will work in many places.
>>18211 >>18212 I'm afraid I am not expert enough about setting up sites and forwards these days to talk too cleverly. I'll say the nicest way to test is just to load the server address into your web browser. As an example, check here: https://ptr.hydrus.network:45871/ That is the PTR. It has an expired cert, but if you let your browser go through anyway, you'll get the landing page with an ASCII lady.

My guess is that because hydrus's https uses a self-signed certificate, that's what's causing the trouble here. I think the http/80 result is giving you an error because your client's network engine is probably trying to access it using cleartext but it is failing. The server is not clever enough to upgrade http to https--at least, I have no tech for that--so that's probably the nature of the weird error there. The hang I am not sure about, but if you try the same basic query, https://hydrus.mydomain.com:443, in your web browser, see what happens. You might get a proxy error page or something. It might also be that 443 is a protected port for the proxy service, with special hardcoded rules? Not sure. Maybe a web server port has some special security rules for ssl that I don't or can't obey up in duct-taped 45870. I specifically tell hydrus not to verify https traffic across the hydrus network itself (because of the self-signed certs), so the hang is even more confusing. Yeah, my guess is the proxy service is unhappy/confused about ssl. I'd recommend taking the http map off, since that'll never work afaik. I know some guys on discord have a lot of experience setting this stuff up, for the PTR and other projects, so if you can stomach that, you might find more luck looking for help there. I know some guys had to set up some kind of flag for 'don't sperg out that the https certificate is self-signed'. Also, if you know how, in your web browser, if and when you get the ASCII lady to show up, check the http response headers (there's a scripted version of this test below). I know some proxy services will overwrite the headers here, although I guess those would be ones you had to share the ssl certificate with previously.

>>18214 Just a side thing: use 'code' with square brackets and an html tag style slash to close the block, as at the bottom of here: https://8chan.moe/.static/pages/posting.html

>>18215 Thank you for this report. I am sorry for the trouble. It looks like the new mpv is causing a variety of issues. If it is convenient, could you try extracting the Qt6 version to your desktop? Just make a fresh install of Qt6 499 and try importing a video to it. Does it play? If it does, it could be that Qt5 doesn't work well with the new mpv. If it doesn't, there's a deeper mpv problem going on, and I'll ask you to hit help->about, which should make a popup with more detailed information about why mpv could not load. If nothing works, you may be able to update to 499 but delete the new mpv-2.dll that comes with it. The old mpv-1.dll that is probably still in your install folder should be fine, but if it is not, that is also useful information to know.
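For anyone who wants to script that browser test, here is a minimal sketch (hedged: hydrus.mydomain.com is the anon's placeholder domain, and certificate verification is turned off to tolerate the self-signed cert):

```python
# Hedged sketch of the 'load it in a browser' test: ignore the self-signed
# certificate, then inspect status and headers to see what the proxy did.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

r = requests.get('https://hydrus.mydomain.com/', verify=False, timeout=10)
print(r.status_code)
for name, value in r.headers.items():
    print(name + ':', value)  # did the proxy rewrite the server's headers?
```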
>>18216 I haven't done the big change on the Linux side just yet, so this may be placebo, but I've also done some other video/mpv work recently that may have reduced some lag for you. As I asked in the v499 changelog, some Linux users tested out the new python-mpv wrapper, and they said it works well, so I will be updating it in the Linux build for v500. I think you'll still see 1.109 for now, but when libmpv1 either in my build or your system (through apt-get) gets updated, the new python-mpv should handle it well.

>>18217 Damn, thank you for this report. I am sorry for the trouble and will check this out. I did work on that loop-checking logic recently--I thought to improve things and help bunch stuff together for the janitors--so perhaps there is a new bug in there.

>>18218 You know, I was reading this recently: https://support.nordvpn.com/General-info/Features/1845333902/What-is-Meshnet.htm , which as I understand it is basically user-friendly hamachi. If the other vpn providers soon offer the same tech, and consumers get access to easy virtual local networks, some of our hydrus-specific pains in the ass may evaporate. All the shit about setting up proxy forwards and self-signed ssl certificates and things disappears if the secure network side of things is handled by a vpn provider and your phone thinks it is talking to 192.168.x.x. That said, if you are already on the same network, your best bet is just to boot up the Client API and try out Hydrus Web. If you already have 192.168.x.x:45869 visible and just want to sit on the couch with your tablet and browse things, this is possible right now without too much strife: https://hydrusnetwork.github.io/hydrus/client_api.html

>>18219 Probably early next year. I'm going to start with the very easy case of jpeg+png exact pixel dupes (keep the jpeg), just to get a skeleton of an auto-dupe-filter started, and then flesh it out with cleverer rules from there (a sketch of that first rule follows below). Everything will be completely optional, default off. My dream is that in five years we have all sorts of tech making weighted decisions about dupes, and 95% of stuff is handled without you ever seeing it, including immediately on import.

>>18220 Maybe. We've played with this tech before. A legacy system called 'file lookup scripts' allows you to do this on a per-file basis in manage tags, but I think you have to turn it on under options->tag suggestions. If I do it, it would hook into the slow-burn system I describe here >>18258 since the same bandwidth and workflow concerns apply. That said, I've dragged my feet on this general question a bit, since there basically isn't a 'nice' technical way to find tags for files, which is exactly why I wrote the PTR. The PTR has proved so successful it is now too big for many users, which is its own issue, but in general if you want to share file tags, most sites just aren't optimised for it, and you are downloading kilobytes of json or hundreds of kilobytes of html for twenty-five words. Whatever answer we figure out for an automatic system is going to be a little ugly and slow, no matter what. But yeah, as >>18222 says, if you are willing to put the work in, the Client API lets you wangle whatever you like. I don't know their explicit rules, but if you have a premium account for sank I expect they let you do as many md5 lookups as you want as fast as you want. I know that most explicit lookup sites like iqdb or saucenao put a limit, something like 100 lookups a day, which works for humans but won't work for an automatic system.
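For a flavour of how simple that first auto-dupe rule could be, here is a hedged sketch (invented names and shapes, not the eventual implementation):

```python
# Hedged sketch of the first planned auto-dupe rule: for an exact
# pixel-duplicate jpeg/png pair, keep the jpeg and delete the png.
def resolve_pixel_dupe_pair(a, b):
    # a, b: (file_id, mime) tuples already known to be pixel-for-pixel equal
    mimes = {a[1], b[1]}
    if mimes == {'image/jpeg', 'image/png'}:
        keep = a if a[1] == 'image/jpeg' else b
        delete = b if keep is a else a
        return keep, delete
    return None  # anything cleverer waits for the weighted rules to come

print(resolve_pixel_dupe_pair((123, 'image/jpeg'), (456, 'image/png')))
# -> ((123, 'image/jpeg'), (456, 'image/png'))
```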
>>18221 >>18222 I am very sorry, I think you have overwritten your database, yes. You still have your actual media, the jpegs and so on, but your archive record, known urls, the session and subscriptions--all the stuff you see in the UI--has probably been overwritten. You probably already have, but I recommend you have a very good think about whether there is any chance an old copy of your database might sit in your recycle bin or on an old usb stick somewhere. If there is one, we can probably stitch a rollback together. If not, you may be looking at starting a new client and then importing your old install_dir/db/client_files structure. You would be looking at the 256 'fxx' subdirectories in that folder.

For the backup situation, I am sorry. I know how it feels to lose your stuff. Although it is a shame, it is usually emotional kicks like this that ultimately give you the motivation to figure out a good backup solution in future. You don't want to lose shit like this again, so you put the time and money in. You don't have to do it today, but if you have a planner, make sure in a week or two that you come back and plan a proper backup routine. Not just for hydrus, but your documents and writing, any art or programming you do, anything digital, so if your machine blows up or your place burns down you still have a copy on a USB stick in your backpack, whatever works for you. My backup help is here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up And a special message for you is here: https://hydrusnetwork.github.io/hydrus/after_disaster.html I'm happy to help with recovery or setting up your future backup, just let me know.

>>18225 Thank you for this report. I will investigate this. Unfortunately, crashes (as in the program halts and exits immediately) will not record any information in the log, but if the program simply halts and does not respond, there may be something in the log as >>18236 says. As you are a Win 7 user and Qt6 will not work for you, you may be looking at the option of running from source soon. The users in this chain >>18213 found that running from source reduced their crashes significantly, so if your problem is similar, this may be something to think about.

>>18234 >>18235 Thank you for this report. I am sorry for the trouble. It looks like there are several odd problems here. It is useful to know that 498 improves things, so I may have, while attempting to reduce crashes in my v499 work, actually made things worse for some users. Can you try something for me? Try extracting a fresh version of v499 to your desktop, boot it up, and only import some jpegs. Is that stable? If you then import some videos and load them up, does it suddenly become unstable? The other big change in v499 is a new version of mpv, and several users are having crashing problems with that. A crash can happen minutes after seeing something in mpv, so I would like to figure out whether your problems were actually time-bombs planted by casual mpv loads earlier on.

>>18241 >>18248 In doing the janitor workflow improvements to cap out this year, I'm hoping to attach a tag filter to the PTR. The janitors will be able to say 'no filename: tags' and similar. As I deploy that tech, I'll be figuring out efficient database-level scans of tag filter rules, which means I'll hopefully be able to deploy the same thing clientside and let users say "hey, I want to process the PTR, but only get me 'series', 'creator', and 'character' tags".
This may be a nice way to get lean versions of a PTR process. We did some pie charts a long time ago, and the PTR is roughly 33% each of:
- series/creator/character
- unnamespaced tags
- other namespaces
So if you only wanted S/C/C, you'd be 'just' a 600 million mapping process and a 20GB db space investment. At least ideally--I don't know how the db tech will work yet, but I'd like the ability to skip the shit you don't want.

>>18255 >>18256 No problem. I don't know how the hell .command files do it, but if you are working old school .bat like I do, go:

set QT_API=pyqt5
client

I had to figure that out for the QT_API stuff when I was testing Qt6 and needed a way to select different libraries to load. You have to do it with a batch; as far as I could find, a shortcut can't do it in the one step. So for you, I think you are going:

set QT_ENABLE_HIGHDPI_SCALING=0
client

Yeah, I think this should be added to the help. I'm happy that I'm obeying UI scaling rules 'better' now, but I personally hate how all applications look at >100%, so I have to run all my monitors at 100% anyway. Since you mentioned Chrome: when I was a dumbass nuking my eyes on a 17" 4k laptop screen at 100%, I used an add-on on my browser to force some weirdass zoom rules, like text zoomed at 150% but images still at 100%. Maybe a similar thing is still available these days?

As a side thing, I'm hoping in the future to have prettier thumbs at >100% zoom. There are technical reasons I have to do some bilinear scaling for now, so they get a bit gritty, but I'll work on the code over time and divorce image size from virtual display size, and that should let me generate actual pixel-for-pixel sizes that look good.
New General here: >>>/t/9763 This one should be migrated to /hydrus/ soon. Thanks everyone!

