/t/ - Technology

Discussion of Technology


(4.11 KB 300x100 simplebanner.png)

Hydrus Network General #10 Anonymous Board volunteer 07/24/2024 (Wed) 20:55:28 No. 15721
This is a thread for releases, bug reports, and other discussion for the hydrus network software.

The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Users can choose to download and share tags through a Public Tag Repository that now has more than 2 billion tag mappings, and advanced users may set up their own repositories just for themselves and friends. Everything is free and privacy is the first concern.

Releases are available for Windows, Linux, and macOS, and it is now easy to run the program straight from source.

I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST.

Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ .

Hydrus is a powerful and complicated program, and it is not for everyone. If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/

Previous thread >>>/hydrus/21127
Edited last time by hydrus_dev on 08/27/2024 (Tue) 02:53:42.
(1.15 MB 680x579 buri23.gif)

You do amazing work Hydev. Thanks to you I can find any file I need to post in a split second.
https://www.youtube.com/watch?v=L5dcODmclFU

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v584/Hydrus.Network.584.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v584/Hydrus.Network.584.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v584/Hydrus.Network.584.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v584/Hydrus.Network.584.-.Linux.-.Executable.tar.zst

I had an ok week working on some small jobs and new Client API commands.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

I fixed a bug that was allowing wasteful file re-downloads from Pixiv and Twitter. I accidentally left a hole in the recent changes to the URL 'neighbour-testing' logic, where it tries to determine if an 'already in db/previously deleted' URL determination is trustworthy, and sites where posts can have multiple files were not able to return 'already in db' or 'previously deleted' until the file itself was redownloaded. I have filled the hole in--thank you for the reports, sorry for the trouble, and let me know if you notice anything else weird going on.

This is a weird thing, but in the same way that if you double-click a tag in the normal search page, it adds that tag to the search, if you ctrl+double-click a tag, you enter '-tag'. It was difficult to pull this off due to ctrl+single-click doing a deselect (usually it was best done with ctrl+enter on keyboard), but I've smoothed out the click selection logic and it does, a bit more, what you think it should. Give it a go!

If you ever use 'regex' 'system:url' predicates, they may run significantly faster this week if you mix them with other search predicates. Regex URL preds now run absolutely last in the master file search, so they benefit if tags or other system predicates have already reduced the search space. Regex URL search is pretty much the worst-performing part of all hydrus file search, and there isn't much I can do about it, but I did a little research this week and I have a couple of ideas for the future--it might even allow for sometimes-fast regex tag search, but we'll see.

client api

Everything in the hydrus 'pending' menu (e.g. for sending mappings to the PTR) is now available on the Client API, with a new 'Commit Pending' permission needed to do it. A new command in the file relationships set also lets you remove all the potential pairs for files.

next week

I will make the tag siblings/parents dialogs load instantly. This job keeps on getting put off, but I am determined to finally clear it, even if it takes a couple weeks.
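The 'run the expensive predicate last' idea can be sketched in a few lines (all names here are invented for illustration; hydrus's real search lives in ClientDBFilesSearch.py and is far more involved): cheap, selective predicates shrink the candidate set first, so the slow regex only ever scans the survivors.

```python
import re

def search(files, tag_preds, url_regex=None):
    # files: {file_id: {'tags': set of str, 'urls': set of str}}
    domain = set(files)
    # Cheap set-membership predicates first: each tag shrinks the domain.
    for tag in sorted(tag_preds):  # real code would order by estimated count
        domain = {f for f in domain if tag in files[f]['tags']}
    # Expensive regex last: it only scans the already-reduced domain.
    if url_regex is not None:
        pat = re.compile(url_regex)
        domain = {f for f in domain if any(pat.search(u) for u in files[f]['urls'])}
    return domain

files = {
    1: {'tags': {'samus aran'}, 'urls': {'https://example.com/post/1'}},
    2: {'tags': {'samus aran', 'blue'}, 'urls': {'https://example.com/post/2'}},
}
print(search(files, {'blue'}, r'/post/\d+'))  # only file 2 survives both tests
```

With a low-count tag in the mix, the regex runs over a handful of rows instead of the whole file domain, which is exactly why mixing predicates speeds these searches up.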
Given the recent e621 bullshit, and the potential for more stuff like it to happen there and elsewhere, one thing I'd like to be able to do is select an image in hydrus and somehow feed it a URL to tell it "Hey, even if you can't find this image here (because it was deleted or w/e), the tags and other data from this url belong to this image"
How do I set import options to only exclude deleted files from a specific file service? So I can have my SFW and NSFW importers separated but using the same database instead of two.
(96.21 KB 600x337 it's fact.jpg)

>>15723 >Thanks to you I can find any file I need to post in a split second Yup. Shitposting was never easier than now.
>>15724 >so they benefit if tags or other system predicates have already reduced the search space Does Hydrus do the same with tag searches?
>>15725 Can't you already do that with the url manager? >>15726 I think you can't do that.
>>15730 >Can't you already do that with the url manager? Not that I can tell. I'm pretty sure that just adds a URL to the list of URLs associated with a file, it doesn't pull tags, etc. from urls put into it.
>>15726 Are you using the 'import folders' feature? So you could add a SFW folder and a NSFW import folder and edit the 'import options' for each. this is a button in the center of the 'edit import folder' window when you create one or when you edit it, which might be hard to see. once clicked, you press the button in the new little window that pops up and change it to 'set custom file import options only for this importer'. there you can change the import destinations = local file service to which you like and also deactivate or activate the 'exclude previously deleted files' checkbox. with two import folders that would work for you?
>>15731 Oh, that's what you meant. Yeah that would be useful.
>>15724 >This is a weird thing, but in the same way that if you double-click a tag in the normal search page, it adds that tag to the search, if you ctrl+double-click a tag, you enter '-tag'. It was difficult to pull this off due to ctrl+single-click doing a deselect (usually it was best done with ctrl+enter on keyboard), but I've smoothed out the click selection logic and it does, a bit more, what you think it should. Give it a go!
Thank you very much! I noticed one thing that might not pop up in normal usage and only occurred to me because i was testing this stuff: If the search pane has so many tags that the box is full and you have selected all tags, you kinda can't unselect any in a nice way. If you have only a few, you can click into the blank space underneath to unselect, but if that space isn't there, the only way i know right now is to ctrl+left click one tag to unselect it and then left-click the same again to select it and unselect all others. Would it be possible to allow the Esc-key to unselect all tags, just as it is possible to unselect selected files by pressing Esc? Alternatively an optional 'windows click behaviour' checkbox, which we spoke about some days ago, would be a way to solve this too ->
> If you want a naked click to deselect what you didn't click, like Windows File Explorer does, I can write logic for that.
Not sure what the plans for the media viewer's animation scanbar are, but i'd like to have some things that are missing at the moment:
1. Changing colors of the animation scanbar, or even better, making it optionally transparent with only a thin border, like Windows Media Player 12 has for example. Right now it is so bright that i can barely see the frames/times. Only when i pause does it become grey so i can see it well. I'd rather have it a darker color i think. Transparent might be cool, and probably i wouldn't hide it and make it bigger at the same time. But i'd have to see it first to be sure.
2. Would it be possible to make it also optional so that, if you chose to not hide the scanbar, the scanbar would sit under the video and not overlap its lower part, like in the VLC not-fullscreen view? That would mean a video with the same resolution as the monitor would have to be scaled so that black bars appear left and right (coz it would need to leave space underneath for the scanbar depending on the size you have chosen in the media viewer options), but that would make the frames/times always visible while the scanbar wouldn't overlap the video.
3. Maybe that's only a me-problem, but i can't count how often i wanted to skip the time on the scanbar/timeline but dragged the video away xP. Maybe a checkbox for deactivating dragging for all animations/videos that have a scanbar? See it as an accessibility option for special people like me lol.
4. How about mouse wheel roll for fast-forward/backward in adjustable seconds (in options), while the mouse is on the scanbar? VLC also does that. Although i already see me skipping files accidentally like an idiot, id like to have it :D
Those things added could change the viewing experience by a bit i think, at least for me, thanks!
Hi! One thing I noticed on v581: It used to be possible to add an artist to several subscriptions at once by highlighting all subscriptions and clicking edit. Now that button is greyed out, so it's a little less convenient to add a new subscription across different sites.
>>15737 You should be able to change the scanbar color by editing a qss theme in the \static\qss folder. You can make a copy of the theme you use and edit that, then select it in hydrus, so that it won't get overwritten on update.
>>15739 Thanks, i might try to play with that. Is transparency possible? 'Background' color is the main color of the bar only while playing but not while paused i assume?
>>15740 No idea. Maybe try if rgba() works, but I haven't tried. https://www.w3schools.com/cssref/func_rgba.php Also no idea if you can change paused color, but that one doesn't seem to matter much as when you move your mouse away for the bar to hide, it will then use the other color again.
>>15741 >Also no idea if you can change paused color, but that one doesn't seem to matter much as when you move your mouse away for the bar to hide, it will then use the other color again. That seems to be a bug only when you have the checkbox for 'no, hide it' activated. i noticed that too. but as long as the bar is only shrunk and not hidden when moving the mouse away, it will stay gray. Thanks for the rgba idea.
Not sure if this is intentional or not, but clicking apply from the filename tagging window causes the import to commence, closing itself and the file import window, instead of saving the settings and closing the window, like most other windows in hydrus.
What happens to files freshly removed from hydrus' trashbin when the client is closed quickly (while the files are being moved batch by batch into windows' trashbin)? Do they become orphan files? I wanted to delete 200 files, did not wait and just closed the client. In the end there appeared to be 35 files in windows' trashcan, and running the orphan file search queue, it found only 14 files.
>>15744 And the 200 files in hydrus' trashbin are all gone? Interesting.
i need help with a problem. i moved my db to an external ssd, but there's a problem wherein the ssd improperly ejects whenever my pc goes to sleep, which i had just kinda been ignoring since it never really caused a problem (i know, stupid). my db got corrupted, but i had backups so it was all good. i made a backup just before moving it to my external ssd, but in the time between migrating my db to that external ssd and finding it corrupted, it had imported some new files (from an import folder). i set my import folders to delete the originals, so now these files are mixed in somewhere in my hydrus files, but i don't know where. i still have the old corrupted dbs. both mappings and caches are hit, and trying to recover them hasn't been very successful. i tried cloning them, but it doesn't work (my shell just spams Error 11: malformed disk image). is there another way i could locate those files? i want to reimport them. thanks whoever helps
>>15746 https://8chan.moe/t/res/14270.html#15144 > Check 'help my media files are broke.txt' in the db dir for info on how to resync it to your current client_files file storage.
I recently had to re-add Hydrus Companion to my browser. I copied over the API access key and it seems to be working for regular urls, but I'm not getting the cookies right, which I need to download things from Pixiv. Before, after clicking the button to send cookies to Hydrus, it would pop-up a notification confirming they were successfully sent to Hydrus. I can't recall if this notification appeared in the browser or in Hydrus, but I'm not seeing it anymore and trying to get urls from Pixiv that are r-18 fails if I'm not logged in outside of incognito mode, which means it won't work for subscriptions that I need to run in the background while I'm not logged in at all. Is there something I'm forgetting? I recall last time my mistake was attempting to send cookies from an incognito tab, but this time that's not the case.
>>15745 I've tried it again with a new batch of 145 files at ~300MB and closed the client as the files were being removed from the trashcan; 18 made it into the trashbin before the client fully exited. Whilst exiting, the files were moved there very slowly, at one file per second, whereas it would normally fill the trashbin at ~7 files per second. Restarting the client showed an empty in-client trash. 127 files are ... where? I'll check if they remain in the folders still.
>>15750 Nevermind, they have been deleted a while into this new client's session. Darn, I could swear I was up to something.
>>15723 >>15727 Hell yeah!

>>15725 Interesting idea. I have a plan to rework an old system called 'lookup scripts'--which basically fetched a booru's list of tags for suggestions in the manage tags dialog--to instead just hit up known URL pages and grab all the normal metadata the downloader fetches, for retroactive fetching of tags, post times, source URLs, whatever. Your thought here, to do it even on pages where the file differs, may slot into this sort of system.

>>15726 >>15733 Yeah, you can set up an import route that positively imports to a certain place, but you can't do clever filters that filter import destinations based on individual local file service deletion records yet. I'll keep this in mind, but I don't think I'm ready to go this clever yet.

>>15729 Yeah, pretty much all the different search routines take the work of the previous routine and use that as the base instead of the total possible file domain (so it might do 'system:has rating xxx' on the 1,200 files already positively matched, rather than the whole 700,000 file local domain). Tags are fairly early in the process though, since they tend to be very simple and fast. I try to do specific tags before namespace or wildcard tags. The general preference, obviously, is to do the simplest and most specific search procedures first to collapse the search domain down as fast as possible. Adding a specific low-count tag to any search will usually speed it up massively. The code is a mess and I do some unusual bullshit to wangle some id-based searching, but most of the method calls are english, if you are interested in the current order of operations. I haven't rigorously tested this order though, it is mostly just estimation/intuition and profiling/reorganisation when things do become a problem. Some things like OR predicates have multiple opportunities to fire, based on more complicated logic: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/db/ClientDBFilesSearch.py#L1293 starts for real here: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/db/ClientDBFilesSearch.py#L1414 Note that I do pretty much all of the search predicates separately and regularly dip out into python. A more usual database program might try to combine these into a single giga-query, but I only have time to KISS.

>>15735 >Esc-key to unselect all tags Great idea!

>>15737 Thanks, interesting ideas. I'm hamstrung with some of this stuff, usually the bells and whistles like transparency, since the media viewer is already held together with duct tape and string. I could see these things in future, but I have a backlog of very ugly code behind the scenes that I need to fix first. I'm investigating another mpv-embedding technique (a Qt-OpenGL thing) that may shine light on this situation and make it all more stable for everyone or may make things even more complicated and forestall new bells and whistles--we'll see how it works out. I can absolutely write some options to disable dragging on media with a scanbar, great idea. Mouse-scrolling over the scanbar also sounds totally doable.
>>15738 Thanks. I've been reworking my multi-column lists across the program to have better select/sort/scroll-to tech in the past few weeks, and it is going great except that I've decided to mostly move to 'edit one thing at a time' for technical reasons. Can I make your workflow easier by, say, adding a 'paste-to subscriptions' button? I keep meaning to add this for my own use, since I hate having to 'go into' a sub to do 'paste queries'. I could add that so one can paste into a sub just from looking at it, and for your case I could allow that to work on multiple subs at once.

>>15743 Yeah, this is an ancient workflow. Were I making it again, I'd probably have that dialog be something you can go in and out of. It was intentional then, but I don't like it a lot now, so it is a candidate for rework. What would you like to see in a future version of this whole workflow? Showing tags and stuff on the initial drag and drop window, and then when you come 'out' of filename tagging it updates? Combining the whole thing into one dialog?

>>15741 >>15742 The rgba won't work, I think, btw. That colour goes into a custom rendering thing I do (and thus do alpha on the underlying bitmap default, which will be all black or, perhaps, static), rather than setting the native actual colour of the scanbar. Qt doesn't generally 'do' transparency, when we are talking widgets overlapping, although I am no expert. I'll see what is going on with the paused thing not updating correct. I think for the paused colour it just like takes the current colour and reduces the brightness by 20%.

>>15744 >>15745 >>15750 >>15751 >What happens to files freshly removed out of hydrus' trashbin when the client is closed quickly
Yeah no worries these days, I put the files on a durable database 'to delete' table, and they are deleted over time in the background (and a sudden intervening re-import in the next few seconds does cancel that list). If there remains work to do, it continues working the list a few seconds after boot. If you have orphan files, they may be from a borked delete, but my guess is the delete was actually borked by something like a program crash that, let's say, succeeded in saving the file delete but failed to save to that 'to delete' table. There's a few gaps like that still in the program, but generally the normal 'physical delete' is reliable and you can close the client any time and not worry. Let me know if you discover any more problems though!

>>15749 I am not sure, but if it helps, you can check what cookies hydrus thinks you have under network->data->review session cookies. Might be worth clearing everything under there for 'pixiv' in case there is some clash, but I dunno if Hydrus Companion cares about that sort of thing. Maybe there is an excess cookie somewhere messing up the logged in session.
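The durable 'to delete' table pattern hydev describes can be sketched with sqlite (table and function names are invented for illustration, not hydrus's actual schema): record the intent in the database first, delete files in the background, and simply resume whatever is left in the table on the next boot.

```python
import os
import sqlite3
import tempfile

def queue_physical_delete(db, path):
    # Record the intent durably *before* touching the filesystem,
    # so a sudden client exit can never lose track of the file.
    db.execute('INSERT OR IGNORE INTO deferred_physical_delete (path) VALUES (?)', (path,))
    db.commit()

def work_delete_queue(db):
    # Safe to run at any time, including just after boot: it only
    # processes whatever rows the last session left behind.
    for (path,) in db.execute('SELECT path FROM deferred_physical_delete').fetchall():
        if os.path.exists(path):
            os.remove(path)
        db.execute('DELETE FROM deferred_physical_delete WHERE path = ?', (path,))
        db.commit()

db = sqlite3.connect(':memory:')  # the real thing would live on disk
db.execute('CREATE TABLE deferred_physical_delete (path TEXT PRIMARY KEY)')

fd, victim = tempfile.mkstemp()
os.close(fd)
queue_physical_delete(db, victim)
# ...the client could exit here; with an on-disk db the row survives...
work_delete_queue(db)
print(os.path.exists(victim))  # False: the queued file is gone
```

A re-import cancelling a pending delete would just be a `DELETE FROM deferred_physical_delete WHERE path = ?` before the worker reaches that row.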
>>15752 >I can absolutely write some options to disable dragging on media with a scanbar, great idea. Mouse-scrolling over the scanbar also sounds totally doable. Awesome, sounds good! How about color change of the scanbar? Do you suggest also editing the color through the way >>15739 suggested, or do you think you could add changing the color of the bar in hydrus options itself? Isn't it too bright for you guys too so you can't see the frames/times?
>>15749 >>15753 >I am not sure, but if it helps, you can check what cookies hydrus thinks you have under network->data->review session cookies. Might be worth clearing everything under there for 'pixiv' in case there is some clash, but I dunno if Hydrus Companion cares about that sort of thing. Maybe there is an excess cookie somewhere messing up the logged in session. Seems to have done the trick. I had 20 cookies on Pixiv and now I have 15. No idea why the cookie confirmation message no longer shows, but both manual downloads from an incognito page and subscriptions with no currently logged in browser for r-18 content are functioning now.
>>15753 >I'll see what is going on with the paused thing not updating correct. I think for the paused colour it just like takes the current colour and reduces the brightness by 20%. Well, it makes the bar brighter for me, so that's not it.
>>15754 Yeah, edit in the QSS for now. That stuff is still a little prototype--I only just added the stuff in the options->colours page to QSS the other week--but that's the way we are broadly going in future. Make a copy of the stylesheet you like with a new filename in the /static/qss directory and then edit the colours for the scanbar stuff. Then set that stylesheet in options->style. Now we know this way of sending colours from QSS to hydrus proper works well, I think adding a pause colour is a good idea. I don't know when it will happen, but I can expand this to have more colours. I don't know what it uses for text, I guess it uses the default for the current stylesheet, but if you want whiter text or whatever, that might be something else to play with. >>15756 Yeah I think it goes 'if it is dark, add 20%, if it is light, reduce 20%'. It is just some dumb old 'get an alternate colour' method I made a while ago, I use it in the duplicate filter too. We can do better.
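The 'get an alternate colour' heuristic described above ("if it is dark, add 20%, if it is light, reduce 20%") is easy to sketch. This is a guess at the logic, not hydrus's actual code; the lightness test and the blend-towards-white step are assumptions:

```python
def alternate_colour(rgb, shift=0.2):
    """Paused-scanbar variant: lighten dark colours, darken light ones."""
    r, g, b = rgb
    # Perceived lightness via the quick luma approximation, scaled to 0..1.
    luma = (0.299 * r + 0.587 * g + 0.114 * b) / 255
    if luma < 0.5:
        # Dark colour: move each channel 20% of the way towards white.
        return tuple(round(c + (255 - c) * shift) for c in (r, g, b))
    # Light colour: reduce brightness by 20%.
    return tuple(round(c * (1 - shift)) for c in (r, g, b))

print(alternate_colour((0, 0, 0)))        # black gets lighter: (51, 51, 51)
print(alternate_colour((255, 255, 255)))  # white gets darker: (204, 204, 204)
```

This also shows why a very dark QSS colour still produces a visibly different paused colour: it gets pushed towards white rather than towards an even darker shade.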
>>15748 thank you sm. can't believe i didn't check that.
>>15757 >Yeah I think it goes 'if it is dark, add 20%, if it is light, reduce 20%'. It is just some dumb old 'get an alternate colour' method I made a while ago, I use it in the duplicate filter too. We can do better. But thats good. I worried that if i change it in the QSS to something very dark, the 20% darker while paused wouldn't make it look much different. If it will make it brighter then as you say, thats actually good to distinguish. Have a nice Sunday.
(6.68 KB 365x130 cookies.jpg)

>>15749 At least on LibreWolf on Windows, the notification comes thru as a system notification from the browser. If you've done something to hide/disable windows notifications for your browser, that may be why they disappeared. Pic related.
>>15760 That's probably it. I stripped a lot more out of windows this time.
>>15724 > >I will make the tag siblings/parents dialogs load instantly oh nice I was just gonna ask if it was normal for the manage tag sibling dialog to hang for like 10 secs on open every time
(8.49 KB 640x480 Oekaki)

>>15195 >.... you do have backups right? Right??? Surely you must be 16 to post on this forum. >Life pro tip: Always RTFM. Read the fucking manual. don't care didn't ask kys.
(13.57 KB 646x485 2024-07-28_062921.png)

(207.23 KB 1920x1080 anime girl - giggling.png)

>>15764 >oekaki Saved.
>>15763 Hope you lost your files, child.
In a downloader's file log, clicking an entry focuses the cell, so pressing Ctrl+C copies the usually pretty useless contents of that cell instead of the urls of the selected entries.
Just updated to the latest from v558 and i can't find the option to permanently switch back to grouping namespaces on the left side of the media viewer and manage tags. Would appreciate any help.
>>15768 For me, by default, they are already grouped in the media viewer, meaning first come the namespaced tags, then the unnamespaced. But there was never an option to change the sorting of tags in the media viewer itself as far as i know, only in the thumbnail viewer. And if there was (which there wasn't afaik), then why would an update change your settings? You can change the tag display in 'tag' -> 'manage tag display and search...', then choose the local tag service in the top tabs and then press the 'tag filter for single file views' button (single view = media viewer; multiple file views = thumbnail viewer) to whitelist/blacklist tags that you want to display in the media viewer. Is this what you are looking for with 'manage tags permanently'?
>>15769 >But there was never an option to change the sorting of tags in the media viewer itself as far as i know, only in the thumbnail viewer. It's all under options > sort/collect
>>15770 Oh indeed. Guess i have once set it up and never looked at it again. Thanks for the reminder!
>>15770 That did it, thank you very much. Dont know why updating changed those options though
>>15772 I think they weren't split before, so when the media viewer options got added, they were set to default.
>>15766 they're mixed in the Grand Hydrus Disorder, at best.
>>15753 >Yeah, this is an ancient workflow. Were I making it again, I'd probably have that dialog be something you can go in and out of. It was intentional then, but I don't like it a lot now, so it is a candidate for rework. What would you like to see in a future version of this whole workflow? Showing tags and stuff on the initial drag and drop window, and then when you come 'out' of filename tagging it updates? Combining the whole thing into one dialog?
Honestly, with how relatively complex the file import process is if you do anything more than simply import a bunch of untagged files, it may be more prudent to break the file import process up into a multi-step wizard sorta thing. I imagine the process would work something like this: When a user drags and drops files onto hydrus, they would be presented with a dialog similar to the current review files to import window, essentially just a list of files hydrus found to import, but with two check boxes: one for sidecar tagging, and one for filename tagging. If the sidecar tagging box is checked, the user will be presented with the sidecar tagging options in the following step. If the filename tagging box is checked, the user will be presented with the filename tagging options next. Finally, a confirmation dialog of sorts, listing all of the files and the tags that would be added. Could also add some maybe sometimes useful options here too, like publishing imported files to a specific page. Either way though, I would advise against condensing the whole thing to a single dialog, doing so may make it rival the options menu in complexity...
>>15681 https://8chan.moe/t/res/14270.html#15681 Thanks for this. I've found two files on the btrfs with the most errors that are only partly readable, and then Input/output error happens. So it reports errors for smaller parts. There is an option that sounds like it could help recover some files before ditching the filesystem, but it seems dumb. --init-csum-tree create a new checksum tree and recalculate checksums in all files WARNING: Do not blindly use this option to fix checksum mismatch problems.
is it only on my machine or is wayland support kinda fucky? i'm running it on KDE. i need to launch it with QT_QPA_PLATFORM=xcb or i get all sorts of UI bugs (and mpv doesn't work).
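For reference, the workaround mentioned above is just an environment variable on launch; the client path here is an assumption, adjust it to your install:

```shell
# Force Qt onto the xcb (XWayland) backend instead of native Wayland;
# on some KDE/Wayland setups this fixes the UI glitches and mpv embedding.
# './hydrus_client' is an assumed binary name - point it at your own install.
QT_QPA_PLATFORM=xcb ./hydrus_client
```

You can also bake the variable into a .desktop launcher's Exec line with `env QT_QPA_PLATFORM=xcb` so it applies every time.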
(296.57 KB 800x600 WS56.gif)

>>15778 >Wayland + Plasma >mpv doesn't work I'm running on X11 and every time I ran into trouble, the solution was to build Hydrus from source. Not a big deal, give it a shot. https://hydrusnetwork.github.io/hydrus/running_from_source.html#walkthrough
I had a fantastic week figuring out tech to make the 'manage tag siblings' dialog boot and work faster, even if you have hundreds of thousands of pairs. I now have a 98% working prototype that does everything--and a few workflow improvements--without needing to load up all the pairs on every boot. Rather than do a release tomorrow, I will copy this work to the manage parents dialog, polish the whole thing, and catch up on other work besides for next week. v585 should be out on the 7th. Thanks everyone!
>>15780 as someone who adds relationships very often, this has been one of the biggest enduring pain points for me. It'd be amazing if it was no longer an issue!
(3.67 MB 1280x720 Enver Hoxha.webm)

>Rather than do a release tomorrow, I will [...] polish the whole thing The only thing better than a release is no release, because our installs stay safe from a potential break and the next release will be even better.
is there a way to send multiple selections to external viewers?
(98.83 KB 200x267 raughs.png)

>>15778 >wayland >kinda fucky no! I cannot believe it!
(582.74 KB 1000x850 4wnUCDe.png)

>>15782 That's because your genes are expressing a tendency for low impulse control, You see a new release and feel the urge to install it. Perhaps race-mixing was in your family lineage and nobody told you.
>>15780 >>15781 yeah this is a huge change for me, can't wait for this
hello sorry i am retarded. i tried following the instructions but no tags seem to show up even though i followed the settings. i thought hydrus would automatically add the tags when importing.
>>15791 Are you importing manually from your hard drive or from the internet?
>>15792 i thought that you could get the tags from danbooru, etc and then tag them with those
>>15721 Is there an option to change the layout of files page? I want to have preview window on the right, search and tags on the left, and wall of thumbnails in the middle.
Been using hydrus for a minute, but haven't used the gallery DL features. Trying to parse an exhentai URL, but it only seems to support e-hentai? Is there no downloader for exh?
>>15797 You can find one here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts Not sure if it's up to date though, I remember they changed the image download address a bit so I had to modify it.
>>15798 TY for the response. I did import that, but it still doesn't appear to be working. I'll play with it a bit later.
>>15791 I assume you mean import of local files, read this: https://hydrusnetwork.github.io/hydrus/PTR.html#janitors Downloading the PTR will take you probably 2 weeks or so and you need at least 50GB of local space.

>>15796 You can't put stuff on the right, but why not get rid of the preview window altogether, since thumbnails are big enough (plus you can resize them in options) imo, and just have the system predicates and tag selection box on the left.
1) options -> gui pages -> hide preview window (last checkbox) -> activate
2) options -> search -> autocomplete dropdown floats over file search pages (first checkbox) -> deactivate
3) directly under "2)" -> autocomplete list height -> i got 16, but you can play with it depending on your resolution
4) options -> thumbnails -> play with the first options so you don't actually need a preview window anymore, otherwise just get used to mouse-wheel clicking a thumbnail so it gets opened without needing to double click
And if you grab the side of the search pane, you can make it bigger and the thumbnail grid smaller (or vice versa); you can make it so that the last thumbnail in a row just doesn't fit into the grid and therefore a free space with almost the width of a thumbnail will exist, which makes all thumbnails look "kinda" centered. Not exactly what you want i guess, but a bit, minus the preview pane.

>>15797 Does it maybe have to do with cookies and login, since in a browser you need to be logged in to e-hentai before you can see exhentai too? I don't use those functions though, so just a suggestion.
network -> data -> review session cookies
network -> logins -> manage logins
>>15799 Try pasting the following into the content parsers tab of "ex/e-hentai.org post page parser": [30, 7, ["urls to source download (new format)", 7, [27, 7, [[26, 3, [[2, [62, 3, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "href", [84, 1, [26, 3, [[2, [51, 1, [2, "https://e(-|x)hentai.org/fullimg", null, null, "https://exhentai.org/fullimg/1264932/1/1dojppu94mr/001.jpg"]]]]]]]], [7, 50]]]
I happened to notice a ton of the files in my db folders have no file extension anymore. Is there any reason Hydrus is stripping file extensions from some files? I tried looking up the hashes of a couple and Hydrus returned nothing. I can't tell what they are since no file extension means no proper thumbnail in my file browser, and I tried adding a bunch of different file extensions to one of them to no avail. Some files I blanked out so I could post unspoilered.
oh no that's too much i just wanted to import my collection and then easily tag them from tags that already exist like through boorus but it's ok i'll just try and do it manually lol
>>15804
>oh no that's too much
>i'll just try and do it manually lol
Welcome to autism land. You are about to undertake a task that is far more time consuming than just getting the PTR (if your computer can handle it), though it depends on how many files you have. Yesterday marked two years since I started using Hydrus and manually tagging files. I project I'll catch up with my pace of file downloading within the next half year. Whoops, forgot the graph key.
>>15804 It's possible, but the files need to be exactly the same as the ones on the booru or it won't find anything. https://wiki.hydrus.network/books/hydrus-manual/page/file-look-up I also recommend reducing the wait time between gallery fetches to 1 second for this in options > downloading, as the default is like 15, which takes forever. You may want to change it back again if you plan on using multiple normal (ie. not 1 image per gallery like in this case) gallery downloaders at the same time in the future.
>>15806 >>15807 ok thanks for the tips i'll see what i can do yeah i love to collect pictures but it's just a hassle the way i have them organized now hopefully i can get hydrus to work for me
>>15808 If nothing else, you can easily replicate your file structure as tags in Hydrus, losing nothing while still gaining some of the benefits of Hydrus with minimal effort.
>>15803 Lol. I'm pretty sure those are the 'repository update' files you are looking at. Activate advanced mode under 'help', then change your file domain to 'repository updates'. There you can see them. Right click one -> open -> in file browser. Those are not your normal files. I guess you can delete them if you have processed all the repository updates already, but don't quote me on that. Hydev once said something like that afaik.

>>15804 Besides the link from >>15807 , you can also get tags for every new file you grab through the in-client downloaders, which means you have to download them directly into hydrus and not into some windows folder first.
network -> downloaders -> manage default import options...
Here you can set up whether you want to get tags and notes for each website individually, or in general for all of them (upper right 'import options' buttons). You have to activate the 'get tags' checkboxes for the tag domain you want first (after going into a downloader, or via the buttons on the upper right). If you also want to retroactively apply the tags to a file that is already in hydrus when you find its link again, you should also activate the two 'force page fetch even ... already in db' checkboxes. Otherwise it will not get tags/notes for old files.
Then you can activate the url watcher, which automatically sends your links into hydrus after you right-click -> copy them from the browser address bar:
network -> downloaders -> watch clipboard for urls -> activate one or both options there, whichever you need
Or you can drag&drop the URL from the browser's address bar directly into hydrus too.
That way you can have tags for your new files (often faster than the PTR, since there you have to rely on people uploading them first) and also for old files (in case you find them again on one of the websites that can be parsed with the in-client downloaders).
>>15810 >Then change your file domain to 'repository updates'. Here you can see them Forgot to say, also search for system:everything of course.
>>15810 ok thank you i'm trying that right now :3
also is there a way for the tags that are downloaded from pixiv to be the english versions?
(4.12 MB 889x500 bow.gif)

>>15806 >inbox/archived relation R for respect.
>>15810
>I'm pretty sure those are the 'repository update' files which you are looking at.
Yep. Changed the domain to repository update files. There's around 5,700 of them, and they total around 2GB. A bit curious as to what they do and why exactly they need to be mixed in with the db files.
>>15814 I just do it in chunks. I still have several thousand files left in traditional folders, on top of those I'm adding with more frequency due to how Hydrus makes collecting easier.
>>15815 Oh, it seems I forgot to explain better. PTR = Public Tag Repository = most likely your repository files. You can also add other repositories, but you most likely didn't, so those should be the PTR update files. You are connected to the PTR, right?
services -> review services -> remote tab -> tag repositories subtab -> PTR
All the updates you download are the extensionless files you mentioned in the 'repository updates' domain. They have to be downloaded first before they are processed, which ends up as updated tags+siblings+parents. After processing they stay in the folders and the domain. There is a button to use those files on other clients, I guess without the need to be connected to the PTR too, but I'm not 100% sure. They still need to be processed into the database then:
services -> import repository update files... (at the bottom)
>>15817
>services -> review services -> remote tab -> tag repositories subtab -> PTR
And here they can be exported, by the way, with the 'export updates' button in the middle right.
>>15800 Thanks, but making the thumbnails big is not ideal, since they would take up a lot of space, and I have a lot of images. The thumbnails DB alone was ~50 GB. Previously, with FastStone or XnView, I'd look at a small thumbnail, then at a quite big preview if I was interested, and then open it fully if I was further interested. The tag and search interface doesn't need a lot of width, so widening it is almost pointless and only constrains the thumbnail grid, but a separate interface column for the preview would be very handy.
>>15819
>Tag and search interface doesn't need a lot of width, so it's almost pointless to widen it, only for constraining thumbnail grid, but for the preview a separate column in interface would be very handy.
I agree that a right column where you could put the tags box or preview would be cool. That would kinda 'center' the thumbnail grid. Or that column could show other content directly: maybe the notes, or even better, alternates after you click a thumb in the main grid. Right now, if you grab the edge of the search pane, you can make it disappear by dragging it completely to the left. Why not have a right column that you can drag to the left to make it appear (and also add it under the existing 'show/hide sidebar and preview panel' and call it 'show/hide right/alternate sidebar' or so), maybe with some tabs to change what is displayed, like tags, notes, alternates, preview etc. Who knows what the future brings :)
>>15817 >You are connected to the PTR right? No. I attempted to once before I realized I didn't have enough computer. Later I decided I didn't want other people's tags, only my own personal tags. I believe I had taken the steps necessary to delete what portions of the PTR I had downloaded since it was a waste of space. >services -> review services -> remote tab -> tag repositories subtab -> PTR The remote tab is entirely blank. However there is pic related.
>>15821 Having the PTR is good as a reference, I think. Of course 80GB+ is a lot, but it MIGHT be worth it. The PTR has its own tag domain, so you can still have your own tags in the 'my tags' domain and add more domains if you want. There are even ways to mass migrate only certain tags you want from one domain to another, let's say PTR to 'my tags', with the help of black/whitelists, which can save you a lot of work/time. It looks like you can delete those 2.18GB, I would assume, if it's annoying you.
>>15810 >if you also want to retroactively apply the tags to a file that you already have in hydrus, but you found the link for it again, then you also should activate the two 'force page fetch even ... already in db' checkboxes. Otherwise it will not get tags/notes for old files. You don't need to do that if the file doesn't have the source url attached to it yet. I retroactively get tags for "blank" files all the time and it just works.
>>15823 I had to activate those 'force page fetch...' checkboxes, otherwise the tags wouldn't get added for old files which have the same hash as the media of the link you drop into hydrus, even with the 'get tags' checkbox activated. That's why the checkboxes are there in the first place. Are you talking about 'blank' files getting tags because you use the PTR and that's where they get the tags from? Otherwise I can't follow what you mean by 'source url attached to it yet'.
Dunno if this would be useful for anyone else but how viable would it be to add a toggle to have folder imports go in order of date (last modified, created etc)? Use case being a batch of files being downloaded elsewhere outputs in order by date, but the filenames are hashes so the default view on Hydrus has everything mixed up. My current workflow to get around this is just renaming them all in sequence based on their last modified date.
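The renaming workaround described above is easy to script. Here is a minimal sketch (my own, not part of hydrus; the function name and `dry_run` flag are made up for illustration): it prefixes every file in a folder with a zero-padded index in modified-date order, so hydrus's default name sort matches chronological order.

```python
import pathlib

def rename_in_mtime_order(folder, dry_run=True):
    """Prefix each file with a zero-padded index in modified-date order,
    so a name-sorted import matches chronological order.
    Returns the (source, target) plan; pass dry_run=False to apply it."""
    files = sorted(
        (p for p in pathlib.Path(folder).iterdir() if p.is_file()),
        key=lambda p: p.stat().st_mtime,
    )
    width = max(3, len(str(len(files))))  # enough digits for the file count
    plan = [(p, p.with_name(f"{i:0{width}d}_{p.name}")) for i, p in enumerate(files)]
    if not dry_run:
        for src, dst in plan:
            src.rename(dst)
    return plan
```

Running it with `dry_run=True` first lets you inspect the plan before anything is renamed.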
>>15824 I don't use the PTR. By blank files I mean fresh files that have no tags, urls etc. yet, or have a little. What I usually do is that I download an image from something like twitter, which only gets the creator tag and twitter post + file urls, then I copy the md5 hash of that file and paste it into a booru gallery downloader, which will grab the tags and urls of that booru without even having those 'force page fetch...' options checked. Works even if you find the image on the booru yourself and paste the booru url into a normal url downloader manually. Sometimes I even download the raw image straight from the booru itself (because of a script I use in my browser that makes it easier to get file links than post links) and then retroactively pull the tags using the same method. I only have to check the options if the file already has a post url of that booru attached to it.
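For anyone wanting to script the 'copy the md5 hash' step of that workflow: boorus with danbooru-style search accept an `md5:<hex>` metatag, so you can compute the hash locally and paste the result into a gallery downloader. A minimal sketch; the helper name is made up:

```python
import hashlib

def md5_search_query(path, chunk_size=1 << 20):
    """Return an 'md5:<hex>' metatag you can paste into a booru
    gallery downloader to look the file up (danbooru-style syntax).
    Reads the file in chunks so large webms don't eat all your RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return f"md5:{h.hexdigest()}"
```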
>>15825 Maybe it would be good to show a 'modified date' column, yes. Right now Hydrus only lets you sort the import sequence by name, filetype and size. It seems it doesn't parse the creation and modified dates when showing you the files in the pre-import list. I think Hydrus only parses the Windows modify dates after you press the 'import' button (but not the creation times, since the files get copied into the database folders, so those would be the same as the import times).
But I have good news nevertheless (if that's what you want): the pre-import window shows a cog symbol button near the 'close' button at the bottom right. If you click it and deactivate the 'sort paths as they are added' checkmark, the files get added exactly in the order they are in Windows. That means you can pre-sort the files in Windows by 'creation date/modified date', select all files within the folder and drag&drop them into hydrus. But:
- you have to grab the very first file when you drag&drop them into hydrus, not any other file, otherwise the sorting comes out wrong
- you cannot sort the files in the folder, then go up a level and drop the whole folder into hydrus; that won't work. You have to drag&drop all the selected files themselves, NOT the folder icon where the files reside

>>15826 Ok, that's interesting. I only said it because I tested it with a danbooru file:
1) I downloaded it without the checkboxes activated -> no tags
2) Then I checked the 'get tags' checkbox (within the danbooru downloader settings itself, not globally, which you can also do) and put the link in hydrus again -> no tags
3) Then I checked the 'force page fetch...' checkboxes and put the link in hydrus again -> tags appeared
So I don't know where our differences are. Can you confirm that you haven't activated the fetch checkboxes globally too? See network -> downloaders -> manage default import options -> the upper two buttons that say 'import options', so not the individual downloader settings themselves. If you haven't activated them there, I really don't know why that is :)
>>15827 >Can you confirm that you haven't activated the fetch checkboxes globally too? Yeah, defaults are unchecked and individual downloader tabs are set to default. I even tested this in a fresh install and it still works that way. Like for example I download the raw file first: https://cdn.donmai.us/original/23/70/2370ac8fc8c8f0ca843e1c0940df492d.jpg which just gets the file and nothing else. Then I paste in the post page: https://danbooru.donmai.us/posts/7214688 which only gets the tags without downloading the pic again, because it recognizes it. Are you sure the image you're trying to get tags for doesn't have the danbooru post url already?
>>15828
>Are you sure the image you're trying to get tags for doesn't have the danbooru post url already?
Yes, I think that was it, thanks! I downloaded directly into Hydrus without tags instead of into a Windows folder, which is what I should have done, so it got the post url attached.
Ok, after some trying, now I get it. I had a bit of trouble understanding it at first, since I thought that when you said 'default' and suggested a 'fresh install', you also meant the 'get tags' checkbox deactivated. But that wouldn't work and wouldn't make sense, I guess. With at least one 'get tags' checkbox activated (depending on how many tag domains you have) and the fetch checkboxes deactivated, it works like you say, yes. As long as no url is attached.
So for newbies: if you still don't get any tags even after activating the 'get tags' checkbox, try the 'force page fetch...' checkboxes too. In my testing, I had to activate both of the fetch checkboxes; either one of them alone didn't work in the case of danbooru.
(550.35 KB 163x153 2004.gif)

How taxing would it be to implement a function that shows you how many other people in the same tag repository have a specific image hash? It'd be very useful when gauging whether you should bother uploading to the PTR versus your own local tags, for correlating between alternates, and for PTR maintenance on unused hashes, but it should definitely be opt-in only, even if it's just hashes being swapped around. Probably something that only sends all the (selected) hashes you have to the PTR, which then sends back a simple integer along with every tag mapping for every file, counting the users that have said hash since the last 'census'. I'd similarly like for duplicate detections to be correlated, but that'd be even more complicated and would need some kind of consensus error-checking system.
Got a small feature request for you, Hydev. When selecting "save this search" as a favorite search, can there be a dropdown list of existing favorite searches to save over? It's much more natural to edit a favorite while actually using it, since you see the results, and then save over the old favorite, but that requires either manually typing in the exact name of the existing favorite, or going into the manage favorites window and entering the changes there a second time. I do this regularly as I manage which files I have and haven't posted in various places, and the criteria for what files I think should be posted in a certain place change regularly.
Suppose I download an image from a booru and import its tags, and later the tags get changed on the booru: some deleted, some added. I found a way to get the newly added tags, but is there any way to remove the tags that were deleted on the booru?
>>15767 Thanks, I think that is probably default Qt behaviour. I will add a hook so it explicitly copies the URLs.

>>15775 Thanks, I will keep this in mind. I think you make a good point about avoiding the mega-complexity of a single dialog. First thing I should do is simply rewrite the objects behind all this stuff so it isn't so ugly to work with. A lot of it is just a bunch of tuples flying around in my twelve-year-old code.

>>15781 >>15790 Thanks, let me know how the new workflow works out. I've added the idea of a 'sticky workspace', although I think I've explained it badly in the UI. Basically, related pairs now stay in the list until you are done with the stuff you are working on, so I'm hoping it will be easier to make multiple edits on the same large group. Let's try it out on some real world situations and see where it does well and badly, and iterate.

>>15803 >>15810 >>15815 >>15817 >>15821 Yep, if you no longer sync with the PTR, you can delete those files. Just do the 'repository updates' system:everything search and ctrl+a delete, and it should work out ok. I'm still working on better 'clean up all this excess shit after I remove the PTR' tech, but this will happen automatically in future. The files are just zlib-zipped json, if you are interested. If you open them up in python and zlib.decompress, you'll mostly just get some long lists of numbers and sometimes tags/hashes. I hang on to these files mostly for network isolation/durability reasons. If the PTR ever goes down, a client can still reprocess to fix holes or whatever, and a single client can potentially create a new PTR if that is ever needed.

>>15813 This is crazy. I'm not the foremost expert in how Pixiv works, but their fucking API (https://www.pixiv.net/touch/ajax/illust/details?illust_id=121160658) is giving different results to my test browser session and to hydrus.
There must be some 'hey my language is english' header that the API is parsing, and hydrus is not sending, and thus it is delivering the translation in the 'correct' language dynamically. It didn't used to work this way (translated tags used to just work); I will look into it, thank you for the report!
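If you want to poke at those update files yourself, here is a minimal sketch based on hydev's description above (zlib-zipped json); the function name is made up and the path is whatever extensionless file sits in your storage folders:

```python
import json
import zlib

def read_repository_update(path):
    """Decompress one of the extensionless repository update files
    (zlib-zipped JSON, per hydev) and return the parsed object."""
    with open(path, "rb") as f:
        return json.loads(zlib.decompress(f.read()))
```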
>>15838 >Yep if you no longer sync with the PTR, you can delete those files Woot.
>>15816 Based, I kneel. >>15819 >>15820 I can't speak with great confidence since I am always overwhelmed, but I feel like I am just getting the hang of Qt now, and just about clearing up some of the last sticky ugly hacks we employed to get through the wx->Qt transition. There's still more to do (I hope to make multi-column lists populate/sort quickly in the next couple months), but I think I can start seriously thinking about moving to a more dynamic and user-customisable UI within the next few years. I'm a boomer and think of the 2008-tier Eclipse editor when I think of modular UI design, but that's what I imagine in my head. Ideally I want you to be able to anchor and resize stuff in different places; I just have to do a whole bunch more cleanup to decouple all the bad code I've written over the years. That's the objective, and we are slowly getting there. >>15825 Interesting idea--I will see what I can do. >>15832 Impossible with current tech, I'm afraid. The PTR doesn't know who has which file unless they submit a tag for it, and even then it is only assumed (since the user may have since deleted the file, or have acquired the tag/hash combo through some more esoteric means like a Client API action from an external db). Best analogue for 'is this file popular in any way?' with current tech is just how many tags it has. The stuff on the boorus obviously has like fifty tags, whereas obscure memes will just have one. I won't add census/tracking as an opt-in to the PTR since I designed it to be maximally private. I also designed it to be bandwidth minimal, and various 'hey just so you know, I have these files' ideas that would require a client to regularly check-in with a server, either sending info or getting it (as in, with some Anons' ideas, of "hey, can the client not sync but instead ask the PTR for any tags it has for new files?", which is a similar concept), will increase network traffic, and server CPU (and now per-account storage), significantly. 
I originally planned to have a 'ratings repository' where we'd all submit ratings on files and it'd combine them into a regularly-updated aggregate, but I backed off since I ran into the same issues and it simply didn't fit my model. Shared duplicate info would be similar, and the moderation workflow would be complex. The PTR is cool, and hydrus repositories are neat ways to share content anonymously, but I'm not the guy to make anything even two steps towards aggregated social media. In future I am going to go even more isolationist as we train AIs on existing PTR content and start to drop common nouns from fixed-mapping sharing (e.g. when an AI model can recognise a 'skirt' on a novel image file, there's little need to share explicit (file-tag) mappings on a per-file basis any more).

>>15835 Yes, 100%. I hate the 'favourite' search management UI, pretty much top to bottom! I keep wanting to update a search to the current one but then remember how much of a pain in the ass it is. This would solve it nicely.

>>15837 No, not really. Even if there were an explicit record of 'hey, this tag was deleted from danbooru' in some API somewhere (rather than just inferring by missing tags), hydrus just doesn't have good 'deleted' pipelines yet. I'm still thinking about all this, but I'd like to put work into it. I want better visibility of deleted content and better ways to pipe it around (and, as you say, explicitly parse it). You'd have to hack something together with the Client API for now.
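For anyone attempting that Client API hack for booru-deleted tags, here is a sketch of building the tag-deletion call. The endpoint, header and payload shape are taken from the Client API docs as I recall them, and the action code 1 = delete is an assumption; verify everything against the docs for your client version. This only builds the request, it doesn't send it.

```python
import json
from urllib.request import Request

API_URL = "http://127.0.0.1:45869"  # default Client API port
DELETE_ACTION = 1  # assumption: 1 = delete in /add_tags/add_tags

def build_delete_tags_request(access_key, file_hash, service_key, tags):
    """Build (but do not send) a POST that deletes `tags` from one file.
    You would compute `tags` as (hydrus tags) - (current booru tags)."""
    body = json.dumps({
        "hash": file_hash,
        "service_keys_to_actions_to_tags": {
            service_key: {str(DELETE_ACTION): sorted(tags)},
        },
    }).encode()
    return Request(
        f"{API_URL}/add_tags/add_tags",
        data=body,
        headers={
            "Hydrus-Client-API-Access-Key": access_key,
            "Content-Type": "application/json",
        },
    )
```

Send it with `urllib.request.urlopen(req)` once you've confirmed the shapes against your client's `/help` docs.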
(105.15 KB 372x284 auto balance in 3...2....png)

>>15840
>In future I am going to go even more isolationist as we train AIs on existing PTR content and start to drop common nouns from fixed-mapping sharing
I seriously, deeply doubt that AI will ever get to that point. Yes, it will get better, and yes, confidence levels will rise, but hardware requirements won't plummet to such a level that you could expect anyone who can run Hydrus to also be able to run a local classifier model, especially at a rate that could classify the thousands to hundreds of thousands of images in a local repository. I may be wrong, AI accelerators may become so ubiquitous that there'll be something wrong with your system if you don't have one, but even that would be a ways off. Even now, classifier-taggers require multiple gigabytes of working memory and a good CPU/GPU to run, and that's with dogshit confidence intervals. To get the 99.9%+ confidence interval required to eschew common tags at all, you'd need a far beefier, more expensive model. And even then, I believe there are fundamental limits on how good a job classifiers can do versus what a user actually sees (like all those thumbnail switcheroo images where the thumbnail is one thing but the full image is entirely different; that will entirely fuck up any classifier, yet both have taggable content).
>>15837 You could delete the tags from the file you are trying to update manually before getting the tags again. Though you will have to also look for an option in tag import options that will ignore deleted tags, or you could clear deleted tag records using the tag migration tool.
>>15841 >efficiency won't plummet to such a level that you could expect anyone who can run Hydrus to also be able to run a local classifier model, especially at a rate that it would be able to classify the thousand to hundreds of thousands of images in a local repository if I'm understanding what you're talking about correctly, I'm already doing it right now, and yeah for vanilla-ish anime-styled artwork (exactly what most of the tags on the ptr are for) it works fantastically. I have a very weak pc as well and it still works, just kinda slow. it's automatic though, so I'm fine with it being slow since I can just let it run in the background and tag for me. getting this up and running was one of the biggest reasons I stopped syncing with the ptr. for me it's just semi obsolete at this point.
(80.46 KB 900x900 efD.jpg)

>>15832 >a function that shows you how many other people in the same tag repository have a specific image hash? A very, very, very bad idea as it will only help glowies to fish for anons with specific files in their drives. Are you in the ZOG's payroll, anon?
>>15843 Yeah, it works great for 90% of anime images. But it struggles with the weird 10%, and the moment you add significant amounts of 3DPD or, worse, furry, it becomes unusable. False positives are also a big issue. It *could* be solved with a whole classification stack that identifies the type of image, then chooses an appropriate classifier to minimize resources used, or just one big fuckoff model to make it all work regardless of type, but those have their own problems. All I'm saying is that I wouldn't depend on it compared to a (semi) curated tag repository. The PTR might have plenty of misclassifications, but those can be fixed, unlike a model that will keep making the same mistakes until you swap it out. Direct classifier integration would be nice, but there are plenty of tools that can directly or indirectly interface with Hydrus to automatically populate tags.
(454.17 KB 2927x2341 anonfilly - it's shit.png)

>>15841
>I seriously, deeply doubt that AI will ever get to that point.
I totally agree. A reminder that there is no such thing as "A.I."; it is just a marketing ploy to sell a bunch of algorithms arranged in tandem as the ultimate hype while gaslighting humans into believing it is the next step in transhumanism.
>>15846 There is such a thing as AI and always has been, the term itself has just been absolutely annihilated by popular vernacular. Classifiers are going to get better, but not so good that it can replace you, the human looking at a screen and using Hydrus. It'll still be useful, just not a panacea.
In migrate tags, removing pending tags should probably not be called "petition".
>>15846 >mlp >green anon It's shit
>>15849 Uh, it actually petitions, so the tag has (+1) (-1).
(304.74 KB 987x1024 Jughashfilly.png)

Also looking forward to that patched Shimmie scraper. Every so often I find a cute little Shimmie and the new parser is great for quickly adding it in. So thanks for that! >>15847 Anonfillyposter is right. Obviously machine learning exists as a technique and produces tools, but to call it "AI" is absolutely a junk marketing term adapted from science fiction. It's not intelligence.
(434.00 KB 609x573 1340988704721.png)

>>15854 Artificial Intelligence is any artificial algorithm capable of making decisions based on its environment. Its environments are pictures and text. It makes decisions based on those. It's artificial intelligence. A* pathfinding is also artificial intelligence. The bullshit marketing is temporary, but the field existed long before it, just like quantum science did before "quantum anything" became the bullshit buzzword of the year. Ironically, the association with science fiction is the most likely source of actually bringing Skynet to life through upheld expectations. AI slop is trained on the very same datasets that morally panic about AI slop. Talk about AI candy.
I am experimenting with FreeBSD and having some issues. I want to have hydrus in that machine and I am wondering if anybody tried running hydrus on FreeBSD. If it doesn't work I can just ditch FreeBSD and install linux.
>>15857 I have problems with BSD itself and I want to know if I should try to solve them.
>>15846 >>15854 >>15855 Good enough poners. Let us not derail the bread and piss devanon off.
>>15857 >>15858 I guess your best bet is trying the source package. Then theoretically speaking, a Venv for BSD should work as it would allow Hydrus to run in a virtual Python environment. I'm dumb as fuck in programming so I cannot elaborate any further. https://hydrusnetwork.github.io/hydrus/running_from_source.html https://forums.freebsd.org/threads/how-to-install-a-virtual-python-environment.92015/ https://duckduckgo.com/?q=can+freebsd+run+linux+programs&ia=web
>>15860 I read about a compatibility layer for running linux binaries on freebsd, but I was wondering if somebody has already tried something like this, so that I don't waste my time on solving whatever is preventing my freshly installed bsd from booting consistently.
>>15813 It is possible, and I had it before, but I removed it and I highly recommend you don't do this. Many (too many imo) of the "english" tags are just flat out incorrect translations, so if you have hydrus grab those, you'll be adding tags that either don't apply or don't even make sense.
>>15847 I use an AI tagger to tag my downloads, and it works pretty well! One thing I like about it: it can recognize untagged loli and tag it as such. That lets me get rid of it before I start perusing what I downloaded.
>>15813 The only AI so far that I have seen give decent translations is GPT-4. So maybe someone could write code for Hydrus using the GPT-4 API. You have to pay for the GPT API though.
>>15864 Actually, I guess you could just copy them into the ChatGPT web app and tell the AI to translate them to English. Then you could replace them with manage tags. But that would be a pain.
>>15863 One problem with the tagger on loli though: it does well on anime/hentai, but not so great on realistic images, as it tends to have a lot of false positives there.
hydrus removes leading and trailing spaces from tags, but not zero-width spaces (U+200B), so it leads to confusing duplicate tags sometimes. I think hydrus can safely trim those from the beginning and end of tags.
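For reference, a minimal sketch of the trimming the anon suggests; the exact character set (U+200B/200C/200D plus the BOM) is my assumption about which zero-width characters matter:

```python
import re

# zero-width space/joiners plus the BOM; str.strip() does not remove these
_EDGE_JUNK = re.compile(r"^[\s\u200b\u200c\u200d\ufeff]+|[\s\u200b\u200c\u200d\ufeff]+$")

def clean_tag(tag):
    """Trim ordinary whitespace and zero-width characters from both ends,
    leaving interior characters untouched."""
    return _EDGE_JUNK.sub("", tag)
```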
>>15862 oh idk i just want simple tags honestly i can't read the tags in Japanese and there's a million of them lol >>15863 how do you do that?
>>15868 Well, I use an old tagger that isn't being shared on github anymore, but I think this one is a child of it. https://github.com/Garbevoir/wd-e621-hydrus-tagger
>>15869 Ahh, I think this is the old one that I am using (the ancestor of the one above)
>>15871 The main thing to remember is: the lower you set the threshold, the more tags it will try to match to the pic. If you're looking to tag loli that hasn't been tagged as such, set it low and it will tag it. Just experiment around with it to find out what you like. I use --threshold .10
>>15861 I guess nobody here knows. If I ever get to solving my freebsd problems and trying hydrus on it, I will report my findings here or somewhere devanon will see it. If I want this to get recorded, where should I write it?
>>15873
>I guess nobody here knows.
I guess you're right. You have to wait for devanon to show up and comment. Take into account the niche status of that OS and the stubbornness (aka fossilization) of its community, and then you will understand why it doesn't have many takers. If you are interested in a fringe OS in the style of TempleOS but with a modern look, then Essence might be your flavor.
https://nakst.gitlab.io/essence
https://www.youtube.com/watch?v=1PMf3FrFGD4
>>15875 I know that the popularity of *bsd systems lies mainly with Apple and PlayStation, so nowhere near open source desktop OSes. I use linux as my daily driver; I just wanted to have some fun, experiment with a bsd and experience the "more cohesive system" they boast about. Currently stuck on it mostly not booting after install XD.
>>15876 Have you gone to the BSD forums to ask about your boot issues?
https://forums.freebsd.org
>>15875 >>Links to patreon and discord
>>15877 Not yet. If I can't troubleshoot it for myself I probably will. I haven't done that much yet.
>>15838 Got it, thank you for your work. Also, maybe this would be an improvement for fetching tags of already-in-db images: currently it seems like they are fetched for each image individually (e.g. gelbooru gallery downloader). Gallery-dl uses their api like this: https://gelbooru.com/index.php?page=dapi&q=index&json=1&tags=1girl&pid=0&limit=100&s=post Which gives metadata on a whole lot of images at once in json format. If hydrus downloaders could use it like this, it would make syncing tags with boorus much faster and maybe less strenuous for the site. >>15842 Thanks, bit of a pain to run it manually like that, but I guess it's fine since it's not something that I would do often.
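For reference, the bulk endpoint above could be driven like this (a sketch only: the query parameters are copied from the example URL, but the exact response shape and any rate-limit etiquette are assumptions):

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://gelbooru.com/index.php"

def build_query_url(tags: str, pid: int = 0, limit: int = 100) -> str:
    # Same parameters as the example URL in the post above.
    params = urllib.parse.urlencode({
        "page": "dapi", "s": "post", "q": "index",
        "json": 1, "tags": tags, "pid": pid, "limit": limit,
    })
    return f"{API_BASE}?{params}"

def fetch_page(tags: str, pid: int = 0) -> dict:
    # One request returns metadata for up to `limit` posts at once,
    # instead of one html fetch per file.
    with urllib.request.urlopen(build_query_url(tags, pid)) as resp:
        return json.loads(resp.read())
```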
I'm having issues with the media viewer. Debian 12, Hydrus 584 built from source. If I play any video in the media viewer and I'm not using the native hydrus viewer for playback, the transitions for further browsing or zooming get messed up until I close and reopen the media viewer. Video playback and browsing and zooming all still work, but there are some jarring visual glitches when going from one file to another or zooming in or out. When browsing to another file, instead of just showing the next file, for a split second the current file will change position and snap its top left edge to the position of the next file's top left edge before showing the next file. The zooming is harder to describe but it seems as if after zooming, for a split second a section of the previous zoom level is overlaid over the new zoom section. The glitches only occur when I view webms/gifs in the media viewer with mpv or the Qt Media Player. If I use the native hydrus viewer for playback, I get no glitches. Anyone have any ideas? I'd like to be able to use mpv for playback.
>>15882 >Thanks, bit of a pain to run it manually like that, but I guess it's fine since it's not something that I would do often. Maybe I'm wrong, but I think you and >>15842 are talking about two different things. Don't the options in the image that >>15842 uploaded show 'deleted tags' in the sense of Hydrus-deleted tags? For example: - If you download a file with 10 tags from a booru, then delete 5 of that file's tags in Hydrus, the activated option 'parsed tags overwrite previously deleted tags' means you will have all 10 tags after downloading/parsing again; when deactivated you will have 5, since you deleted 5. That is possible because Hydrus also saves the deleted tags of files. You can see an option to show them on a file in the 'manage tags' dialog when clicking on the cog symbol. But you are talking about tags deleted from boorus on the website, not from Hydrus, correct? >Suppose I download an image from booru and import it's tags, later tags get changed on booru, some deleted, some added. I found the way to get newly added tags, but is there any way to remove deleted from booru tags? Example: - A booru has a file with 10 tags. You download it and you have 10 tags in Hydrus too. Later the booru mods decide to delete 3 tags on that file on the booru, for whatever reason. You want those 3 tags deleted on your Hydrus file as well, in short: 'updated'. If not, then forget everything I said in this comment :P. I don't think there is a way to do that for your (the second) example. If there is, I'd like to know too. And what is YOUR way to add the extra ones that got added on the booru?
I had an excellent couple of weeks. The manage tag siblings and parents dialogs now load and operate quickly, even when the underlying service has hundreds of thousands of pairs. I also cleared a bunch of normal small work. The release should be as normal tomorrow.
>>15884 Yeah it's not possible in an automated way, that's why I suggested doing it manually, which you could do in bulk once in a long while. For that you will have to delete all (downloader) tags from your selection, then paste their booru urls into a url downloader, but you will need to check that option in the image or the downloader will skip tags that didn't change. So for example you have a file with 10 tags, 1 gets added, but 3 get deleted, so 8 total. If you simply redownload, you'll have 11 tags. If you clear your tags and redownload without the option, you'll end up with only 1 tag. With the option you should get the correct 8 tags.
>>15886 Thanks for the explanation.
https://www.youtube.com/watch?v=LREOmHLII70 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v585/Hydrus.Network.585.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v585/Hydrus.Network.585.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v585/Hydrus.Network.585.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v585/Hydrus.Network.585.-.Linux.-.Executable.tar.zst I had a great couple of weeks getting the tag siblings and parents dialogs to load quickly. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html fast siblings and parents The PTR has been a painful success. It is great, and I am grateful for how it keeps growing, but every time we add ten thousand or a hundred thousand new things somewhere, it lags out some bit of UI where I never thought it would be a problem. Anyone who has tried to work with PTR siblings or parents knows what I am talking about--it can take five or ten seconds, every single time, to load the manage tag siblings/parents dialogs. The same is true for anyone who has programmatically imported siblings from a booru--adding 100,000 pairs can be neat, but editing them manually is then a constant frustration. So! I have rewritten both dialogs, and the long and short of it is they now load instantly. If they need to review some pairs to do logic (like 'hey, does this new pair the user wants to add conflict with the existing pair structure?'), it all happens in the background, usually so quickly you never notice it. We'll see how this actually works in IRL situations, but I do feel good about all this work. There was a lot to do, but it ultimately seemed to go well, enough that I had time for some bells and whistles. Beyond some optimisations and loop-detection fixes, there's a workflow change in that these two dialogs now have a 'stickier' workspace.
The list of pairs has typically shown anything related to what you have waiting to be added, and I have now extended that to say 'and they now stay in view after you add'. Whenever you type in a tag, everything related to that tag is loaded up and stays in view while you work on other things. If you want to clear the workspace, you can just click a button to reset. I hope this makes it easier to edit and even merge larger sibling groups. This is all a big change, and I'm sure I've messed up somewhere. If you do siblings or parents a lot, give it all a go and let me know how it works out. The PTR really is huge, and some larger groups may still take a second or two to load--we'll see. other highlights Hitting escape now deselects any taglist! options->media viewer gets a new 'Do not allow mouse media drag-panning when the media has duration' checkbox. If you often misclick when scrubbing, try it out. I pared down the spammy 'added to x service 3 days ago' lines in the media viewer's top hover. It now pretty much just says 'imported, modified'. If you need to see archived time or something, note that the timestamps are still available on the normal media right-click menu, on the flyout submenu off the top row. next week I have lots of small things to be getting on with, so I'll just catch up on my normal queue.
>>15888 man thank you for the fast siblings and parents parents had always been a little slow for me but the sibling dialog took like 15 secs minimum for me and I just downloaded a bunch of images from a drawthread booru and was trying to sibling a bunch of monster types to species:* and it was unbelievably slow going and now I just finished thanks a ton hydev, this has been the biggest gamechanger for me in a while
(36.47 KB 400x400 animated-webp-supported.webp)

>>15888 >other highlights >Hitting escape now deselects any taglist! >options->media viewer gets a new 'Do not allow mouse media drag-panning when the media has duration' checkbox Thank you so much for making those personal additions! I like to imagine a world where not only I use those but many others too, so you didn't spend time only for my personal gain :D I want to report two bugs: 1) I think you fixed the bug where the animation scanbar didn't stay 'grayish' after you pressed pause on media with duration AND the scanbar was set to hide when the mouse was away. Before, it stayed white in that combination. BUT the scanbar stays grayish only the first time: you press pause, let's say in the center of the media -> then move the mouse down to the scanbar -> it appears grayish, which is correct -> you move the mouse cursor up so the scanbar goes away -> you move the cursor back to the scanbar -> and from now on it becomes and stays white, and the scanbar nub disappears too. Additionally, I found that the scanbar color of animated .webp is aqua or cyan. Is that on purpose? Just tested only two though. Animated .webp attached (not sure if upload will work) 2) when you activate the 'no, hide it' checkbox to hide the scanbar and 'apply', the pixel size of the scanbar defaults to 5px after you deactivate the checkbox again. That means if you had set something you don't necessarily remember, like 17px, and activate the checkbox and apply, the next time you deactivate that checkbox it is set to 5px instead of 17. It doesn't remember it. You once did something to remember the 'how many tags to show in the children tab' option in the 'tags' options (on the very bottom), because it was defaulting to 20. Maybe you can do that here too? I actually just tested the 'threshold' option on the bottom of the media viewer settings too. If you activate the 'do not use' checkboxes and apply, then reopen the settings, it changes to 1 on every single one of them, which is not the real default.
So the problem is, once you click those and apply, you probably can't go back, since you forgot the real defaults. I hope there are not many more checkboxes with that behavior in hydrus. Going back to defaults is kinda ok, I think, if the value can't be remembered, but the threshold ones are clearly wrong. I did some video recording of my settings back then at least, so I'm kinda safe.
For sites with expiring links, is it possible to make next search/gallery page download always wait until all queued files have finished downloading?
>>15888 Seems like the parent management page is taking ages to load with this new update for tags with lots of children, though only once, and then quickly thereafter. Is this intended behavior?
>>15884 Anon >>15886 explained it. As for adding the extra tags, you just check the force page fetch option in the tag import options.
>>15894 I guessed so, thanks.
>>15888 Hydrus freezes when I add a new parent to a tag now. I can search for tags under the "set parents" section, but as soon as I click on one it slows down until it stops responding. It also freezes sometimes if I use the media viewer in fullscreen mode. Using the Windows version.
>>15892 Actually, it slows again as soon as I try to re-search a tag that recently had new relations added. If I had a slower computer it would probably freeze like >>15896 It takes anywhere from 10-30 seconds to load the relations for a tag that has around 5-30, and it can take several minutes to load the relations for a tag with 30-120. Interestingly, it only took a couple minutes to load something like my genre:action tag's relations, which are around 1,250. Did this update improve loading relations for tags with large numbers of relations, which probably massively benefits the PTR, at the cost of decreased efficiency when loading relations for tags that don't have that many relations?
>>15897 Seems to have started working quickly regardless. Strange. Might have needed to do a lot of preliminary work after switching to the new system?
>>15888 >So! I have rewritten both dialogs, and the long and short of it is they now load instantly. That's really good. >now have a 'stickier' workspace. The list of pairs has typically shown anything related to what you have waiting to be added, and I have now extended that to say 'and they now stay in view after you add'. It was sometimes already hard to find the entries directly related to the entered tag. In "edit subscription query", the "tag import options" button name does not show if the options were modified.
The derpibooru downloader downloads descriptions without the links in them. :(
>>15898 And now it's fucked again. Just took 30+ seconds to load 2 pairs.
>>15902 I had some problems too searching for tags in the tag parents dialog for the PTR, without the 'show all tags' checkbox activated. Just the typing alone was slow, like a delay of 300ms or so between each character. Activating the 'show all chains' checkbox kinda froze it, with the typical 'hydrus client 585 (not responding)' window that comes up when stuff gets slow or crashes in Windows. At that time I hadn't restarted the pc or the client for a while, which could have something to do with it, or not. After restarting the client it got fast again. Other than this I didn't have slowdowns or crashes because of the sibling/parent dialogs, but I also didn't use them. I'll check from time to time how it behaves after having the client open for a while.
>>15903 > just the typing alone was slow, like a delay of 300ms or so between each character My search in the parent management window is instant so long as it's not loading pairs. If it is, I get the same delay as you. Are you sure it wasn't in the middle of loading pairs for a tag you had already entered?
>>15904 I'm not sure what you mean. It doesn't really load tags for me when I enter tags, since now it is supposed to be fast. So there is no loading really. Except the 'show all tags' checkbox, which crashed it once, but the slowdown was without it activated. Maybe the 'show whole chains' checkbox was activated tho. But I don't think it was during 'loading'. Can you give a little tutorial on how to reproduce what you say? What checkboxes and what tags, if you don't mind?
Now I'm getting even more weird behavior. It said "loading", but unlike usual during pair loading, it allowed me to enter a new pair before it finished loading. Normally the button just doesn't work until the pairs are loaded. >>15905 I haven't touched any of the checkboxes, so this is all happening as I normally use the program. What I am talking about being slow is the loading of pairs, which is indicated at the top by the "wipe workspace" button. Hydrus is having issues loading the pairs related to tags that have been entered, not just typed, into the set children/siblings/parents boxes. During this delay, typing in tags and the tag search autocomplete in this window are slowed for me.
>>15906 Also, upon clearing that popup, it loaded the pairs, but simultaneously cleared the already entered tags so I had to enter them again.
>>15906 I see you have no PTR, so you can't give me a tag to reproduce a long loading. To search a tag with many pairs, my mind came up with entering 'series:pokémon'. It has ca. 600 pairs, and with the 'show whole chains' checkbox activated it has ca. 16000 pairs. And both are almost instant (max half a second), that's why I can't really type in stuff while it's 'loading'. That's what the update is supposed to do. But the slowdown I had was weird behaviour that hasn't occurred again since restarting the client, so let's see how that manifests in the future. I can't help you really. Do you use an HDD or SSD? Windows or Linux? Hydev will most probably give you the answers to what's the problem on your side, today or tomorrow I think.
>>15908 SSD, Wangblows 10.
I just noticed some files have this (1) (+1) tag instead of only (1), like in pic related. This is literally a non-issue, and it seems like it appeared on files pending deletion after a duplicate filter. What does it mean, though? Tags that were merged?
>>15841 >>15843 >>15845 >>15846 >>and more Yeah, sorry, I don't mean some clever talking Cortana is going to tag your images via neural splice SINGULARITY NOW BROS, I just mean stable-diffusion-like models are getting better and better and converting image-to-text and vice versa, and we are seeing it work already with the hydrus Client API and the danbooru/e621 models. All indications are that this tech will improve further for some years, and modern GPUs are probably going to have even more hardware acceleration and stuff for all this, so I expect hydrus to use it more too, and I would like to have more ways for the manage tags dialog to provide suggestions and perhaps have the client call external programs to auto-generate some tags on import etc... Then, if we can recognise regular nouns like 'skirt' or 'blonde hair' in an arbitrary image, be that anime only or real life too, then the PTR wouldn't really have to keep sharing those words so much. The same may be true for some characters or series names, but that is less generalisable and I imagine more prone to false negatives and positives. Any future hydrus plugins to this sort of tech will need careful filters to ensure we don't try to apply 'title' tags and other specific stuff to our model training or suggestions. Also, in terms of precision, we don't need 99.9%. If these programs can offer, let's say, 90% true positive, and if they offer an "I am 85% sure of this tag" metadata so we can filter out unreliable suggestions, then we are getting a huge productivity boost. If 90% of the 'skirt' tags you would add appear without you having to do anything, then that's a ten times multiplier on your human time, minus the problems caused by false positives. The only question, then, is where the threshold should be to keep those false positives low enough to be a worthwhile pain. Since these models appear to be getting better every few months, we know the ratios are only ever-more in our favour. 
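As a toy illustration of that kind of thresholding (the (tag, score) tuples and the namespace blocklist here are assumptions for the sketch, not any real tagger's output format):

```python
# Hypothetical post-processing of a tagger's scored suggestions: keep only
# tags above a confidence threshold, and never auto-apply namespaces like
# 'title:' that shouldn't come from a model.
BLOCKED_NAMESPACES = ("title:", "filename:")

def filter_suggestions(scored_tags, threshold=0.85):
    return [
        tag for tag, score in scored_tags
        if score >= threshold
        and not tag.startswith(BLOCKED_NAMESPACES)
    ]

suggestions = [("skirt", 0.97), ("blonde hair", 0.91),
               ("title:abc", 0.99), ("hat", 0.40)]
print(filter_suggestions(suggestions))  # -> ['skirt', 'blonde hair']
```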
There's also some more esoteric ideas like automatic archive/delete filtering (or some new pre-filtering workflow or filtering context layer like 'I think you will like this' we invent to prep your human work). This is stuff we can think about in some years if models become trivial to train and run. We'll see how it all shakes out. >>15849 >>15851 Thanks, something here seems off. I'll check what it is doing in this case; I think it might only be doing petition, and not rescind pend. >>15867 Thank you, I will check this out! >>15857 >>15860 >>15861 >>15873 >>15875 I am afraid I am not at all a Linux expert, and when it comes to anything unusual like FreeBSD I really know next to nothing. If you have some explicit error messages, I can look at them, but the answer is probably going to be 'try running from source', and then, if that fails, it'll be looking through StackExchange for posts about getting OpenCV through pip on FreeBSD or whatever the exact problem is. You will be much more adept than I am at that. If you do learn anything, please do send it in. You can email me if you like, or just post here, perhaps with a pastebin if it is too long. A couple of anons have written whole guides that I have attached to the help here and there, in a similar way.
>>15882 >If hydros downloaders could use it like this, it would make synching tags with boorus much faster and maybe less strenuous for the site. Unfortunately, almost all the big booru engines supply their tags in an 'unnamespaced' way in their APIs. I guess internally, boorus store 'samus_aran' as 'hey this is a character tag', as per here https://donmai.moe/wiki_pages/samus_aran , where it is inherently green, and it will be grouped with other green tags in the normal html post view, but that green namespace is not generally explicitly spoken when the tag is referred to technically in URLs or the API. Hydrus could have tag siblings for every booru 'samus_aran' to 'character:samus aran', and perhaps we will in future import all this data to get a nice mapping, but we don't right now, so in order to get nice character, series, creator tags, we need to grab the html. If the APIs started separating the tags based on their 'artist-tag', 'character-tag', and similar, then we would be able to do this, but I think this is just an unfortunate difference of hydrus and booru design--I declare namespace explicitly; they do not. >>15883 Unfortunately the stock answer on this is 'I got mpv to work through duct tape and spit, so if you have anything unusual as OS or Window Manager, I can't guarantee anything'. My media canvas code is also pretty shoddy in the way it lays some things out, which is probably the cause of the resize/position flicker you see. On OSes happier with Qt, that stuff tends to get folded into one frame, so you never see it, but if the OS forces a repaint on every update or whatever, you then get the flicker. I don't have a nice answer for you right now, especially since you are running from source already, but I am planning to add some DEBUG checkboxes for mpv in the nearish future that will change how I load and swap out videos. 
Mostly I will be going in the direction of 'use a new mpv video for every video', rather than the current recycling tech that I do for stability purposes, which may not help your situation much, but we'll see. I'll also be working more on cleaner layout code in my media canvas, which I hope will improve things for you. You might like to try using a slightly older or newer Qt version. If you rebuild your venv and select the (a)dvanced setup, you'll have several Qt versions to choose from. I expect your flicker behaviour will change, so perhaps one is better than another? If you do find a good one, please let me know and I'll update the help etc. There may also be a magic environment variable you can run, like QT_QPA_PLATFORM=xcb, that will launch hydrus under a different Window Manager or whatever, but I'm afraid that sort of stuff is beyond my expertise so I can't talk too much about it.
Selecting 100 rows out of 200,000 should be like 50ms at most, but if it is doing it backwards and reading 200,000 100 times, that could really add up. Sorry for the trouble, I'll keep pushing!
I got about a 25-30 second delay here when trying to load pairs for a tag with only 7 pairs.
>>15914 Also, it was the last thing I did before closing the parent management window and then closing Hydrus, if that helps you find it in the log.
>>15890 Thank you for these reports. I will see why the scanbar is not getting the right redraw calls when it unhides. It turning cyan is intended, but it is odd--that's my native viewer, which animated webp uses (mpv doesn't support animated webp yet), and the teal area is a little visual indicator of the frame buffer my internal renderer has pre-drawn. My native renderer is a weird debug thing, I am pretty embarrassed about it, hahaha, and it has a few weird old quirks like this. I expect I'll steamroll over it one day with an overhaul that brings it all up to newer standards and more Qt-friendly code. (I wrote all this shit back in wx, and it is secretly a really ugly software renderer that is eating bmps piped over from ffmpeg and throwing them on screen). And yeah, thanks for the note about the bad 'noneable' controls. Some of this stuff I can't fix nicely (with a memory of what you had set before) since I store the same 'number or none' in the same options cell, but I can at least make all those controls default to a nicer number than the '1' you are seeing. I just need to go through them, or programmatically figure out a nicer initialisation/setting routine. Another requested thing would be 'reset this page to defaults', which I agree I'd like, but it'll take some thought. I'll work on it. >>15891 Not yet. I know exactly what you are talking about though. Unfortunately my downloader doesn't have quite the tech to support this, and there will be an awkward problem to get around in how subscriptions target a downloader like this, but I hope to have some options around this in a future iteration of the downloader engine. I am sorry to say it may have to wait for a larger overhaul of the whole system, since the current behaviour is pretty core to the whole thing. We'll see if I can tuck this sort of thing into a related 'retry later' tech I want to add to handle some error states.
>>15900 >In "edit subscription query", the "tag import options" button name does not show if the options were modified. Thanks, I will check it out. >>15901 I will check it out. I don't know how this thing works, but if it is just pulling the visible 'text' of the html, and the URL you want is in <a href="xxx">, it may be tricky to get that in a neat way. The hydrus note tech is only plaintext for now, so no proper rich text or links or anything yet. >>15906 Thank you, it looks like I missed something in the pair-loading-queue logic too, causing that error popup. I'll look into it, and sorry again for the trouble. >>15911 If you go into 'manage tags', I suspect you'll see those tags in one service, probably your local 'my tags', and also pending to the PTR. That taglist you are looking at is probably in 'all known tags', which merges all services and can cause some odd count summaries like that when two services agree or disagree on a tag. When you commit the tags, the (1) (+1) should merge to just (1). If you don't have the tag in multiple services, let me know, because that could be a miscount. >>15914 >>15915 Perfect, thank you. I see the slow parts, and will investigate this this week. "2024-08-10 15:32:18: Profiling db job: read tag_parents" if you want to see yourself. This shit is supposed to take a few milliseconds, and it reliably is instant on a smaller test service, but there's a couple of 13 second delays for you in one method. I will be putting time into this this week and am determined, if I can, to get it working correctly.
(5.78 MB 400x224 cum zone.mp4)

>>15916 >"2024-08-10 15:32:18: Profiling db job: read tag_parents" if you want to see yourself. <Take a look <cumtime is a stat Lmao.
>>15916 >tag in multiple services >pending to the PTR You're correct on both accounts! By clearing the "pending tags", that (+1) disappeared. Thank you~
I added a "downloadable/pursuable url" content parser to the 4chan parser that tries to get urls from the comment (text of the post), and just found that a picture gets assigned, as known urls, every url mentioned in the post it was attached to. Aren't those urls for different files Hydrus must download?
>>15919 And I would like those urls not to get the *tags* the picture is supposed to get.
>>15908 (me) >>15914 I also found a very reliable way to reproduce the 'loading...' right next to the 'wipe workspace' button. I ask all you guys to try it too (at your own risk, don't forget backup) and mention here what happens on your end. So you do the following: 1. open 'tags' -> 'manage tag parents...' 2. PTR tab (if you don't use the PTR then you're out of luck here, or lucky, whichever you prefer :D) 3. no need to activate any checkbox 4. type 'dragon ball' and choose (it is the first option, so actually no need to 'choose') the namespaced 'series:dragon ball (ca 216.000)', enter; it should give 74 pairs at the time of writing this comment 5. type 'dragon ball' and choose (the second option for me) the UNnamespaced 'dragon ball (ca 8.400) -> series:dragon ball', enter. It will start 'loading...' for over a minute at least, and while it does that the client will be unresponsive or very slow. It is important that you enter the tags from 4. and 5. in exactly that order; otherwise 5. will give 0 pairs and 4. will give you 74 pairs without loading/crashing/slowdown. If for some reason it works fast for you, close the 'manage tag parents' dialog and open it again. With two tries it is guaranteed for me. With only one try after starting the client it works almost guaranteed (I remember it worked at least once, but I am all over the place now; perhaps it is even guaranteed on the first try). Good luck fixing this Hydev.
What is stored in Hydrus's backup file? I want to move my installation but I also want to do a "fresh install" and then get my backup. I'm just worried about losing something
Any way to set default tag sorting, grouping types? It's annoying having to set it every time I open an image.
>>15922 https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up Read the 'backing up' chapter and the 'clean install' bit higher up, which you probably want. Also, I'm not sure what you mean by Hydrus backup file. There is not one file where everything is backed up within. You have to back up your whole Hydrus folder, or at least the db (database) folder, which is the most important for your media and tags. You want to do a clean install with your database and files saved but settings etc. put to default? Then you have to back up the db folder, do a new/clean install, and put it back in. Read the link to be sure. >>15924 file -> options -> sort/collect on the left -> read the options here, but what you want is probably the third from the top, 'default tag sorting in the media viewer'. I would advise anyone new to Hydrus to read the whole website (or at least the 'getting started' part) and go through all the 'file -> options' at least once. It will take some hours, but you will find out a lot of helpful stuff.
I right-clicked a tag, opened the parents window for it, entered two tags and click "add". > Hey, somehow the "Enter some Pairs" routine was called before the related underlying pairs' groups were loaded. This should not happen! Please tell hydev about this.
Can you make it so that 'dateparser' stops respecting the locale when converting timestamps automatically? Like a toggle in the "edit conversion" window, or in settings to set it globally? I have a situation like this: "7/18/2023 8:32:00AM". For some reason, in my locale the client freaks out and says 'nope, can't do it with this date'. But if I boot up the client (using a setup from source) with the 'en_US' locale enforced via a setlocale call in the client boot script, it will happily chew it and spit out the correct timestamp. I had to remove the last 2 characters just to make my homebrew parser thing work, and I don't want to do that ugly setlocale hack.
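In the meantime, one locale-independent workaround idea for that exact format (a sketch only: hydrus's real parsing goes through dateparser, and the strptime pattern below is an assumption fitted to the one example string; "%p" still expects AM/PM-style markers, which some locales lack):

```python
from datetime import datetime

# Parse the example timestamp with an explicit format instead of relying on
# locale-aware guessing. The numeric month/day/year and 12-hour fields are
# parsed the same way regardless of system locale.
def parse_us_timestamp(s: str) -> datetime:
    return datetime.strptime(s, "%m/%d/%Y %I:%M:%S%p")

print(parse_us_timestamp("7/18/2023 8:32:00AM"))  # -> 2023-07-18 08:32:00
```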
is there a way to retrieve the hash(es?) for a file my client knows about but doesn't have?
>>15912 Actually, the autotagger I use on my downloaded booru files works pretty well! I'd give it 97% correct, as there are a few exotic items like "spreader-bar" that it occasionally makes mistakes on. But everything else is fine. It even tags things like "hair bow". It seems to be less reliable with 3D / realistic stuff because it's been trained on anime drawings. So, if you're tagging your Playboy collection, it might mess up on a few rarer tags. But it's still pretty good, and I use it on all my collection.
I think it'd be cool if there was a way to zoom in on the file history graph. It's a bit hard to see some of the differences between the lines when the entire history of your database is always visible. Just seeing the last year would be helpful.
>>15928 I know a way, kinda, yes, but with the exception of files that have no deletion record AND never had any tags. From those you can't get any hash afaik. For the others, the SHA256 (i think) only. First you have to activate advanced mode: help -> advanced mode (so the checkmark is there). Now you can look at other file locations where the 'my files' button in the search pane is. You need to check two file locations when pressing the 'my files' button.
1. Change the file location to 'deleted from anywhere'*. Now you see files that you deleted from any location, but these also include ones deleted from one location that are still in other places and still in your client. I think you only want the files that are not in your client anymore, which are the blurry thumbnails with the red thumbnail background. Now use the system:file service search predicate. Search for 'is NOT currently in all local files'. This will give you only deleted files that aren't in your hydrus anymore -> the red thumbnails. Now you can select all those, right click -> open -> in a new page. This will send them to a new page with the search predicate 'system:hash is X' visible in the search box on the left. Shift+double left click on 'system:hash is X' will open the edit window where you can copy all the hashes nicely.
*This file location will give you files that you deleted WITH a deletion record saved, including those without a tag, which is not the case for 2. (see below)
2. Change the file location to 'all known files with tags'**. To make only the ones visible that aren't in your client anymore, use the system:file service predicate again and do the same as above. You can just open the page from 1. again and change the file location to 'all known files with tags' there; the correct search is already typed in. Then you should have only red thumbnails with tags, which you send to a new page too, and there you can check and copy the hashes as described in 1.
**This file location will give you files that you deleted WITH or WITHOUT a deletion record saved, but that at least had/have one tag. Once you delete all the tags from a file there, you will not find the file anymore (at least within the client) if you deleted the deletion record as well. If you saved the deletion record, it is still in 1. ____ Note that theoretically you might only need one of those two file locations, but depending on what you did (saved deletion record or not, tagged files or not), you probably need both. This way you can also retrieve hashes from the PTR, by choosing the 'all known files with tags' file location + PTR tag domain. Search for files, select them, send them to a new page, and get the hashes from 'system:hash is X' in the search box, exactly as described above. Not sure if there is a better way for all of what i described.
Dev, a few years back, I asked you if Hydrus Network could remove all traces of a file ever being in the database (hashes, URLs, etc.). You said that it currently couldn't, but it was something you planned on eventually. I haven't kept up with development, so did this ever happen?
edit parser - example urls - edit - Esc without editing - cancel It asks if I want to save the changes.
I'm trying to download stuff from reddit, but I'm only able to download images - videos return an error. What should I do?
>>15933 https://8chan.moe/t/res/14270.html#15536 Not yet, but still planned. Read the first answer from Hydev and the corresponding post he is answering to.
>>15913 >You might like to try using a slightly older or newer Qt version. If you rebuild your venv and select the (a)dvanced setup, you'll have several Qt versions to choose from. I expect your flicker behaviour will change, so perhaps one is better than another? If you do find a good one, please let me know and I'll update the help Thanks, but it seems like different Qt versions don't affect the flicker for me. I have Python 3.11.2 and tried rebuilding with the different available Qt 6.4.x and Qt 6.5.x versions, but there was no change. Tried the environmental variable too. I remembered I had no problems on Windows, so I did some investigation. On my Windows machine (i7-2600, 8GB RAM, no graphics card) I was running Hydrus v578. I wondered if it was the version so I tried v584 on Windows and noticed the same flicker I get on Debian: with mpv, after viewing any video in the media viewer, any subsequent zooming has a split-second resizing/repositioning flicker until closing and reopening the media viewer. The difference is on Windows I don't get any flicker while navigating between items with the media viewer, only when zooming. After trying more builds, it looks like v579 was the first version on Windows where I get the zoom flicker. So I tried v578 on Debian building from source, and... both flickers are still there, no change. Oh well. It's not a huge problem for me. The browsing flicker isn't that bad. The zoom flicker is bad but I don't zoom that often and I can always just close and reopen the media viewer to remove the flicker anyway. I just started using Linux a few months ago so there's probably a lot more I could try on my end that I have no idea about.
I had a great week. I fixed some bugs, finished some advanced multiple local file service features for the Client API, and got siblings and parents loading faster, particularly for the new dialogs. The release should be as normal tomorrow.
Is there any way to find related files by tags? Maybe by searching for files with some of the tags suggested as related, but including the less popular tags, too. I imagine it could take a long time, but not as long as the siblings window was taking.
Any tips on increasing the performance? I get freezes for 5-10 seconds every few minutes. My DB size is 98 GB, media files are 210 GB. All on fairly fast NVMe SSD. Got 64 GB RAM and 7800x3D. Shutdown maintenance was recently performed. Picrelated are my current speed/memory settings.
https://www.youtube.com/watch?v=CVAaLlOUD00 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v586a/Hydrus.Network.586a.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v586a/Hydrus.Network.586a.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v586a/Hydrus.Network.586a.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v586a/Hydrus.Network.586a.-.Linux.-.Executable.tar.zst Hey, I did a hotfix to fix a stupid bug when moving from videos to images. If you got the release within twenty minutes of this post going live, get the updated v586a above! I had a great week getting siblings and parents lookups running faster and finishing some long-planned Client API work. The update may take a minute this week! Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights I am happy with the new siblings and parents dialogs, but unfortunately the fetch jobs were frequently running super slow on the PTR. This has been a long time problem in other sibling/parent places, and I could never really figure it out. This exposed the problem better and I simply put a bunch of time into the database sibling/parent storage structure and search code this week, and I think/hope I have fixed the worst of it. I also fixed the crazy-long lag spikes we were seeing, which was, unfortunately, just me being stupid last week. If you sync with the PTR (or not!) and have had slow sibling/parent lookups (including in places like the tag autocomplete results list), let me know how it goes! If you have the media scanbar set to hide completely when the mouse is not over it, I think I fixed the issue where it would come up blank if the media was paused while the scanbar was hidden! 
The options widgets that are an editable number with a checkbox beside saying 'no limit' are now initialised with a nicer default number when they start with 'no limit' checked. Previously, this stuff was all initialising to 1 every time, which wasn't always helpful if you actually wanted to go edit it. I'm pretty sure all the 'noneable' integer widgets in the options dialog now soft-initialise to the actual defaults those numbers are on a fresh client. If you use multiple local file services, then when you middle-click a tag in the media viewer, the new search page now correctly retains the original file domain of the media viewer. Although I use multiple local file services myself on my IRL client, I do not browse around in them all that much, so let me know where else this sort of stuff defaults to 'all local files' or 'all my files'. I have removed a hard limit that said 'don't run an import folder for more than an hour'. If you have a mega import folder with hundreds of thousands of files, let's see how it goes. If you have done Client API or tag migration work and ended up with some bizarre tags that are both pending (+1) and petitioned (-1) to the same service, check the changelog this week! client api Thanks to a user, the Client API call that renders images can now output in jpeg and webp, can change the quality of the output, and will render to a target resolution! I also think I finally finished off the first full version of 'multiple local file services' Client API support. You can now set a custom import destination for the 'add file' and 'add URL' commands, and you can now copy files from one local file service to another using a new 'migrate_files' call. next week I've been working on several things recently that can populate a multi-column list with a hundred thousand or more rows, and it has reminded me that my core list code relies on an old hack in it that makes initialising and sorting such big lists super laggy. 
I have researched how to improve it and hope to do so next week! Unfortunately, I do feel myself going down with something, so it might be delayed.
Edited last time by hydrus_dev on 08/15/2024 (Thu) 00:28:44.
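For anyone wanting to script the new 'migrate_files' call right away: the access-key header and default port below are the standard Client API ones, but the endpoint path and JSON key names are my guesses from the changelog wording, so treat this as a sketch and check the Client API docs for the real spec. It only builds the request, so you can inspect it before pointing an HTTP client at it.

```python
import json

# Standard Client API basics; the endpoint path and body keys are ASSUMPTIONS.
API_URL = "http://127.0.0.1:45869"   # default Client API address
ACCESS_KEY = "0123456789abcdef"      # placeholder access key

def build_migrate_files_request(hashes, dest_file_service_key):
    """Return (url, headers, body) for a hypothetical migrate_files POST."""
    url = API_URL + "/add_files/migrate_files"  # assumed path from the call name
    headers = {
        "Hydrus-Client-API-Access-Key": ACCESS_KEY,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "hashes": hashes,                           # assumed key
        "file_service_key": dest_file_service_key,  # assumed key
    })
    return url, headers, body

url, headers, body = build_migrate_files_request(
    ["ad6d3599a6c489a575eb19c026face97a9cd6579e74728b0ce94a601d232f3c3"],
    "deadbeef",  # placeholder destination file service key
)
print(url)
```

Swap the placeholder access key and service key for real ones from services->review services before actually sending anything.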
>>15938 Thanks, king. Some people only post their stuff there, so cut me some slack. >>15938
>>15935 >>15938 I haven't tried it in a while, but when I last used it, yt-dlp also worked well for reddit vids
>>15945 oh wait, that's because the flathub application is a frontend for yt-dlp. makes sense
>>15945 >>15946 Oh shoot, it DOES work too! Nice, yt-dlp is so versatile.
(36.92 KB 680x454 told you so.jpg)

>>15947 > it DOES work too!
>>15945 >the flathub application Incorrect, FlatHub is a website hosting programs in "Flatpack" format, like the "Store" on Mac. Flatpack allows shipping all software dependencies in one package that, in theory, can run on ALL Linux distros. Kinda like AppImage files, but centralized on one website.
1a. Is it possible to install several Hydrus clients with the .exe at the same time and run them simultaneously? Like install to folders A, B and C (all on C:)? I don't wanna try it and risk something getting overwritten, but maybe it should be possible since Hydrus is completely portable? I have a main .exe install and just tried it by extracting the ExtractOnly.zip version to the Download folder, which works and runs at the same time. I'm fine with them being in completely different folders, each with their own database. But I'm not sure about several .exes, and whether there would be several start menu folders etc.
1b. Can the .exe and the .zip versions update each other? Meaning if i have an .exe install, can i extract the .zip over it, and vice versa? For example, if i have the .zip version on an external hdd/ssd (E:) and i choose to move it to my main disc on C:, can i then update this version with the .exe, and if so, would it have any implications for the update process? For instance, a 'clean install' was necessary for some .zip versions in the past. If i start updating the .exe over the .zip, is that already a clean install like the guide suggests for the .exe version?
2. What is the best way to mass rename a tag? I assume right now the way is to search for the tag, then select all files, then 'manage tags' and add the new tag and delete the old?
(560.03 KB 247x482 yes.gif)

>>15954 >I assume right now the way is to search for the tag, then select all files, then 'manage tags' and add the new tag and delete the old? Yup. That's the way.
>>15953 it's not incorrect. it's hosted on flathub, so it's a flathub application. I know what flatpak (not flatpack) is
>>15954 >mass rename a tag the majority of the time, it's better to make a new sibling. in fact I can't really think of any situation where I'd specifically want a one-time "rename" unless the tag is actually just wrong outright
>>15954 Not sure if you can have multiple installs using the installer, but for extract only you can. Also you can use multiple separate databases with a single install. Just make a new shortcut and add -d="path to different db folder" at the end of the target path.
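To make the shortcut trick above concrete, here is a tiny sketch of what the edited launcher boils down to. The paths are placeholders; the real -d syntax is documented in the launch_arguments help page.

```python
import subprocess  # only needed if you uncomment the launch line

# Placeholder paths -- point these at your real install and alternate db folder.
HYDRUS_EXE = r"C:\Hydrus Network\hydrus_client.exe"
ALT_DB_DIR = r"D:\hydrus_alt_db"

# Same executable, different -d target = a completely separate database.
cmd = [HYDRUS_EXE, "-d=" + ALT_DB_DIR]
print(" ".join(cmd))        # the command line the shortcut runs
# subprocess.Popen(cmd)     # uncomment to actually launch the second client
```

Each shortcut with its own -d folder gives you an independent client with its own settings, files, and tags.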
is it possible to make it so that files that are downloaded and have a known url of a certain type get a specific tag added to them? I feel like there's a way to do this but I can't think of it.
>>15960 Network > downloaders > manage default import options... then you double click a url class and add an 'additional tag' to a tag service you need. Is that what you want?
>>15961 kind of, except that I need that for a file url class, not a post url class. Since those are missing, I'm guessing this isn't possible.
>>15958 underscored tags.
>>15963 that would definitely be a case where you should just make a sibling relationship
>>15963 >>15954 Definitely what >>15964 says. This is precisely what siblings are for. If you set a sibling like "some_random_tag" --> "some random tag" then you will never have to correct underscores for that tag again. Mass deleting "some_random_tag" and adding "some random tag" like you describe would have to be done periodically to keep things tidy. With siblings it's one and done forever.
>>15943 This seems to have fixed those lag spikes for my local sibling tags. Great work.
>>15965 >Mass deleting "some_random_tag" and adding "some random tag" like you describe would have to be done periodically to keep things tidy. With siblings it's one and done forever. IMHO, keeping things tidy is the proper way to do it. Siblings are a Mickey Mouse way for hiding wrong, not wanted, and deprecated tags.
>>15968 >Siblings are a Mickey Mouse way for hiding wrong, not wanted, and deprecated tags. If you don't use siblings for that, then what are siblings for?
>>15965 yes, but then you will have both underscores AND non-underscores, easily doubling the amount of tags you have underscores are a very simple case for automatically and permanently converting a set of tags based on sibling relationships, there's no reason not to if you can
>>15969 Well, when I began using them it was for what I really thought they were for: to center (pinpoint) on a tag that encompasses a general idea. For example: - ideal tag: npc - replacing tags: homo, libtard, covidian, sheeple, retard, low iq, commie, brainwashed, idiot, collaborator, automaton, parasite, degenerate, golem, adversarial, ... But soon I found out that all those secondary tags were out of my sight and I wanted them back for more precise searching. So, I dropped sibling usage for good, never to touch it again.
Pasting a tag generated from 'system:file service' doesn't seem to work. For example, I paste in 'system:is currently in my files' and hit enter, but it does nothing (the tag won't be added). Not sure if the functionality was ever added, but I remember you adding support for a lot of system predicates before, so maybe it stopped working.
>>15969 Having searchable variants. Supposedly the correct danbooru tag for tutorials is 'how to', so adding a sibling 'tutorial' to that lets me just search 'tutorial' if I ever forget that the proper tag is 'how to'. Anyway >>15954 I think the cleanest way of doing it is editing the database itself. You simply replace all underscores with spaces in the table that has all the tags. That way you won't create duplicate entries (the tags with underscores would still stay in the db even if no file has them) and you don't need to create siblings. Though if you regularly pull bad tags with underscores from somewhere, then siblings would probably be the better option. If it's just a one-time user error, then you could try the db approach. >>15975 That's what parents are for.
>>15975 yeah, that's what parents are for. a tag implies a parent VERSUS a sibling replaces a tag parents are for every instance of X automatically also tagging with Y, whereas siblings assume you meant Z when you tagged X
>>15975 Use parents, npc. :^)
>>15943 After I change parents, a db lock delays things in the parents window. The tag on the left takes time to appear, the add button takes time to activate, and autocompletion takes time to start. I am not sure it saves any time.
>>15981 I had that issue at first on v585 as well with both parents and siblings. Some heavy database magic basically freezes everything for a few minutes. After 2-3 times of that though it finished whatever it needed doing and both the parent and sibling dialogues are basically instant now. If you're running a build older than v585 then it will unfortunately always be like that. Parents and siblings were massively optimized in v585.
How do I force Hydrus to reimport files from a booru? I've recently split out a new downloader tag service for a specific site and it won't import the tags into that specific service because it detects those files as already in DB.
>>15983 Immediate update: Reimporting new tags works with Gallery, but not URL Import. Inconsistent behavior, URL Import should also check for new tags, or at least be configurable to do so (unless it is and I'm missing it)
>>15983 A manual way to do it would be to export the files with their tags as a sidecar, then re-import them with the sidecar tags set to be sent to your new tag service. It will say everything is already in the database, but it should still add the tags to the service. There might be another way to do it, but that will work as long as you're okay with the tags being in both the old and new services. I do sometimes wish there was a right-click option in the file log to force re-importing.
>>15983 Migrate the tags you need from service to service using the tag migration tool in 'tags > migrate tags...' or you can find it in the tag manager under the cog icon (this works on the files you're editing). If you really need to redownload, then go to 'network > downloaders > manage default import options...', open the url class of a website you want to download from, flip the drop down to custom and check the two force fetch checkboxes. Also check the tag service checkbox you want to import your tags to and uncheck any you don't want.
v586, win32, source
AttributeError: 'NoneType' object has no attribute 'GetAnimationBarStatus'
Traceback (most recent call last):
  File "D:\hydrus\hydrus\hydrus\client\gui\ClientGUI.py", line 8293, in REPEATINGUIUpdate
    window.TIMERUIUpdate()
  File "D:\hydrus\hydrus\hydrus\client\gui\canvas\ClientGUICanvasMedia.py", line 2652, in TIMERUIUpdate
    self._animation_bar.setGubbinsVisible( False )
  File "D:\hydrus\hydrus\hydrus\client\gui\canvas\ClientGUICanvasMedia.py", line 1230, in setGubbinsVisible
    self._DoAnimationStatusUpdate()
  File "D:\hydrus\hydrus\hydrus\client\gui\canvas\ClientGUICanvasMedia.py", line 911, in _DoAnimationStatusUpdate
    current_animation_bar_status = self._media_window.GetAnimationBarStatus()
AttributeError: 'NoneType' object has no attribute 'GetAnimationBarStatus'
The 'hydev is stupid' hits keep coming this week, as it seems I broke tags->manage tag display and search due to another dumb typo. It is fixed for source users now and will be for everyone in v587, sorry for the trouble! >>15917 Btw this turned out in large part to be a mis-profile. Although the db search was running slow in certain situations, the big 30-600 second delays we were seeing here were due to a super stupid busy-wait I accidentally had in the asynchronous caller, and since the tech by which it worked was a low-level C++ thing (afaik), it was holding up the CPU in a way the profile wasn't catching or recognising as actual work occurring in another thread. This was probably compounded by the thing actually being profiled itself also being outside the python GIL, in SQLite's C++ dll land. So, it wasn't so much a slow db search as a choked CPU core. Running this through PyCharm's debugger actually froze both the IDE and python's processes in a way that made it impossible to kill them via Task Manager or command line, and then it froze the whole computer! An interesting lesson learned--don't be casually foolish with threading.Event. >>15919 >>15920 Interesting. A 'downloadable/pursuable url' generally means 'go get this thing', but I guess if the import object already has a URL, it becomes 'associate these things', like a source url. I expect this is an accidental inheritance. I will investigate the logic here, but your situation is a little unusual, so I cannot promise a clean solution right now. It might be that even after I fix it, the child object still gets the tags and other soft metadata from the parent import object that spawned it, so my ultimate advice might also be that you need to make a whole separate subsidiary page parser or something here, so the import objects that are getting the tags are in a whole separate thing from the comment-URLs you are parsing.
As a general disclaimer, as I am sure you know by now: my downloader is not great at pulling URLs from, like, kemono post comments. It was designed for boorus, so clever stuff like this will trip it up. I'll see what I can do though! >>15921 Thank you! Should be fixed now, let me know if it gives any more trouble. >>15922 It is just a copy of your four database files and the client_files structure. Nothing too clever. Check the backup help as >>15925 says. As long as you have those db files and your client_files stored somewhere, you can recover. If you are moving your install, check this one too: https://hydrusnetwork.github.io/hydrus/database_migration.html Have a poke around the 'db' folder and see which part is which. You can't break anything by just having a look, and let me know if you have any questions about anything else.
>>15926 Thank you for this report. I think I have this fixed in v586 now, but there may still be a hole in the logic somewhere. Let me know how you get on! >>15927 Do you happen to have a copy of that error traceback? Should be in your log, if it made a popup in the main gui at any point. Normally dateparser is rock solid, so it is interesting it can't figure this out. dateparser is a super simple library with basically only one method. I just did a test in console here, and it looks like that main parse call will take 'locales = None'. I can add an option for this, and (if you know how to get into your venv of your source install) can you check that it works for you? - open a terminal to your install dir - source venv/bin/activate (or "CALL venv\Scripts\activate.bat" in Windows cmd) to activate the venv - type 'python' and hit enter to open the python terminal, then do: - import dateparser - dateparser.parse( '7/18/2023 8:32:00AM', locales = None ) - exit() Does that work ok for you? >>15931 Yeah I completely agree. I keep thinking I should add a date range and stuff to it. It is totally possible, I just need to get around to it. >>15932 Perfect answer. >>15933 >>15936 I am closer to this point, and I was recently able to add some new tag-scanning tech to the client database and things did not explode, so I am feeling more confident about finally flipping the lever on this 'scan the whole database for no-longer-used master records' system. Although I've felt pretty bad about my work, this has been a good year for cleaning bad old code and old systems. We are getting there, but there is still more to do. >>15934 Thanks, I am sorry, I know how jarring/annoying this is. I don't know for sure what is causing it but I will make sure I drill down and figure it out. I have some ideas. >>15937 Thanks for the update, and sorry for the frustration. Although Qt can be tempestuous with its updates, it has been getting much better on this stuff in recent years. 
I'll clean things on my end too; please keep me updated on how things go.
>>15989 >I am closer to this point, and I was recently able to add some new tag-scanning tech to the client database and things did not explode, so I am feeling more confident about finally flipping the lever on this 'scan the whole database for no-longer-used master records' system. This is good to hear. Thanks for all your work, Dev.
>>15940 I quite like this idea, and I've thought about some sort of 'soft/fuzzy' search for a while, but I don't know a huge amount about how those sorts of recommendation algorithms work. The 'related' tag suggestions system in 'manage tags' is neat (turn it on in options->tag suggestions if you don't see it), so I wonder if we could do something similar. Like if I made a catch-all tag predicate that held several tags, a bit like an OR tag, but it said 'weight all these tags according to namespace and then find any files that match any of this taglist with total search-match weight > x', that might do the job here. >>15941 Sorry for the trouble--that is not normal! Please hit up help->debug->profiling and either pastebin the result here in the thread or send the profile to me via email or whatever. You might also like to try pausing tags->sibling/parent sync and database->file maintenance and database->db maintenance->(deferred stuff) to see if things suddenly pop back to nice. Try pausing the work in 'normal time' first. That options page is good for improving the speed of the media viewer, and the general rule is you can push the numbers up a bit to improve performance if you are looking at giganto pngs and such. If you are getting slowdown doing other stuff though, it is probably my background maintenance code being super rude to your PTR store or similar. >>15954 >>15959 Yeah, check these two pages: https://hydrusnetwork.github.io/hydrus/database_migration.html https://hydrusnetwork.github.io/hydrus/launch_arguments.html#-d_db_dir_--db_dir_db_dir You can run multiple databases off the same install, be that the exe installer or the zip. You just launch the exe with that launch arg pointed at a new place and you are good. Check help->about once you launch to make sure all the paths are as expected.
You can have separate installs/extracts pointing to different locations if you need to, but it'll just add complexity unless you need to run different versions at once for whatever clever dev reason. Only thing you must not do is run one database with multiple installs at the same time! This shouldn't work on a local machine, but if you fuck around and set a db_dir that is on a network location (i.e. on another computer), then my 'hey this database is already in use mate' checks will not work and you'll run into Database Locking ClownTown, doubly so because you are over network I/O. >>15962 Yeah I think this is not possible yet. File URLs aren't really clever enough to talk to the tag import system, but I'll make a note to look at this sometime. If you are doing this via a subscription or a permanent watcher page or something, I recommend just setting some tag import options with forced 'additional tags' there. If you are really hard up for a solution, you could make a search for [ 'system:url: has url (file url class)', '-your_tag' ], and then load that up once a month and go ctrl+a->F3->add 'your_tag'. >>15976 Ah, thanks, I am not sure if I ever got to those when I was doing this work. I know I did not get 100% coverage, and particularly on awkward more human/english predicate texts. I will check it out, and I'll probably relabel all these tags to something more happily parseable like 'system:file service: is currently in my files'.
>>15978 >I think the cleanest way of doing it is editing the database itself. You simply replace all underscores with spaces in the table that has all the tags. Unfortunately, the database is more complicated these days, and you can't do just the one table any more. There are also issues with resolving/merging conflicts, if you have both 'the_tag' and 'the tag' in the tags master table. Broadly speaking, I strongly do not recommend manually editing the database on an IRL database. If you really want to do this, note that I have 'tags' and 'fts' sub-tables in client.caches.db that replicate the tag text data in the 'subtags' master table, so you would have to edit those too. And the 'local tags cache', now I think of it, and perhaps some other little corner somewhere. If you need to merge tag definitions, then you'd have to update all the autocomplete count caches, which is impossible to do with simple math since the display context applies merged sibling data, and it is simply too complicated to do in a few lines of SQLite or python or whatever. If you really got into this, or were doing it programmatically, I think I'd say, "Yes, you can directly edit the subtags table in client.master.db, but once done you need to run x, y, z database regen routines to let hydrus recalc cache stuff using proper code". It would probably be the mappings cache, tag text search cache, and local tags cache. Easier and better to just use siblings, even if you have to bodge part of the solution with the Client API perhaps. Ultimately, I think I should probably write a 'sibling-replace all underscored tags with their space-having variants' checkbox, and/or make that a 'hard-replace' option. After the success of the PTR janitors' new 'purge tags' system, I'll be bringing that to all users for local services and extending it with our first dedicated 'hard-replace' tech.
It should all work on very large lists of tags, PTR-scale operations, so this should become real in the mid-term future and not so difficult for me to add a 'do all underscore shit' mode to it. >>15981 >>15982 This v586 is supposed to eliminate the super long fetch delays we saw in v585. If you are still getting delays in v586, please try using the dialog with help->debug->profiling on, just as we did the week earlier as I worked on v586, and we'll do round two. If you are still on v585, please update--I made some bad decisions in that first draft that are now fixed. >>15987 Hey, I am sorry for the trouble. Did you happen to get the release within about twenty minutes of me making the post? I screwed up one damn line and it slipped through testing. The new links on >>15943 now point to a v586a hotfix that should fix that bug, so just redownload and install and I hope you'll be fixed. Let me know if you still have trouble!
>>15993 >Yeah, check these two pages Thanks for answering, and thanks to all the other people who answered! Hydev, could you answer 1b too please? Would be interesting to know. >>15994 >I strongly do not recommend manually editing the database on an IRL database Editing the databases isn't something i plan to do. I'm scared :o >I'll be bringing that to all users for local services and extending it with our first dedicated 'hard-replace' tech That's cool. That will have tech that can replace namespaces too, right? Like replacing the namespace in 'filename:08951abkoe53872kdfia84' with 'title:08951abkoe53872kdfia84'. This is something you can't really mass rename/replace right now afaik. Looking forward to it.
(3.61 MB 720x480 thin4.gif)

>>15978 >>15979 >>15980 >parents Yeah, it looks like the right tool; however, devanon has mentioned a few times the issue of circular dependencies and that Hydrus' logic is not so good with them. For example, the following works well, as every child is specific to its parent: - Parent: show:star trek - Children: ---> character:captain kirk ---> device:tricorder ---> species:klingon But what if things get a bit messy, with children having more than one parent and children also being parents in many other categories? Circular dependencies might pop up, and then the question would be: can Hydrus manage that? If not, could the DB get damaged because of the logic's inconsistency? So, I'm really hesitant to use parents beyond a couple of children deep, and I know for sure that my chain of children can go way beyond 10 layers deep.
>>15998 >But what about if things get a bit messy with children with more than one parent and children being also parents in many other categories? Children can have multiple parents just fine. For example 'character:gawr gura' can have 'series:hololive' and 'series:hololive english' as parents, it will simply add both tags when you add gawr gura. Or you could do a chain like 'character:gawr gura' > 'series:hololive english' > 'series:hololive', which is probably better as hololive english should also automatically add hololive, and if you add gawr gura, it will add both. Combining both methods is also not an issue, you'll just have an extra redundant entry. It really depends on how you do it, don't overthink it. >Circular dependencies might pop up, then the question would be, can Hydrus manage that, if not, could the DB get damaged because of logic's inconsistency? You'll get an error when you try to create a parent/child relationship that would cause a loop.
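That loop check is conceptually simple. Here is a sketch of the idea (illustrative only, not hydrus's real code): before adding a child->parent pair, walk up the ancestors of the proposed parent, and if you ever reach the child, the pair would close a loop.

```python
def would_create_loop(parents, child, new_parent):
    """parents maps tag -> set of its parent tags.

    Returns True if adding (child -> new_parent) would create a cycle,
    i.e. if child is already an ancestor of new_parent.
    """
    stack = [new_parent]
    seen = set()
    while stack:
        tag = stack.pop()
        if tag == child:
            return True
        if tag in seen:
            continue
        seen.add(tag)
        stack.extend(parents.get(tag, ()))
    return False

parents = {
    'character:gawr gura': {'series:hololive english'},
    'series:hololive english': {'series:hololive'},
}
# trying to make 'character:gawr gura' a parent of 'series:hololive'
# would close the chain into a loop:
would_create_loop(parents, 'series:hololive', 'character:gawr gura')  # True
```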
>>15998 You'd have to have some pretty fuckin weird tagging habits to end up with circular dependencies.
>>16000 >>16002 Thanks anons. I'm going to give it a thorough test.
>>15989
>Do you happen to have a copy of that error traceback? Should be in your log, if it made a popup in the main gui at any point
oh fuck me, sorry about this, i didn't word it correctly. what i'm trying to say is that it can't understand that date string in my locale. for example, i put that date in the 'single example string' textbox, then try to process it with the string converter by adding a 'datestring to timestamp (easy)' step. the client freezes, can't do it, and gives up with "ERROR: Could not apply "datestring to timestamp: automatic" to string "7/18/2023 8:32:00AM": Sorry, could not parse that date!" no error logs on my side.
>test in console
here's the thing: even without the locales parameter, it works fine for you. looking through the debugger i can see that it loaded the english language and parsed ok. so i tried setting 'languages' to only polish (['pl']); with that it didn't understand and returned nothing, even with locales set to None. only after i set languages to only english (['en']) did it work fine, again even with locales set to None.
>(if you know how to get into your venv of your source install) can you check that it works for you?
i tried plugging it into the __init__ file (line 71 i think) in the dateparser folder, nothing. plugging the languages trick in there also did nothing. i tried plugging it into the ClientTime file in the ParseData function, still nothing. so i said "fuck it" and plugged locale.setlocale(locale.LC_ALL, 'en_US') into it, and that worked fine. i guess that's the struggle when you live in a country with 24-hour clocks. if you need some tips on that front, because i couldn't debug it further myself: try setting your system's locale to a country with a 24-hour clock (like Poland, as i demo'd).
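If anyone else hits this while the dateparser handling is locale-sensitive, a format as rigid as the one in that error can be parsed with the stdlib alone, sidestepping the system locale entirely. A hedged sketch, assuming the source always uses this exact US-style layout (the AM/PM suffix is handled by hand because strptime's %p depends on the current locale's AM/PM strings, which can be empty, e.g. in pl):

```python
from datetime import datetime

def parse_us_datestring(s: str) -> datetime:
    """Parse e.g. '7/18/2023 8:32:00AM' without touching the locale.

    Assumes a fixed 'M/D/YYYY H:MM:SS(AM|PM)' layout -- this is a
    workaround sketch, not what hydrus's string converter does.
    """
    s = s.strip()
    meridiem = s[-2:].upper()          # 'AM' or 'PM', peeled off by hand
    dt = datetime.strptime(s[:-2].strip(), '%m/%d/%Y %I:%M:%S')
    if meridiem == 'PM' and dt.hour != 12:
        dt = dt.replace(hour=dt.hour + 12)
    elif meridiem == 'AM' and dt.hour == 12:
        dt = dt.replace(hour=0)        # 12 AM is midnight
    return dt

parse_us_datestring('7/18/2023 8:32:00AM')  # datetime(2023, 7, 18, 8, 32)
```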
need help. my client becomes unresponsive every time I click anything that's not an image. Though, when interacting with just images, everything works fine. I looked in the client logs, but it ends when the error starts, so there's nothing useful there. I generated a profile but I'm not sure how to use it. Btw, when I am in profile mode, everything slows to a crawl; even after turning it off, everything runs painfully slowly, thumbnails fail to load, and nothing shows up when searching (no autocomplete or results; it just says 'no searching done yet'). Funnily enough, the issue is present even with a fresh install of hydrus. I installed from source, and imported a random video to the new client. Just selecting the video made hydrus unresponsive. I can select a bunch of videos with cmd+a and change tags and such, but clicking a single video makes hydrus unresponsive every time. I'm running the latest version of hydrus, with mpv turned off at setup btw. I'm not sure what's causing it or where to start; it didn't happen when updating hydrus, or my system, so I'm stumped. Any help would be appreciated.
>>15982 586a, and it happens again and again. >>15994 > If you are still getting delays in v586, please try using the dialog with help->debug->profiling on, It's 586a. I'll try later. When I tried enabling it, not only did it slow everything down, but there were messages like "another profiling something is already running".
>>16010
>need help.
You're hardly going to get it when your request is flawed. You mention neither your OS nor your Hydrus version.
>I looked in the client logs, but it ends when the error starts, so there's nothing useful there.
Produce a screenshot or text for examination.
>I installed from source
I suspect you fucked the "Options" up. Yup.
When searching I don't have the automatic wildcard search any longer, and it only shows up when I press the "*" key. Any idea how to bring it back as it was before?
(71.96 KB 907x208 clientlogs.png)

>>16014 Sorry. I'm using MacOS, and the latest version of Hydrus. I was using version 578 when the issue arose, but since updated to 586 to see if that would fix anything, which it didn't. I should've mentioned that I've been using hydrus for a few months just fine and it's an issue that came out of seemingly nowhere. I used the recommended options, but I don't even know if Hydrus is the problem since the same issue happened on a fresh install. I'm asking here because I don't really know what to do, and someone else might have an idea.
>>16016 >I used the recommended options I'm not familiar with Mac but most likely you need to tweak those options. Go to Help/About to find out what library versions are installed and then reinstall the venv accordingly.
found a bug with the parents and siblings management windows. shrinking the window horizontally doesn't resize the text boxes and panels where the parents, children and siblings go, so the box on the right will just be cut out of the window entirely. it's supposed to be resized so that they're both visible in the window at all times.
>>16015
I think that behaviour was changed on purpose in version 582. Check these out:
Question
https://8chan.moe/t/res/14270.html#q15492
Answer from Hydev
https://8chan.moe/t/res/14270.html#q15536
Update notes
https://hydrusnetwork.github.io/hydrus/changelog.html#advanced_autocomplete_logic_fixes
Guess that's it?
>>16019 Welcome newfriend. There is no need for full links within a board, or even between boards. A regular post reply link will do just fine, and a slightly modified one will work between differing boards. >>15492 >>>/t/15536
>>16020 I asked myself that question when i made the post. But didn't wanna try and fail :P thx Are normal users able to edit posts btw?
>>16021 No, but you can freely delete your own post and remake it. If you want to do so across browser sessions, you can edit and save the auto-filled password field under the More button.
>>16022 >>16023 Nice. Thanks i will take a look at it later.
I had an ok week. I was not able to finish the list rewrite I had planned, but I did clear a mix of small jobs. The release should be as normal tomorrow.
>>16019 Thanks anon.
>>15753 >adding a 'paste-to subscriptions' button A fantastic idea. On the rare occasions where we add a new subscription, this would save a lot of time.
Sometimes I need to find a file by url quickly, and it takes over five steps to do it. Also, I think it would make sense to search by a page url from "manage urls".
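In the meantime, if you have the Client API turned on, the lookup can be a single call. As I recall the relevant endpoint is GET /add_urls/get_url_files, but verify that against the Client API docs for your version; this sketch only builds the request:

```python
from urllib.parse import urlencode

def url_lookup_request(api_base: str, file_url: str, access_key: str):
    """Build a Client API 'which files have this URL' request.

    The endpoint name is from memory of the Client API docs -- check
    help->client api documentation for your version before relying on it.
    """
    query = urlencode({'url': file_url})
    url = f'{api_base}/add_urls/get_url_files?{query}'
    headers = {'Hydrus-Client-API-Access-Key': access_key}
    return url, headers

url, headers = url_lookup_request(
    'http://127.0.0.1:45869', 'https://example.com/post/123', 'MY_KEY')
# then e.g. requests.get(url, headers=headers)
```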
>>15721 Is there a way to use hydrus on an android device?
I've been getting really long (going on 25 minutes now) waits on adding parents or siblings dialogs since updating to 586a. Gentoo Linux running from source. I've been killing the program because it's frozen for like 5 minutes and my patience wears thin. Nothing in the logs, but tailing the profile does show actions happening. I'll upload the profile when I get tired of waiting or it unfreezes. Looking through the profile I don't see any sensitive info but I don't really get most of it, is there any info I should censor?
>>16029
hydrus.app can do just about everything
lolisnatcher can view Hydrus
Both of these require Hydrus running on your desktop; there's currently no way to run the db from Android.
Alright yeah I got tired of waiting, enjoy 45 minutes of profiling.
https://www.youtube.com/watch?v=czrmBIHANV4 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v587/Hydrus.Network.587.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v587/Hydrus.Network.587.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v587/Hydrus.Network.587.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v587/Hydrus.Network.587.-.Linux.-.Executable.tar.zst I had an ok week. I didn't have time to finish my big list rewrite, so I'm just rolling out some little jobs today. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights I made another stupid typo last week, breaking the tags->manage tag display and search dialog! Fixed now, sorry for the trouble. The top-right media viewer hover now shows all the local file services a file is in. For most users this will just be 'my files', but if you use multiple file services, I hope this will be a bit cleaner than the spammy labels I removed from the top hover the other week. I think I'm ultimately going to make these into buttons or add checkboxes or something so we can have one-click local service migrations in future. I cleaned up the code that handles the two resizable splitters/sashes that separate the normal media page sidebar from the preview canvas and the main thumb grid. There was some ugly stuff in there, and I think I have fixed some odd layout problems certain window managers had. That said, this stuff can be temperamental, so if you are on a weird OS and your pages suddenly lay out crazy, please roll back to your v586 backup and let me know. next week I will finish this list rewrite. I have all the code pretty much done, and I feel good about it, but I need to do a ton of testing and polish. It should let us view huge lists with far less UI lag. >>16033 Thanks, sorry for the trouble. I will check this out next week.
So... Any plans for making clip files viewable?
(175.17 KB 794x1005 twilight - confused.png)

>>16037 >viewable Huh?
>>16039 U kno... So you can see the whole image in the viewer and not just the thumbnail
(331.78 KB 912x659 Screenshot_20240821_212354.png)

>>16040 Double click on it, then use the zoom in and zoom out icons.
>>16041 I mean for .clip files. They are only supported as thumbnails currently. I'd like to be able to see my work within hydrus without having to open clip studio paint, and without having to save everything as a .psd
(219.26 KB 900x805 4895267489526.png)

>>16042
>They are only supported as thumbnails currently
There's your answer, then. The format is not supported.
>>16043 You have terrible reading comprehension. My first post was literally asking if there were plans to support that file type
>>16044 >clip files Not the same as .clip files, fren.
>>16045 What is a clip file?
>>16046 A cut or section of a video file as the result of an editing operation.
>>16020 really? even regular old 4chan will automatically convert a full url into a regular post reply link when you click post. i thought these spinoff boards were supposed to be more feature-rich...
(74.48 KB 1280x720 78456856.jpg)

>>16050 KEK This thread looks more like /b/ everyday.
>>16050 >Removing natural newfag filters is a feature On cuckchan maybe. Here, that sounds like a bug.
(46.35 KB 200x200 columbo.png)

>>16050 >>16021 >unabashed newfaggotry Don't see that very often these days.
>>16055 but how would something like that even work? how would you tell hydrus which "object" you're tagging? and what if you're not tagging an object, but something more abstract, like a genre, or a medium?
>>16056 yeah that's a good point, i don't know what the ux would be like but i think object instances could have ids behind the scenes for disambiguation. let's say a file has two person objects, internally the objects would look something like this. >person: { tags: [red_hair, brown_eyes], id:1 } >person: { tags: [blonde_hair, green_eyes], id:2 } that's not what you'd have to type but basically just the logic of how they'd be represented internally. like right now it's "file has tags", but it'd be cool if it could be "file has tags and/or object(s) with tags". also the ids would be per object per file since they're just a way to distinguish different instances of objects in a file. for genres and mediums, i think those are already covered by namespaces.
>>16055 reposting to clarify it'd be cool if tags could also apply to objects instead of just files. let's say i'm looking for files with any person who has both red hair and green eyes. i can search for the tags red hair and green eyes, but they could apply to different people (file has a person with red hair and brown eyes and a different person with blonde hair and green eyes). unless i'm mistaken i don't think right now there's a way to search for only files containing any person with both red hair and green eyes. this isn't a feature request, more like a feature daydream. i'd use the feature if it existed but i assume it'd be a lot of work to implement and i'm sure there's a ton of other stuff to work on that's more important/useful. i just think it'd be cool. the idea's probably come up before but i'm new to the threads.
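To make the difference concrete, here's a toy model of the two search semantics (pure illustration, nothing like hydrus's actual data model): flat search matches a file if the wanted tags appear anywhere in it, object search only if a single object carries them all.

```python
def flat_search(files, wanted):
    # file matches if the union of all its objects' tags covers 'wanted'
    return [f for f in files
            if wanted <= set().union(*f['objects'])]

def object_search(files, wanted):
    # file matches only if one single object carries every wanted tag
    return [f for f in files
            if any(wanted <= obj for obj in f['objects'])]

files = [
    {'name': 'a', 'objects': [{'red hair', 'brown eyes'},
                              {'blonde hair', 'green eyes'}]},
    {'name': 'b', 'objects': [{'red hair', 'green eyes'}]},
]
wanted = {'red hair', 'green eyes'}
# flat_search matches both files ('a' is the false positive);
# object_search matches only file 'b'
```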
>>16057
>but it'd be cool if it could be "file has tags and/or object(s) with tags"
It would be pretty amazing to be able to define subjects this way. When you're looking for a single character with a combination of traits, regardless of whether they're the only character in an image, you could actually do that, instead of how boorus work now, where there's essentially a lot of "false positives" or "noise" or what have you that interferes with your search and prevents you from searching for characters with multiple traits effectively, unless you limit yourself to images tagged "solo". But this sounds pretty unfeasible for the current structure of Hydrus and would introduce so many new logic problems that would have to be solved for it to function with other features of Hydrus. It would be a massive overhaul of the whole system that I can't imagine happening for a very long time, if ever. In the meantime, to simulate this, the most I've personally done is create combination tags, usually with the two combined tags as parents, for certain commonly looked-for combinations of traits that would otherwise be hard to find. Things like "fit female" (sex:female + body:muscle), "fit male" (sex:male + body:muscle), "futaloli" (sex:futanari + body:loli), "shortstack" (body:large breasts + body:wide hips + body:short), et cetera.
>>16057 >>16058 I see, I get it now. personally what I've been doing is basically what >>16059 does. If there's a case where I want to tag that a specific character has some specific trait, then I essentially tag the file with a "compound" tag that associates some notable property of the character with the trait. this is just a complicated way to say that I frequently add tags like "male sitting" or "female with missing eyes" or "embarrassed tomboy" and stuff like that. It works surprisingly well, because most of the time, if there are multiple characters in an image, there will be some difference between them that I can use to tag them separately. the main downside is that you have to add all of the proper parents for each of these specific "compound tags" that you create, but you only have to do it once for each, so it's not as bad as it might sound. ideally, if there were a way to add relationships (parents and siblings) to groups of tags all at once, this problem would be essentially solved in a way that's compatible with how Hydrus already works, but Dev seems very hesitant to add new features to the relationship system due to it being complex, so I wouldn't hold my breath on that being added.
>>16060 >but you only have to do it once for each, so it's not as bad as it might sound. The issue is, with this method, being thorough is unfeasible because there's too many possible combinations. The object oriented method anon described makes those combinations naturally part of a search, just like with regular tagging, which raises the limit on the number of tags in one theoretical compound tag to as much as you want. >if there were a way to add relationships (parents and siblings) to groups of tags all at once You can already do that. Have you not used the parent/sibling management windows? You can add dozens of tags to a single parent at once, or dozens of parents to a single tag, or create a web over multiple parents and children, though I don't see any good use cases for the latter. You can also highlight multiple tags anywhere else, right click them, and select "add parents" for groups of tags you already have pulled up that you don't want to re-enter in the parent management window.
>>16062 >You can already do that I don't mean group as in "a set of tags that I type in" I mean "a category of tags that I define now and works for all tags now and in the future". there's no way to define relationships like that. >being thorough is unfeasible because there's too many possible combinations this is why adding parents to groups (or you could call them categories or sets to make it more clear) would help. you would tag the "sets" then any tags in those sets automatically get all the appropriate relationships. this is the simplest feature I can think of that would solve the issue, since it wouldn't change the definition of what a "tag" means like adding objects to them would. it would just expand what kinds of relationships you can make. I agree that being able to tag objects would probably be more intuitive for the user than what I'm describing, but I have no idea how something like that would even look as far as UI and management goes. for each file, you'd have to remember which "object" (in the file) is bound to which object (in the db) to make sure that you're not accidentally adding the wrong tags to a character. I just don't know how something like that could work without being so advanced that no one would ever use it. although to be clear, I doubt either of these will ever be implemented. the idea I'm talking about adds to the relationship system in a big way, and the object idea would be a fundamental change to tags, which is the most basic and core feature of hydrus, and kind of the entire point of hydrus to begin with. I'll live with that though. what I'm doing now has a lot of busy-work unfortunately, but it does work right now. >being thorough is unfeasible oh right, and to be clear, it is a lot of work, but it's not truly "unfeasible" because I'm doing it now... and it works. 
it took a while to "rig up" all the relationships, and unless you really care about searching precisely (remember that tags are for searching) I wouldn't recommend it, but now I can basically tag who's doing what to whom and who has what hair or what height or skin tone while they're doing it, and then 30 parents get added at once. /ramble
Hi! Has somebody run into this issue?
v572, 2024-08-22 19:59:00: shutdown error
v572, 2024-08-22 19:59:00: A serious error occurred while trying to exit the program. Its traceback may be shown next. It should have also been written to client.log. You may need to quit the program from task manager.
v572, 2024-08-22 19:59:01: shutdown error
v572, 2024-08-22 19:59:01: Traceback (most recent call last):
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/ClientController.py", line 2128, in ShutdownView
    self.DoIdleShutdownWork()
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/ClientController.py", line 768, in DoIdleShutdownWork
    self.MaintainDB( maintenance_mode = HC.MAINTENANCE_SHUTDOWN, stop_time = stop_time )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/ClientController.py", line 1428, in MaintainDB
    self.WriteSynchronous( 'maintain_similar_files_tree', maintenance_mode = maintenance_mode, stop_time = tree_stop_time )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/core/HydrusController.py", line 982, in WriteSynchronous
    return self._Write( action, True, *args, **kwargs )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/core/HydrusController.py", line 244, in _Write
    result = self.db.Write( action, synchronous, *args, **kwargs )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/core/HydrusDB.py", line 956, in Write
    if synchronous: return job.GetResult()
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/core/HydrusData.py", line 1387, in GetResult
    raise e
hydrus.core.HydrusExceptions.DBException: error: unpack requires a buffer of 8 bytes
Database Traceback (most recent call last):
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/core/HydrusDB.py", line 619, in _ProcessJob
    result = self._Write( action, *args, **kwargs )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/db/ClientDB.py", line 10718, in _Write
    elif action == 'maintain_similar_files_tree': self.modules_similar_files.MaintainTree( *args, **kwargs )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/db/ClientDBSimilarFiles.py", line 707, in MaintainTree
    self._RegenerateBranch( job_status, biggest_perceptual_hash_id )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/db/ClientDBSimilarFiles.py", line 455, in _RegenerateBranch
    ( new_perceptual_hash_id, new_perceptual_hash ) = self._PopBestRootNode( useful_nodes )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/db/ClientDBSimilarFiles.py", line 345, in _PopBestRootNode
    views = sorted( ( HydrusData.Get64BitHammingDistance( v_perceptual_hash, s_perceptual_hash ) for ( s_id, s_perceptual_hash ) in sample if v_id != s_id ) )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/client/db/ClientDBSimilarFiles.py", line 345, in <genexpr>
    views = sorted( ( HydrusData.Get64BitHammingDistance( v_perceptual_hash, s_perceptual_hash ) for ( s_id, s_perceptual_hash ) in sample if v_id != s_id ) )
  File "/nix/store/jzbwvkhsw4izd1q9yj9sk5n07hx4iq63-python3.11-hydrus-572/lib/python3.11/site-packages/hydrus/core/HydrusData.py", line 392, in Get64BitHammingDistance
    return bin( struct.unpack( '!Q', perceptual_hash1 )[0] ^ struct.unpack( '!Q', perceptual_hash2 )[0] ).count( '1' )
struct.error: unpack requires a buffer of 8 bytes
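For what it's worth, the function at the bottom of that traceback is quoted in it: a 64-bit hamming distance over 8-byte perceptual hashes. The error means one of the stored perceptual hashes isn't 8 bytes, i.e. a truncated/corrupt row in the similar-files data, as this minimal reproduction shows:

```python
import struct

def hamming64(h1: bytes, h2: bytes) -> int:
    # same expression as HydrusData.Get64BitHammingDistance in the traceback:
    # unpack both 8-byte hashes as big-endian uint64, XOR, count the set bits
    return bin(struct.unpack('!Q', h1)[0] ^ struct.unpack('!Q', h2)[0]).count('1')

hamming64(b'\x00' * 8, b'\xff' * 8)  # 64 -- all bits differ

# a truncated hash reproduces the shutdown error:
# hamming64(b'\x00' * 7, b'\xff' * 8)
# struct.error: unpack requires a buffer of 8 bytes
```

If that's what happened, regenerating the similar-files metadata (iirc there is an option for the search tree under the database->regenerate menu) may clear it, but wait for hydev to confirm.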
>>16064
Not that I can recall, but
>v572
I wouldn't remember that many versions back. v572 was released in April
The below discussion made me notice a simple feature request I'd like. Can the display window for currently searched tags get one of those "mouse hover and drag" borders at the bottom? I'm sure it can probably be adjusted somewhere else, but this would be more intuitive for when you're searching lots of tags at once and want to see them all. Second pic related.
>>16063
>"a category of tags that I define now and works for all tags now and in the future". there's no way to define relationships like that.
The closest I can think of is making all tags in a namespace have the same parent. Like, all pokemon:* tags would have series/IP/whatever:pokemon as their parent. But I don't think that's a good fit for compound tags, which can easily encompass multiple namespaces.
>this is why adding parents to groups (or you could call them categories or sets to make it more clear) would help. you would tag the "sets" then any tags in those sets automatically get all the appropriate relationships.
I don't understand how you're defining this as different from what I described. All you're doing is adding another parent in the chain, essentially, from what I can tell.
>Group of tags -> set -> parent
How is this "set" different from a parent itself?
>I agree that being able to tag objects would probably be more intuitive for the user than what I'm describing, but I have no idea how something like that would even look as far as UI and management goes. for each file, you'd have to remember which "object" (in the file) is bound to which object (in the db) to make sure that you're not accidentally adding the wrong tags to a character. I just don't know how something like that could work without being so advanced that no one would ever use it.
The way I imagine it is simple. For tagging new files without tags, add buttons and/or shortcuts to add or delete objects.
In the list of tags for a file in the tag manager, each object would have one line just like a tag, and would have tags beneath it with a slight indentation indicating they belong to that object. To add tags to the object, simply click it in the list of tags. Adding a new object should automatically select that object. As long as the object is selected, any tags entered would apply to it, instead of just generally to the file. Objects would be placed either at the beginning or end of the tag list with a sort toggle.
For viewing tags outside the tag manager, you would be able to toggle whether objects are displayed or all tags are displayed normally. Also, objects should be collapsible, just like sibling and parent tag displays.
>you'd have to remember which "object" (in the file) is bound to which object (in the db) to make sure that you're not accidentally adding the wrong tags to a character.
Easy, just add the character's tag to the object. If it's an unknown or nameless character, there's really no workaround. Just pay attention to what tags are already in an object as you enter new ones so you don't start slapping them on the wrong object.
For searching, firstly, similar to parent/child relationships, any tag within an object would apply regularly to the file as well. So if an object is tagged as something, the file is also effectively tagged the same. You could add a button to enter object search mode, or a new key shortcut, similar to how you can for creating an OR tag search. In object search mode, the object being searched will display in the tag search window just like in the tag manager, taking up one line, with tags applied to it below it and slightly indented. All tags entered apply to the object until you leave the mode, either with the button, the key shortcut, or clicking outside the object and its tags within the search window.
Using the hotkey or button again will add another object to the search, and clicking on the object or any of its tags in the search window re-enters object mode for that object. I think all this would be very intuitive, but the issue is the background logic, the actual implementation of such a system, might be too difficult. I don't think it should interfere greatly with processing times, but I'm no dev. It would definitely be a massive upgrade to tagging structure that would be one more thing putting Hydrus lightyears ahead of any booru. I wouldn't be surprised if boorus adopted such a system, but on further thought, this thread can't be the first place to have thought of this, so maybe it's not practical?
>>16066 >For tagging new files without tags, add buttons and/or shortcuts to add or delete objects. Scratch that. Only an adding button is needed at most. Deletion is just like tags. Just double click.
>>15988
>Thank you! Should be fixed now, let me know if it gives any more trouble.
Works perfectly now as far as i can tell on my end, good job!
>>15996
>Thanks for answering, also all other people that answered! Hydev, could you answer 1b too please, would be interesting to know.
Hydev, i wanna add another question regarding this topic, 1c: When you install the .exe, on the second installation page 'select components' there is a drop-down menu that lets you choose 'install' or 'extract only'. Do i understand correctly that 'extract only' does the same as the 'Extract.only.zip'? So i don't need the .zip if i use it for an external drive, for instance, and can keep only the .exe and save an incredible ~300mb (since i keep the latest versions of the programs i use)? Afaik the only difference between installing and extracting is that install overwrites everything it needs to overwrite, adds new files and also deletes unused old files, whereas extraction overwrites/adds too, but doesn't delete any files, even unused ones. So in rare cases a 'clean install' is needed. That's basically it?
Is there any way to edit the gallery-dl config file for hydownloader? Like the part where you can add extra sites (picrel)?
>>15996
>1b. Can the .exe and the .zip versions update each other? Means if i have an .exe install, can i extract the .zip over it and vice versa? For example if i have the .zip version on an external hdd/sdd (E:) and i chose to move it to my main disc on C:, can i then update this version with the .exe and if so, would it have any implications for the update process? For instance, a 'clean install' was necessary for some .zip versions in the past. If i would start updating the .exe over the .zip , is that already a clean install like the guide is suggesting for the .exe version?
Yeah, should be fine. The installer (which is InnoSetup, if that's helpful) basically just does most of a 'clean install' and then does some Windows system stuff to A) add the shortcuts to the start menu and B) add some stuff to the registry for uninstall purposes. It is all very simple, as far as these things go (mostly since I am no expert in that stuff), and does not affect the running of hydrus itself in any way. Worst case is Windows might get confused about your actual install dir, or an uninstall might miss some files to clean up. The hydrus exe never checks your registry or AppData folder or any of that shit. It is always running in 'portable mode', so if the install dir looks like an install dir, it'll run.
The actual install script: https://github.com/hydrusnetwork/hydrus/blob/master/static/build_files/windows/InnoSetup.iss
>That will have tech that can replace namespaces too right?
Yeah. I think I've given up on the idea of a soft virtualised 'namespace sibling'. The logic would be possible but almost certainly a gigantic pain. We'll see if hard-replace covers most of the situations we care about. The PTR is awaiting a huge 'artist:' -> 'creator:' migration in a similar way.
>>16004 >>15998
For my part:
A) don't worry too much about loops, my code is very strict about such things and generally won't allow you to enter one as a human. 
B) if a loop does get in to the db (mostly this means legacy data from the PTR, back from the days when it was easier to add loops), the database breaks the loop pseudorandomly, so while I'm confident it is robust, it unfortunately can give some dumb results (e.g. giving 'tricorder' to every 'star trek') if fucked with Thankfully, the sibling and parents systems are now perfectly 'virtual', so if you do get any errors, you can go into the siblings/parents dialog and fix stuff. The very recent asynchronous work I did on these dialogs exposes bad/loop pairs much better and helps you to break them manually to fix these old issues. I can't promise these systems are always amazing though. The logic has knocked me about for years now. Every time I think they are simple somewhere, there's a new set of headaches to deal with. Let me know if you run into any crazy slowdowns or miscounts.
>>16005 Thanks, I was able to reproduce the error from your post here and I think I have it fixed. I misunderstood exactly what was going on here. The new system in v587 basically goes 'try it in local locale, if that fails try it in english', assuming that the non-locale fallback is going to be english 99% of the time. I hope this covers most error situations, but let me know otherwise. Maybe japanese would be a good second fallback, although my guess is their timestamps are a subset of 'en'. This was a surprise btw, I thought dateparser worked on anything, but it caring about locale is odd. You can say hace 3 horas or whatever and it'll parse in 'en', so I guess it is a bit mixed. Anyway, let me know if you still have any problems here. >>16010 >>16016 Sorry for the trouble. I know macOS has had some pretty weird 100% CPU issues when trying to position certain elements in the media viewer before. This has usually been related to (historically) fullscreen borderless modes or, more importantly, mpv embeds. When you say you have 'mpv turned off', does that mean that under options->media, your animations/video/audio are set to 'native viewer' or 'open externally button'? If your source macOS install has somehow discovered an mpv .so file to use, I suspect it is trying to load by default and you are running into the current state of mpv dev in macOS, which is: I am afraid it is broken 100% CPU. If you are definitely set to not use mpv for any media, but you are still getting 100% CPU on a non-image click, then yeah I think let's check the profile logs. After you generate a profile, go file->open->database directory, and it should be there, a .log file. You can pastebin it here or zip it up and email it to me. Give it a look, but it shouldn't have any identifying information. If it is hundreds of MB, then see if you can cut out any repeated section and just paste that. In any case, let me know how you get on. 
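For reference, the 'try the local convention first, then fall back to english' idea above can be sketched with nothing but the stdlib. dateparser does something much richer than this; this is just the shape of the fallback, and the format strings here are invented examples:

```python
from datetime import datetime

# Candidate formats, tried in order: first whatever the local
# convention expects, then a hardcoded en_US-style fallback.
FALLBACK_FORMATS = [
    "%d.%m.%Y %H:%M:%S",    # e.g. a European-style locale
    "%m/%d/%Y %I:%M:%S%p",  # en_US style: 7/18/2023 8:32:00AM
]

def parse_timestamp(text):
    """Try each candidate format until one parses, else raise."""
    for fmt in FALLBACK_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError(f"could not parse {text!r}")
```

Both '7/18/2023 8:32:00AM' and '18.07.2023 08:32:00' come out as the same datetime with this list.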
>>16018 I am afraid I cannot reproduce this--when I try to resize, it is a little jank but it does scale things down. Can you say: A) which OS you are, B) which options->style style and stylesheet you are? >>16028 If the URL is already on a file, does pic related do it? Or are we talking file urls that might not be in this menu list? Adding this function to the manage urls dialog is a good idea though, so I'll add that. >>16029 As >>16031 says, there are some third-party apps that can talk to the hydrus 'Client API' to wrap your personal PC's collection in a web booru layer, and some are very good, but I doubt hydrus will ever run natively on android, I'm afraid. Even if a team of phone-competent programmers took up the work, it would be too much work, like it might need a complete UI overhaul since I don't think Qt will run on android, and the kind of stuff hydrus wants to do is probably more than a phone wants to give permission for or can generally handle. Can you even invoke ffmpeg on a phone? Hydrus is a PC program.
>>16076 >I don't think Qt will run on android https://doc.qt.io/qt-6/android.html
>>16033 Thank you, this was useful. I don't see anything super horrible in database terms, I'm sorry to say, but I do see that some maintenance jobs are all clustered up and causing what looks like a traffic jam. Please try turning off: - duplicates page->preparation->cog icon->search for potential duplicates... - tags->sibling/parent sync->do work in normal time (although note this will stop quick recalc of parents, which I imagine you want) Those two seem to be the biggest problems here, although there's something else I can't identify that's causing the actual UI lag. Maybe it is some UI-update reporting after these jobs are done. I suspect it is potential duplicates mostly knocking you about, but let me know what happens when you turn both those off. If things are still garbage, might be worth doing another profile. I do not see a 25 minute delay, or anything beyond 4 seconds, so unless there's some very subtle pile-up or deadlock going on here, I don't think we captured it this round. >>16037 Probably not, I'm afraid. I'm subject to whatever simple/popular libraries can support, so if PIL/Pillow can read it, I can show it, but otherwise we are hacking some bullshit. I think for .clip files we do something where we read the file itself (it is secretly a .zip) and then extract a .png preview either as a raw file, or maybe we extract it from an internal sqlite file inside the zip. So we are cheating. If there's a package on pypi that can read clips natively to a raw bmp, I can probably figure out an answer here, but these 'rich' application formats that have multiple layers and all sorts of vector effects and things are probably just too complicated for us to show 'properly'. At least for now! >>16056 >>16057 >>16058 >>16059 >>16060 >>16062 >>16063 My general thoughts on this are: it sounds like a neat idea, but I think the technical requirements and the endpoint workflow make it not worth it. 
Others have thought of this, with different solutions like coordinates for tags (like booru translation boxes), or sibling/parent-like tag relationships, or nested namespaces in some sadpanda male:penis sort of way, and ultimately, in the end-state, I think it means thousands of hours of extra programming and tagging work to shrink a results set from 17 files to 3. It is easier just to apply your human eyes at this level. For common situations, your 'male sitting' answers are the way, I think. Often mixed with parents. Maybe in future, if we end up with models that can auto-tag in essentially zero time, we could explore richer tagging metadata, but for now I'm at my limit of capability with siblings and parents. I won't try anything more complicated, and I have no idea what the UI workflow for this sort of stuff would be. How would you enter a search phrase for this stuff simply and quickly, and what would the UI look like? There is a (tempting) danger in autistic navel-gazing, in a project like hydrus, and we are wise to shake ourselves out of it. Don't try to create a utopian mind palace, just try to add some simple tags that apply to your real-world problems. The two master rules of not going crazy: 1) Tags are for searching, not describing 2) Only tag what you personally search for
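Going back to the .clip trick a couple of replies up: a toy sketch of pulling a preview out of an embedded SQLite blob might look like this. The scan-for-the-SQLite-magic-bytes approach is the general technique; the CanvasPreview/ImageData table and column names are assumptions for illustration, not confirmed hydrus code:

```python
import os
import sqlite3
import tempfile

SQLITE_MAGIC = b"SQLite format 3\x00"

def extract_clip_preview(path):
    """Scan a file for an embedded SQLite database and pull out a
    preview image blob, or return None if nothing is found."""
    with open(path, "rb") as f:
        data = f.read()
    offset = data.find(SQLITE_MAGIC)
    if offset < 0:
        return None
    # sqlite3 can't open from bytes, so dump the embedded db to a temp file
    with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp:
        tmp.write(data[offset:])
        tmp_path = tmp.name
    try:
        conn = sqlite3.connect(tmp_path)
        row = conn.execute("SELECT ImageData FROM CanvasPreview").fetchone()
        conn.close()
        return row[0] if row else None
    finally:
        os.unlink(tmp_path)
```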
>>16077 Sorry, I meant 'our weird Qt situation'. I'm on python Qt, not the normal C++ Qt, which, if it is possible to run on android, is I'm sure an absolute nightmare. And then I do some weird shit that may not be supported by whatever Android and Android-Qt allows. Trying to mishmash all our bullshit into their Java wrapper is probably not a recipe for success, not to mention reworking all my duct-tape PC-centric code to phone-acceptable stuff, so you'd probably be looking at a large rewrite anyway even without the python/C++ issues. Bottom line is we absolutely cannot port the hydrus codebase to an android Qt environment and expect anything to work without a ton of work, and, most importantly, expertise from phone developers. If you were going to do it, you might as well rewrite the whole thing in Electron or something. Native android hydrus from me is a technical no-go. >>16064 Thank you for this report. You somehow got a bad phash in your database. I will see if I can write some better error handling to recover from this situation. You might like to check 'help my db is broke.txt' in the install_dir/db directory. I am not saying definitively that your database is broke, but you might like to just run whatever the Linux version of chkdsk is, and crystaldiskinfo, to make sure your hard drives are healthy. This bad hash might be the result of a hard drive blip. It would be worth running the 'pragma integrity_check' thing on your client.db as well--just check the document. >>16069 >When you install the .exe, on the second installation page 'select components' there is a drop down menu that lets you choose 'install' or 'extract only'. Do i understand correctly, that 'extract only' does the same as the 'Extract.only.zip'? So i don't need the .zip if i use it for an external drive for instance and can only keep the .exe and save some incredible ~300mb (since i keep the latest versions of programs i use)? 
Afaik the only difference between installing and extracting is that install overwrites everything it needs to overwrite, adds new files and also deletes unused old files, whereas extraction overwrites/adds too, but doesn't delete any files, even if unused. So in rare cases a 'clean install' is needed. That's basically it? I think you are basically exactly correct. The only thing is I think the pseudo-clean-install, which is the 'InstallDelete' section here https://github.com/hydrusnetwork/hydrus/blob/master/static/build_files/windows/InnoSetup.iss happens in the 'extract only' case too. I'm not sure if there is a way to turn this off, so I guess 'extract only', which does not do the two 'desktopicons' and 'programgroupicons' 'Tasks' and does not set up Uninstall info, really means 'install but do not register with OS'. Anyway, for your purposes yeah you can treat it as the same as the zip. It does what you want.
Minor UI annoyance, but there ought to be "all my files" and "all local files" buttons under the pages navbar menu: pages -> new file search page -> all my files (not present) Like there are under the new page picker popup: pages -> pick a new page... -> popup [ file search -> all my files ]
Is it possible to add support for catbox collections to the regular url import page? They easily work if you use the "download all files linked by images in page" setting in the simple downloader but don't work on the normal url downloader page. I only have one catbox collection link on hand and it's NSFW WIP loli so idk if you want to use it for tests or not but it's here if you do: https://catbox.moe/c/5bub9c I think you need an account to make collections and I can't be bothered to do that.
>>16078 I think the freeze has something to do with sibling/parent sync in normal time, I had it off for a while and no freezes. I turned it back on and it just happened again. I wasn't profiling so I don't have any data this time.
>>16083 Oh I should mention, it's specifically in the "choose a reason for this parent/sibling" dialogue
>>16085 Yup, definitely related to the "choose a reason" PTR dialog box. I also noticed that when I sigterm the initial process it cleans up the db just fine but the UI doesn't fully go away, I have to sigkill the process again for it to leave properly.
Is there a fast way to download an image from Twitter together with author tag and post url?
Is there an easy way to export tags with the spaces replaced with underscores? I want to upload a bunch of files to a booru
(264.75 KB 680x794 furry bait.png)

I will remove "lore:trans (lore)" tags from any PTR files that are merely dickgirls, and no amount of wishful thinking can stop me. >>16088 gallery-dl, as always, though it needs an account for NSFW and a configuration file if you want the author tag and post url. I believe hydownloader has integration for gallery-dl metadata files. (it's not a fast configuration) There are also shitter downloader plugins that add the author tag and post ID to the filename.
>>16078 Speaking of tagging daydreaming, would it be possible to append variables to every tag per image (defaulting to none)? I'm thinking "weight" and "confidence" floating-point variables. Default to 1.0, but a float from 0.0 to INF carried along with a tag when specified. 0.5 means it's weak, 2.0 means it's particularly strong, with no real intended "scale". This'd be more useful for the AI-classified future, but I've already wanted to use something similar to find images that are really strong representations of a tag. AI slop is already capable of inferring the "strength" of certain tags, and there's no particular reason why perceived weight (or at least confidence) couldn't also be carried by a database in the near future. Classifiers tell you their confidence already, it's just a matter of noting it down. It'd be handy to search for only confident detections, or to manually check low confidence ratings within the database itself. I don't know whether it'd be better to do this within Hydrus itself or yet another sister application. The real issue would be displaying these.
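For what it's worth, the schema change this implies is small: just an extra float column or two on the mappings table. A toy sqlite sketch, with all table/column names and values invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE mappings (
        file_id INTEGER NOT NULL,
        tag_id INTEGER NOT NULL,
        confidence REAL NOT NULL DEFAULT 1.0,
        weight REAL NOT NULL DEFAULT 1.0,
        PRIMARY KEY (file_id, tag_id)
    )"""
)
# three files carrying the same hypothetical tag id (42 = 'skirt')
conn.executemany(
    "INSERT INTO mappings VALUES (?, ?, ?, ?)",
    [(1, 42, 0.95, 1.0), (2, 42, 0.40, 1.0), (3, 42, 0.85, 2.0)],
)
# 'find me all the "skirt" with confidence > 0.8'
confident_files = [
    file_id
    for (file_id,) in conn.execute(
        "SELECT file_id FROM mappings"
        " WHERE tag_id = ? AND confidence > ? ORDER BY file_id",
        (42, 0.8),
    )
]
```

Here `confident_files` would come back as files 1 and 3, with the low-confidence detection filtered out.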
>>16076 >Can you say: A) which OS you are, B) which options->style style and stylesheet you are? I'm Fedora Linux 39, and my stylesheet is the default. I'm running from source and the Qt version is 6.6.0 I can trivially reproduce it because it happens every time. I restarted hydrus and it still happens I was wrong about it being both the parents and siblings pages. it's only parents. siblings seems to resize correctly
I had an ok week. I finished the list rewrite, so all multi-column lists across the program now populate and sort far quicker, particularly when they have tens or hundreds of thousands of items, and I fixed some bugs. The release should be as normal tomorrow. >>16093 Thanks--maybe fixed tomorrow, let me know how it goes. >>16083 >>16085 Thanks--maaaybe fixed tomorrow, let me know how it goes.
>>16076
>let me know if you still have any problems here.
>checks locale in client
>locale: Polish_Poland/pl_PL
me: client, whats timestamp of this "7/18/2023 8:32:00AM"? (using converter in string processor)
client: oh! it's "1689661920"
>sets locale to "English (USA)" on windows
>restarting client
>locale: English_United States/en_US
me: client, whats timestamp of this "7/18/2023 8:32:00AM"?
client: oh! it's "1689661920"
>sets locale to "Russian (Russia)"
>restarting client
>locale: Russian_Russia/ru_RU
me: client, whats timestamp of this "7/18/2023 8:32:00AM"?
client: *lags for 5-10 seconds*
client: ERROR: Could not apply "datestring to timestamp: automatic" to string "7/18/2023 8:32:00AM": Sorry, could not parse that date!
inb4 "am i a joke to you?"
to be honest, i don't know either how it ended up like that. this thing is so weird
https://www.youtube.com/watch?v=X7OpjB_8sHQ windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v588/Hydrus.Network.588.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v588/Hydrus.Network.588.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v588/Hydrus.Network.588.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v588/Hydrus.Network.588.-.Linux.-.Executable.tar.zst I had an ok week. Multi-column lists work faster across the program. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights I finished my list rewrite. Multi-column lists look and work exactly as they did before, but they initialise and sort faster. I still have some optimisation to do, but my test list of 170,000 items now sorts in about four seconds. More generally, many normal delete and insert events should have just a little less lag. I hope this makes dealing with large file logs and so on a bit less of a hassle! Otherwise, I fixed some visual bugs and cleaned up some similar files maintenance code. next week I want to optimise some db maintenance code and otherwise just do some simple cleanup.
>>16096 While you're on DB maintenance, could you please add an option to vacuum databases on another disk? I know it can be done with some hackery, and I've done it before, but it would be a lot more convenient to specify the target disk explicitly. The default of "/tmp" for vacuuming large databases (like PTR mappings) also isn't great.
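For reference, SQLite 3.27+ has VACUUM INTO, which writes the vacuumed copy to an explicit path (e.g. a file on another disk); you then swap the files yourself while the client is closed. A minimal sketch of the idea:

```python
import sqlite3

def vacuum_to(db_path, dest_path):
    """Write a vacuumed copy of db_path to dest_path, which can live on
    a different disk with more free space. dest_path must not already
    exist. Requires SQLite 3.27+."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("VACUUM INTO ?", (dest_path,))
    finally:
        conn.close()
```

After it finishes, you'd verify the copy and then move it back over the original by hand.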
>>16094 >let me know how it goes. the parents window issue is still there in v588. the window won't shrink horizontally so the right side of what's supposed to be there just gets cut off.
Is there any way to migrate parents and siblings from one service to another only when at least one of the tags in each pair are actually present in "all my files"? I asked about this about a year and a half ago and you said that you had plans to implement a way to do that some time in the near future. I know it's probably not here yet, but I figured I'd ask again anyway. I stopped syncing with the ptr about 2 years ago, but I still have the service in my db because I don't want to add a whole bunch of relationships I'll never need to my local service, but I also don't want to just lose everything either. You're already able to do the equivalent of what I'm talking about with mappings, taking the mappings for just the files in "all my files" and discarding the rest. basically what I'm asking for is the same thing, but for relationships, where you take all the relationships that involve tags in "all my files" and discard the rest. it's not urgent of course, but it'd be nice to finally get rid of the service and hopefully shrink down my db in the process.
>>16081 Thanks. I'll probably hide this behind an option/advanced mode. I don't want to direct new users to those file services much as they often confuse. I also need to rewrite some layout GARBAGE I accidentally did the other week that broke the 3x3 buttons on the new page selector when there are less than four buttons to show. Also I'd really prefer to tie the favourite searches system into page selection. I open a new page and don't care about the file service since I'm just loading a favourite search anyway 95% of the time. >>16082 Sure, I'll check it out. Can't promise anything, but a quick look suggests this wouldn't be too tricky. >>16083 >>16085 >>16087 Thanks again for this. I investigated what could be causing a freeze during the 'enter a reason' text dialog, and the odd thing is that I discovered a possible way to enter a complete deadlock, i.e. the whole program freezes indefinitely, but not a way it could freeze for 25 seconds. So, maybe the deadlock wasn't happening the way I thought, and/or python/Qt had a way to get out of it (seeing some recursive call or similar), or I did not catch what was causing the delay here. Anyway, I fixed the major problem I saw. That said, I had a good look at your profile, and while I see some ~1.3 second database delays, I see nothing that would obviously cause a long delay from the db end, so I think we are looking at some UI traffic jam. Let me know how you get on and I'll keep working at it. >>16088 We have very good default support for single tweets. Just drag and drop the tweet URL onto your client and it should work, video too, with creator and URL and tweet text as the note. This works through the excellent fxtwitter and/or vxtwitter services. You can also just batch up the tweets you like and then paste into a hydrus urls downloader page. What we can't do is search twitter. (Elon now charges, I think, $5,000 a month for this) I personally use yt-dlp for many video tweets, too. 
It works out of the box for everything except 'nsfw', which needs login credentials. >>16089 Not right now, but I hope to add some tech around this subject over the next year or so. I want to finally push on 'here is a tag replace rule, apply it to everything', with particular attention to the underscore cases.
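In the meantime, if you export your files with .txt tag sidecars, a quick post-processing script can do the underscore swap before you upload. A throwaway sketch, not a hydrus feature:

```python
import pathlib

def underscore_sidecars(export_dir):
    """Rewrite every .txt tag sidecar in export_dir, replacing the
    spaces inside each tag with underscores (one tag per line)."""
    for sidecar in pathlib.Path(export_dir).glob("*.txt"):
        tags = sidecar.read_text(encoding="utf-8").splitlines()
        fixed = [tag.replace(" ", "_") for tag in tags]
        sidecar.write_text("\n".join(fixed) + "\n", encoding="utf-8")
```

So 'blue sky' becomes 'blue_sky' and 'creator:some artist' becomes 'creator:some_artist', which is what most boorus want.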
>>16092 That's a fun idea, and I agree a system like an automatic classifier would be the place to figure it out. When I first started the program, I expected to write all sorts of custom sort and presentation rules for number tags, stuff like a 'cup size' namespace that would convert from 'a-cup' to a sortable value, but ultimately it never panned out. The overhead of implementing these systems, and then that on the user of actually maintaining and editing them, always trumps the simplicity of 'huge breasts'. It would totally be possible to store tags with a confidence value. Instead of the classic (file_id | tag_id) mapping pair, you'd have (file_id | tag_id | confidence_float) or (file_id | tag_id | confidence_float | weight_float ). It would make a slightly bloatier database, but not dreadful. Probably not appropriate for the PTR or any other mass-share, since a couple billion floats will add up, but it could be good for a local store. I suppose you could start thinking about special search or display tech that said 'find me all the "skirt" with confidence > 0.8', or for weight 'find me all the "bikini babe" that consumes > 65% of the screen'. I think the confidence variable might be moot for storage, since I imagine we'll narrow down specific models, or perhaps all models, to a good confidence threshold and not broadly be interested in altering that after the fact, and thus the confidence is something we play with when we add tags, i.e. in deciding yes/no to add, not after they are added. But yeah I'd only think about this if we could generate these variables automatically. A human is never going to be able to generate reliable 'weight' numbers, nor have the patience to do that more than 50 times. The more I think about an automatic weight, the more I like it. >>16093 >>16098 Damn, thank you. 
Some of that might be the list being rude about its minimum width, so can you try right-clicking on the header and selecting 'reset default column widths...', and then I expect you'd need to close/reopen the dialog. Does that fix/improve anything, or does it want to be huge again? Ideally, when you make the dialog wider, all the spare space is going to be eaten and then surrendered by the 'note' column of that multi-column list, but if the other stuff gets wide, it might be screwing up the whole thing. There's a confluence of several kinds of shit layout code going on here, just years of me reinventing the wheel on 'man it would be better if dialog panels set their own scrollbars bro', and 'man it would be better if dialogs resized themselves dynamically bro' and it failing in certain situations. >>16095 Thanks, I'll poke around again. The language/region/locale parameters here are funny; they seem, inside dateparser, to be converting to 'xx_XX' locale strings for a lookup dict, but when I actually looked I didn't find the classic 'en_US', so I opted for 'languages = [ 'en' ]'. I guess it is somehow getting tripped up with some bonkers 'en_RU' mapping or something. You obviously have your own experiences, but I'll state for my part that calendar calculations are the absolute worst thing I have ever come across in computing. Everyone involved in designing calendars and times and locales over the past ten thousand years seems to have been non-engineer astrology-brained. I was fucking boggled beyond belief to learn about this some time ago: https://en.wikipedia.org/wiki/Unix_time#Leap_seconds There was the chance, finally, to make a non-fucked-up time format we could all sync to, literally just count one second for every second bro, and the lizardmen simply wouldn't permit it. >>16097 Very good idea. I will see what I can do. >>16099 Sorry, I don't think I have the tech to pull this off just yet. 
You might be able to figure out part of this with the Client API, but it would be a pain in the ass. We need better ways of selecting parents and siblings. I just made the dialogs work a hell of a lot faster, so perhaps we can start doing some better import/export logic there, with mappings lookups, and add the same to the 'migrate tags' dialog. I'll make a job to think about it.
>>16101 >can you try right-clicking on the header and selecting 'reset default column widths...', and then I expect you'd need to close/reopen the dialog. Does that fix/improve anything, or does it want to be huge again? Actually that fixed the problem, and it looks like it's not going back to cutting off when I reopen it! Nice and thanks! I wonder what column width really had to do with the issue though. I could see all the columns fine so that wasn't the problem. It was that the "frame" wasn't getting narrowed properly when the window's width was reduced, so the window was just cutting it off. it didn't seem to be related to the lists. A minimum width that's too high should never result in broken windows like that anyway. Is that an issue with Qt itself?
For the auto deduplication tech, will there be a way to "rank canonicity" or something? Put simply, an artist's official Pixiv is more canon than a booru page, so a pixel dupe with a Pixiv hash would be more "correct" and preferable to the booru hash. Does that make sense?
Sorry if you've already been asked this hydev, but would it be possible to add custom quick actions to the duplicate filter? One example I can think of is "this is a variant set" which would only transfer some tags over such as creator or series, but not all tags. Adding another quick action just for the 1% of images that are variant sets probably wouldn't be worth it for you but it would be helpful for users to have the ability to do stuff like that.
Hey, I'm pretty sick, so no release tomorrow. v589 should be out on the 11th. Thanks everyone!
>>16112 Hope you get well soon, bro!
(23.09 KB 131x249 qt firefly girl layna sad.png)

>>16112 Ganbatte
>>16112 >I'm pretty sick F Get better devanon.
>>16112 I'm channeling my energy through the ether to allow you a faster recovery!
Is there a reason why the SHA256, Blurhash and Pixelhash show in parentheses but MD5, SHA1 and SHA512 don't? SHA512 is a bit longer and could maybe be shortened with '...' before displaying in parentheses. I wanted to compare the filename of a sankakucomplex file to the hashes, to see if the filename is in fact one of those hashes. But i had to first copy the MD5 hash and paste it somewhere to be able to compare. If they were displayed, i wouldn't need those extra steps. If it is possible, please make them display too Hydev. Wish u a fast recovery!
I recall reading here a month or 2 ago that there was a way to still have subscriptions for x/twitter but it'll only grab the first 10 or so tweets. I can't find it in the defaults or the cuddlebear repo though. does anyone know what that was? I'd like to have that hack solution and just force the subs to check more often.
>>16102 Great, I'm glad it was something simple in the end, even if it is all my fault. This was one of my 'reinvention of the wheel' moments of genius, I decided to make a new sort of dialog panel that would handle some sizing things more dynamically, and while I am happy that hydrus generally does not suffer from cramped UI that you often see in other techy 'dense' programs, and it generally successfully eats up monitor space for guys on 4k etc..., we do get the opposite problem here where if the internal widget in the panel is adamant that its minimum size is 1,200 px wide, then the dialog is going to force that and set scrollbars. My multi-column list is one of those things that will insist that its columns are the same size as the previous time it loaded, with the exception of the last column which is supposed to be the resizable one. I think in your case the 'note' column was entirely hidden behind the scrollbar. Basically resetting the column widths back to default allowed the dialog to refigure itself properly. I'm slowly fixing some of my bad layout decisions, some of which were spawned in the wx days, to more Qt standard, and I'm happy every time I do so. I'll keep working here, but I am also fond of a couple of the dumber ideas I've had for resizing dialogs, so we'll see where we end up. >>16109 Interesting idea. My intention for the future state of the duplicate comparison system (and the auto-resolution system, which will use the same tools), is to have a new 'metadata conditional' object that allows you to deeply customise comparison scoring. The metadata conditional is going to be an algebraic lego brick that will plug in all sorts of cases and will say 'file x has y property'. I think you'd be able to say 'if file A has a pixiv url and B has a booru url, give A +20 points', so I think you'd be able to do this in time under your own steam. 
I can't promise the system will be this clever for a while though--we'll be starting with simpler stuff like 'file A has >2x the number of pixels of B' and so on. >>16110 This is a very good idea, thank you. We need more customised control here, and will want more of it in future. I'll have a think. >>16115 >>16116 >>16117 >>16118 Thanks, no worries. Just one of those things that completely knocks you out a couple days. Doing 7/10 now. >>16119 Yeah, it is a silly thing but the media object that backs thumbnails in hydrus knows about the sha256 and pixel/blur hashes (the media object loads them from the db as it is created, since they are used for some UI stuff), but the md5 et al need to be fetched from the database on an as-needed basis. Since you are only ever looking at one file here with human eyes (and thus it won't be db-expensive), I'll make a job to populate that menu with a db request. I know how to do this quietly and quickly in the background these days. >>16120 I think it is fucked now and the guy(s) who were trying to maintain it just gave up. afaik it was using like one of those 'other tweets by this user' boxes that you see embedded in blogs to get those 10/20 tweets, and I guess that API route is now blocked or obscured. If you want to search tweets, you gotta give Elon $5,000 a month! https://developer.x.com/en/portal/petition/essential/basic-info?plan=pro I have a memory that vxtwitter or fxtwitter were looking to provide some easy search layer, but I think again it was ultimately blocked by obfuscation on twitter's end. Twitter are actively trying to make it difficult to search, so we are unlikely to see a good solution here.
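To sketch the 'metadata conditional' scoring idea from earlier in this post: one kind of rule could award points based on where a file's known URLs come from, which covers the canonicity-ranking request. Everything here — domains, point values, function names — is invented for illustration:

```python
from urllib.parse import urlparse

# hypothetical rules: (domain, points awarded if the file has a URL there)
DOMAIN_SCORES = [("pixiv.net", 20), ("twitter.com", 10)]

def canonicity_score(urls):
    """Score a file by the domains of its known URLs."""
    score = 0
    domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
    for domain, points in DOMAIN_SCORES:
        if any(d == domain or d.endswith("." + domain) for d in domains):
            score += points
    return score

def prefer_a(urls_a, urls_b):
    """True if file A's sources look at least as canonical as file B's."""
    return canonicity_score(urls_a) >= canonicity_score(urls_b)
```

With these made-up numbers, a pixel dupe with a pixiv url beats one with only a booru url.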
>>16121 >Since you are only ever looking at one file here with human eyes (and thus it won't be db-expensive), I'll make a job to populate that menu with a db request. I know how to do this quietly and quickly in the background these days. Good stuff! πŸ‘
>>16100 Sorry for the delay in getting back about the UI hanging. I hadn't gotten around to updating, since disabling the realtime parent/sibling sync seemed to completely stop the hanging, but it happened again just recently, so I finally upgraded, and it seems to be resolved now. Thanks hydev!
Looks like idolcomplex added new pointless namespaces. I've seen anatomy, automatic, fashion, object, pose, and setting.
When you check the queries of a subscription it gives you an option to check only living ones, only dead ones, or both. It'd be cool if there was also an option to only check unpaused ones, since that's what I basically always want, and I got bit by accidentally running all the paused queries that I didn't want to run yet, by using that button and not realizing that it would check paused ones as well.
>>16125 I second this feature suggestion.
Hydev, give it to me straight: is Hydrus ever going to work on Wayland?
>>16128 Nothing is ever going to work on Wayland. It's by design.
hate to ask as I'm sure it's an option i'm overlooking but is there a way to change times from "2 months 9 days ago" to timestamps?
I had a good couple of weeks. I mostly worked on code cleanup and optimisation, so large clients should feel snappier. The release should be as normal tomorrow.
>>16128 >Wayland Another layer of abstraction that bloats Linux even more and complicates software maintenance. Keeping things as simple as possible would be better.
>>16132 easing maintenance and making session management simpler and more secure is exactly why Wayland exists. You have no idea why x11 needed to be replaced. >>16130 in the options go to "gui" then under misc, check "prefer ISO time"
https://www.youtube.com/watch?v=Ka4pfP2z8iA

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v589/Hydrus.Network.589.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v589/Hydrus.Network.589.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v589/Hydrus.Network.589.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v589/Hydrus.Network.589.-.Linux.-.Executable.tar.zst

I had a good couple of weeks mostly cleaning code and optimising things.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

If you have a lot of files, your database update may take a couple of minutes this week. Some users have had unoptimised similar-files search for a while, and this fixes it.

If you have a lot of tag siblings and parents (e.g. if you sync with the PTR), I have reduced the lag around sibling/parent processing significantly. There's a cache that previously had to be regenerated on every sibling/parent change, which for the PTR could take 1.3 seconds; now it just updates in a couple of milliseconds. If I have been working with you on lag issues, and we discovered that shutting off 'normal time' sibling/parent work fixed it, try turning it back on this week and let me know how it goes. If you do prefer manual control of this, there's also a new 'sync now' menu entry under tags->sibling/parent sync that basically slams all the 'work hard' buttons and functions as a catch-all 'do all outstanding work now'.

A bunch of big lists like the 'file log' should now initialise and sort a little bit faster, particularly when you are pushing 10,000+ items.

I fixed the 'page chooser' dialog's 3x3 layout, which was accidentally un-done in a recent layout overhaul. This thing is stupid, but I'm still fond of it. There's also some new checkboxes under options>pages that govern whether 'my files' and the more advanced 'local files' will show up in the 'files' choice.

The safebooru parser should stop grabbing '?' tags, and catbox collection URLs are now parseable.

next week

I want to get some meaty work done, either chipping away at dynamic file storage or duplicate auto-resolution.
>>16134 I had to copy my folder, but some db file went missing. As a result, all my pics are gone, but they are still in the db folder; I just can't see them when I open the program. How can I restore them :(
Can I associate additional URLs with files that are already downloaded? For example, so that files downloaded through a site's API would be marked by a browser extension on that site's regular web UI. The data (IDs) required to construct the web URLs is available in the API response.
when the preview frame is completely collapsed, it'd be cool if it didn't load the preview into the frame. There are times where I quickly flick through many files, and the constant loading of the previews lags hydrus a bit, and it's especially annoying if they're videos with sound. I'm guessing that Hydrus knows when the frame is collapsed, because it snaps shut rather than sliding shut gradually. it's not a big deal, but this would be nice.
I accidentally closed a page of 52 4chan watchers, most of them dead. I have a database backup with the page. How do I restore the source data?
Been using Hydrus for years as an early adopter and supported on Patreon when I could afford to do so. You're a fucking saint for continuing development for as long as you have and as consistently as you have. (And a thank you to all the other people who have chipped in to help as Hydrus has grown beyond a single-developer project.) Just wanted you to know that I managed to update from v237 without any issues, although it did take a while going 20-30 versions at a time. Anyways, I have a general question for anyone who might know the answer. What would be the easiest way to convert/optimize Hydrus files that are already in the db without fucking up my db? I want to make use of newer formats for a lot of my files without having to re-import and re-tag everything. Thanks.
>>16140 the dumbshit way would be to run an external script, re-import everything, and use the appropriately configured duplicate scanner to port everything over, it should all be either pixel duplicate or very close distance
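To make that concrete, here is a minimal, hypothetical sketch of the "export, convert with an external tool, reimport" loop. Nothing here is hydrus tooling: the default `cjxl_convert` assumes the JPEG XL reference encoder (`cjxl`) happens to be installed, and all paths and extensions are illustrative. You would point it at an exported folder, then reimport the output and let the duplicate scanner do its thing.

```python
import subprocess
from pathlib import Path

def cjxl_convert(src: Path, dst: Path) -> None:
    """Hypothetical default converter: shell out to cjxl, if installed."""
    subprocess.run(["cjxl", str(src), str(dst)], check=True)

def convert_folder(src_dir: str, dst_dir: str, new_ext: str = ".jxl",
                   convert=cjxl_convert) -> list[Path]:
    """Apply `convert` to every jpeg/png in src_dir, writing <stem><new_ext> into dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    outputs = []
    for p in sorted(src.iterdir()):
        if p.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue  # skip non-image files that came along in the export
        out = dst / (p.stem + new_ext)
        convert(p, out)
        outputs.append(out)
    return outputs
```

The converter is a plain callable, so you can swap in avifenc, ffmpeg, or anything else without touching the loop.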
I've been using the same path folder since version... I don't know, 420? and I noticed that when I try to run hydrus_server.exe I get this error (pic 1). Running that .exe from a fresh install seems to run fine (pic 2). Can I just backup my database and move it to a fresh install and move on with life?
>>16123 Hell yeah, should be even better in v589, let me know how things continue to go.

>>16125 >>16126 Thanks, I will fix this! I'm not sure if it is even appropriate to ever apply that to paused subs, but I'll have a play around and see what makes best sense.

>>16128 I am not a Linux guy, and I certainly don't have a Wayland machine to test with, so I very much cannot promise anything, outside of some deeply informed bug report that says 'oh, I happen to know this Wayland-Qt lore, you need to change how this resize event is handled', or it working incidentally. There is another guy who I trust to make pulls to the hydrus codebase who has expressed interest in debugging different flavours of Linux, so it is possible he will figure out what is going on, but my general attitude is that my code is duct-taped shit, the python-Qt environment is pretty hurdy-gurdy on its own, and then you add whatever new UI rules Wayland is trying, and I just can't guarantee anything clever like mpv is going to work well. That said, Qt is getting better, on average, with every new version, and its integrations into different OSes are improving too, so whatever the hell is going wrong on x or y system will, over the coming years, get less busted. It won't be much by my hand though!

>>16133 >>16130 I hope to improve this in future, btw, as this option doesn't cover all timestamps (and the system is broadly more complicated than one checkbox can cover). I want to write a thing that'll nicely do a tooltip of the reverse, so you'll hover over '3 months ago' and it'll have the date as a tooltip, and vice versa. Pain in the ass to do, but I want to improve how times render everywhere.

>>16135 Sorry for the trouble. Are you absolutely, definitely sure you cannot recover that .db file? Or a backup, even an old one? Did you get the 'hey, this looks like the first time you have run the program' message when you booted up for the first time, or have you had a whole load of missing file/folder errors?

If the missing file was client.db, I'm afraid your database is probably unrecoverable and you are starting back from square one. If you are ok with that, and you really do just want to recover the files in your client_files structure, then I think you probably want to look at 'install_dir/db/help my media files are broke.txt'. You'll basically be running the 'clear orphan files' command to get everything cleanly out, and then reimporting. BUT you really should try to recover your old database if you can. All your archive/inbox data and tags and things are probably lost as well, if you haven't got those files. 'help my db is broke.txt' in the same directory may have some useful background reading, since I think you may need to diagnose exactly what went wrong here. Let me know if I can help any more. If you are only missing one db file, then you may be operating on a mish-mash of old and new db files and there may be more work to do to clean things up before you recover, so let me know how you get on.

>>16136 If you have URLs that you want to associate with files via the API, this is the command you want: https://hydrusnetwork.github.io/hydrus/developer_api.html#add_urls_associate_url
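As a quick sketch of what that call looks like from, say, Python: endpoint and parameter names are per the Client API docs linked above, while the port (45869 is the Client API default) and the access key are placeholders for whatever your own setup uses.

```python
import json
import urllib.request

API = "http://127.0.0.1:45869"  # default Client API address; yours may differ
ACCESS_KEY = "replace-with-your-64-character-hex-access-key"  # placeholder

def associate_url_request(file_hash: str, url: str) -> urllib.request.Request:
    """Build the POST that attaches an extra known URL to an existing file."""
    body = json.dumps({"hash": file_hash, "url_to_add": url}).encode("utf-8")
    return urllib.request.Request(
        API + "/add_urls/associate_url",
        data=body,
        headers={
            "Hydrus-Client-API-Access-Key": ACCESS_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# with urllib.request.urlopen(associate_url_request(sha256_hash, web_url)) as resp:
#     resp.read()  # a 200 response means the URL is now associated
```

The file hash comes back from the site API download itself (or from `/get_files/search_files` + `/get_files/file_metadata` lookups), so a small script can walk its ID-to-URL mapping and attach the constructed web URLs one by one.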
>>16137 Damn, this is not supposed to happen. I will examine how it tests this. I think the preview correctly knows it shouldn't load video if you hide it by double-clicking the slider resize thing (which removes it entirely rather than sizing it to 0px), but I wonder, if you manually size it to 0px, whether it still thinks it is loaded and should be doing things? I'll check it out.

>>16138 If you have the page in a session backup (pages->sessions->append session backup):
- Pause all network traffic under network->pause
- Do append session backup and load the old session; it should appear in its own contained named page of pages
- Drag and drop the watcher from the backup to your real session
- Close the rest of the old session page of pages
- Resume all network traffic.

If you only have the page in a backup database file:
- Open the backup client database by using the -d="path/to/db" launch parameter, as here: https://hydrusnetwork.github.io/hydrus/launch_arguments.html#-d_db_dir_--db_dir_db_dir
- Go to the watcher page, ctrl+a the list, then right-click->copy urls
- Open a new watcher in your real client
- Paste the URLs in

For the second case, I'm afraid you can't simply move a page from one db to another, so just pasting the URLs back in should get you most of the way back. If you were not aware of the first case, have a poke around your session backups. The program makes a save on every client exit, so that may reach back a week or two if you check the timestamps.

>>16140 Thank you for your support; I am glad you like it! It is cool the v237 update went ok. I think >>16141 is the way for now: export, convert, and reimport. I would recommend against it, though, just because it would be a pain in the ass, and say 'wait three to five years', because I expect hydrus to get much better auto-duplicate resolution tech, and with that it'll get internal file conversion tech.
If and when, inshallah, we get JPEG XL adoption, or some similar 'good' format, I foresee most of the internet doing a bunch of (possibly AI-assisted) upscaling, HDR-ification, or just general 1-to-1 space-saving conversion from all the old sRGB jpegs and pngs, and hydrus will have a variety of ways of supporting that through an 'exe manager' system (search previous hydrus threads for more on this) that will call some external ffmpeg-like file converter, re-import the file output, and apply sensible metadata merge and de-dupe tech, all in a native pipeline. When this will happen, I do not know, so if you want it done more promptly, maybe you might like to play around with 100 files now and see how much of a pain it is. I'd be interested to know the results if you do try it. My background rule on this, btw, is that hard drives are cheaper than man-hours, so if your intention here is to save hard drive space, it is probably easier just to buy another 8TB drive than to spend a hundred boring-ass human man-hours on the conversion.

>>16142 Sorry for the trouble. There should be a fairly simple fix here, no worries. Sometimes when I change the program, I need to change the database structure too. This is a very complicated thing, and I cannot guarantee that a change I make today will still make mathematical sense a year from now (this is called 'bit rot', and is aggravated by me being a solo dev), so when your client wants to update from v520, I have a check first that says 'are we recent to the time this database was current, or is it used to code that is years old?'. If it was years ago, it dumps out and says 'please try an earlier version'. Thus, if you want to update from a very old version, like v300->v500, I ask that you not do it in one go, but in increments of 10-30: you'd try doing v300->v330, with one full boot and then an exit, then you'd try v330->v360, and so on, until you are up to the version you want to get to.
That's what the error in the first screenshot is talking about. Please check here for more info: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#big_updates So, please try 'updating' to v535 or so and see if that works. Then try v558, and then v571. As that help document talks about, you may like to do clean installs too. Let me know how you get on!
>>16147 Thanks for the response. For what little it might be worth, I find that .avif almost always outperforms .jxl and it's my preferred format now; it just sucks that support is still largely missing outside of browsers (and Hydrus! :D)
> My background rule on this, btw, is the hard drives are cheaper than man hours, so if your intention here is to save hard drive space, it is probably easier just to buy another 8TB drive than spend a hundred boring-ass human man hours on the conversion.
This is precisely why I haven't bothered, but I'm hitting a point where I'm going to have to set up a NAS or something instead of using plug'n'play drives. Hardware solutions are still much easier than software ones, so if I need to keep throwing hardware at it, I will.
>>16147 >>>16138 >If you have the page in a session backup (pages->sessions->append session backup): It disappeared earlier. >If you only have the page in a backup database file: >- Paste the URLs in I did that around the same time I posted the question, but I am pretty sure it only worked with desuarchive. Thankfully, desuarchive was still accessible despite being both protected by Cloudflare and blocked.
I really wish we had a button, maybe next to "paste" and "favourites", to just clear the current search. Right now I'm using the "empty page" option on the favourites drop-down menu, but that feels like a click too many.
>>16150 I normally just highlight everything and double click.
>>16147 >>16142 (me) Thank you for the quick reply, and thank you again for the amazing software! So, I managed to update my client to 589, and it's working great! One question though, since you mentioned clean installs: now that I have a working client + database backup, would it be better to just do a clean install and get that backup running? I noticed I have lots of old folders in my main hydrus path that may just be "junk" lying around from previous versions. What do you think?
>>16151 Yeah, this seems to be the only way with multiple "and"s. It would be nice to have a "clear all" button like on a calculator.
So, I started using Hydrus a few days ago, and I have LOTS of things to manually tag. I'm also going to get tons of tags from an autotagger (I'm using this model for now https://huggingface.co/SmilingWolf/wd-v1-4-swinv2-tagger-v2), but I also want to add certain tags that are unique enough for me to know them off the top of my head. How do you guys usually go about this? Do you have a set of tags that you choose from? Do you try to keep tags consistent with danbooru's tag styles?
>>16154 I do all manual tags except for some artist tagging, for which I have my own personal tags auto-applied to artist subscriptions. I stagger my activity between other tasks, usually chores, collecting new files from new non-subscription artists, or playing videogames. I import about 200-300 files at a time and then wait until I've finished tagging them before the next batch of non-subscription imports. This is my workflow.
>Eros:non-erotic, semi-erotic, erotic
Goes fairly quickly. Semi-erotic for things that have some erotic appeal but have something preventing me from enjoying them wholly as an erotic work. This is usually applied to erotic jokes.
>Lewdness: safe, lewd, explicit
Explicit is a parent tag of whatever explicit body part is shown for the namespace exposure:nipples, cock, pussy, et cetera. During this I also tag if a pussy is a close slit, if its outline is visible through clothes, if the clitoris or urethra are visible, and if the clit hood is protruding.
>Character count:solo, two-ten, 1-10girls/boys/futas/shemales, many girls/boys/futas/shemales, many, none
>Character:name and Origin:anime, manga, videogame, cartoon, comic book, source material, etc
Character: tags have IP: tags as parents, which is the equivalent of a series tag but makes more sense to me, because not everything has multiple entries and is thus technically a series. IP: tags often have Genre: tags as parents. Once I went through and created most of these, I don't have to touch them very often.
>Hair:long, very long, short, styles and Colors:black and white, redscale, bluescale, etc. and Hint of Color:hair, body, background, eyes, etc.
This is the first really tedious step, and the next three are all fairly tedious as well.
I group Colors: and Hint of Color: in this step since it goes quickly and is necessary to prepare for the next three steps.
>Eyes:color, number of eyes, kind of eyes, kind of pupils
>Clothes color: and clothes under color:
>Clothes:alternate outfit, nude
>Body:trueflat, very small/small/large/very large breasts, small/large/very large cock, large nipples, loli
>Frame:face, upper body, lower body, face cropped out, multipanel, character count:duplicates
I define character count:duplicates as any instance of multiple copies of the same character that don't exist in the same physical timespace, like clones, nor in any particular sequence of events. I put it here instead of with the normal character count because I kept forgetting it when it was originally there.
>Creator:
For anything not autotagged with a subscription. Usually goes very fast.
>Folder specific tags / sets
Anything miscellaneous that I can easily mass-tag large portions of the current batch of files with. Usually has a strong alignment with Character: tags. Set: tags are unique to a group of sequential images.
>Individual tagging
I set up three tabs, each limited to the current import batch, by tagging any subscription imports that came up in the meantime as "Set: temp #". One tab for all sets, one tab for all alternate groups not in a set, and one tab for what remains after. I go through them in this order, tagging thoroughly 5 files at a time. If fewer than 5 are left at the end, I apply a set: temp # tag to them until the next batch is ready for individual tagging.
I've tagged about 31300 files so far. I have approximately 3200 to go. But I am also gathering new files from artists of interest until I am out of artists, then new files for characters of interest until that backlog is finished, then fetishes of interest until that backlog is finished. I only make subs for artists of interest, and once done I'll stop mass-gathering files on the regular. I've been going for about two years, I think.
I applied quite nearly all my ideal tags with rare exceptions, and if I need to change a tag, I go through the effort of exchanging it for the new ideal tag instead of using siblings. I do not trust boorus nor AI to tag files to my standards and tastes.
>>16155 >>Character:name and Origin:anime, manga, videogame, cartoon, comic book, source material, etc <Character: tags have IP: tags as parents, which is the equivalent of a series tag but makes more sense to me because not everything has multiple entries and is thus technically a series. IP:tags often have Genre: tags as parents. Once I went through and created most of these, I don't have to touch them very often. Also, it may seem like it would be simple to make Origin: a parent of most IP: tags, but due to the existence of adaptations, I find characters may often belong to both, or just one, of the origin: tags, in a way that doesn't warrant creating a separate Character: or IP: tag to me.
>>16154 When I started using Hydrus I sat down and made a list of about 160 types of tags, and I ONLY use those tags. There are things that don't count under namespaces (eg: character names, IP, studio name, artist name, etc.). "hair color" is treated as a single tag type even though it could be "white_hair|blue_hair|red_hair|purple_hair" etc. Stick with the colors of the rainbow; you don't want to end up with tags like "chartreuse_hair", just use "green_hair". Like the other user, I have a "lewdness" tag that I use for "safe|ecchi|hentai". I avoid using an autotagger and the PTR because most people suck at tagging, and you end up with many useless tags and a lot of tag clutter.
Any Linux users have a suggestion for backing up my hydrus client files (NOT the database)? I have a lot of stuff but I do have a server with plenty of storage. I'm just not very familiar with rsync or others to know what flags are good.
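Not the dev, but for a one-way mirror of the client files to a server, plain rsync in archive mode covers most of it. A hedged sketch as a small command builder (paths and host are placeholders; note that `--delete` removes files on the destination that no longer exist locally, so preview with `--dry-run` first, and the trailing slash on the source means "copy this folder's contents"):

```python
import subprocess

# Placeholder source/destination: adjust for your own machine and server.
SRC = "/home/you/hydrus/client_files/"           # trailing slash: copy contents
DEST = "user@server:/backups/hydrus/client_files/"

def rsync_mirror_cmd(src: str, dest: str, dry_run: bool = True) -> list[str]:
    """Build an rsync command: -a archive mode (recursive, preserves times and
    permissions), -v verbose, -P progress + resume partial transfers,
    --delete mirrors deletions to the destination."""
    cmd = ["rsync", "-avP", "--delete"]
    if dry_run:
        cmd.append("--dry-run")  # preview what would change before committing
    return cmd + [src, dest]

# subprocess.run(rsync_mirror_cmd(SRC, DEST), check=True)                   # preview
# subprocess.run(rsync_mirror_cmd(SRC, DEST, dry_run=False), check=True)    # real run
```

Running the real command on a cron schedule gives a simple incremental backup; rsync only transfers changed files after the first run.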
>>16154 I basically only tag rating, character, or series namespaces. I get most of my tags from the PTR or parse them from sites then upload them to the PTR. I follow the PTR's guidelines closely which is similar to gel/danbooru.
>>16157 >"hair color" is treated as a single tag even though it could be "white_hair|blue_hair|red_hair|purple_hair" etc. > Stick with the colors of the rainbow you don't want to end up with tags like "chartreuse_hair" just use "green_hair". I don't get what you mean. You're still naming more than one color, so it's more than one tag, right?
>>16155 >>16156 >>16157 >>16159 >>16161 Thank you for the detailed answers. It looks like it will take a while for me to get comfy with a tagging workflow, but now I see the path ahead.
>>16162 I am severely autistic. If you want to do anything else with your life, I am not an example to follow.
I've been doing a gallery download from pixiv for about the last 2 days, and I'm getting a lot of files downloading that turn up "already in db". In other words, they should have been skipped, and not downloaded. I'm guessing this happens after the file is hash-checked. Instead of "URL recognized: imported...", they are marked with "File recognized: imported...". I think pixiv may be changing their urls, and this is why this is happening. I suppose there is no way to get a file hash from pixiv before downloading? Files that I already have are downloading when they should have been skipped. I have seen a few skip with the URL being recognized, but very few. Almost all have downloaded, and then the hash check has shown that they were already in the db. It really slows the downloading, and is a huge waste of bandwidth. Thanks!
>>16164 Also, I have already downloaded a lot of files from pixiv over the last 2 years or so, so I am assuming some of the "file recognized" is the same file already downloaded from pixiv, therefore I'm assuming they change their urls once in a while.
I recently borked something trying to update too many versions at once, yeah yeah I know. Luckily this was a backup drive so nothing was lost (other than my time), but I was wondering if there's any way to look into and undo this and make sure nothing went wrong with my database. Hydrus closed itself after failing to update, so I haven't touched it since. Worst case, I'll just have to delete my database and copy over my backup again, and I hate that it took me 3 days to copy everything over before.

v559, 2024/09/17 19:39:47: hydrus client started
v559, 2024/09/17 19:39:49: booting controller…
v559, 2024/09/17 19:39:50: booting db…
v559, 2024/09/17 19:40:11: checking database
v559, 2024/09/17 19:40:13: updating db to v541
v559, 2024/09/17 19:40:13: updated db to v541
v559, 2024/09/17 19:40:15: updating db to v542
v559, 2024/09/17 19:40:16: updated db to v542
v559, 2024/09/17 19:40:18: updating db to v543
v559, 2024/09/17 19:40:19: updated db to v543
v559, 2024/09/17 19:40:21: updating db to v544
v559, 2024/09/17 19:40:22: updated db to v544
v559, 2024/09/17 19:40:24: updating db to v545
v559, 2024/09/17 19:40:29: updated db to v545
v559, 2024/09/17 19:40:43: updating db to v546
v559, 2024/09/17 19:40:44: updated db to v546
v559, 2024/09/17 19:40:46: updating db to v547
v559, 2024/09/17 19:40:46: updated db to v547
v559, 2024/09/17 19:40:48: updating db to v548
v559, 2024/09/17 19:40:48: [[[41, 3, [2, 4, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 46]], [[0, "simple_data"], [0, null]]]]]]], [[41, 3, [2, 4, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 46]], [[0, "simple_data"], [0, null]]]]]]], [[41, 3, [2, 3, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 46]], [[0, "simple_data"], [0, null]]]]]]], [[41, 3, [2, 3, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 46]], [[0, "simple_data"], [0, null]]]]]]], [[41, 3, [2, 28, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 47]], [[0, "simple_data"], [0, null]]]]]]], [[41,
3, [2, 13, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [0, 0]]]]]]]], [[41, 3, [2, 13, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [0, 0]]]]]]]], [[41, 3, [2, 13, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [0, 1]]]]]]]], [[41, 3, [2, 13, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [0, 1]]]]]]]], [[41, 3, [2, 14, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [1, 0]]]]]]]], [[41, 3, [2, 14, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [1, 0]]]]]]]], [[41, 3, [2, 14, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [1, 1]]]]]]]], [[41, 3, [2, 14, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [1, 1]]]]]]]], [[41, 3, [2, 11, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [2, 0]]]]]]]], [[41, 3, [2, 11, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [2, 0]]]]]]]], [[41, 3, [2, 11, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [2, 1]]]]]]]], [[41, 3, [2, 11, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [2, 1]]]]]]]], [[41, 3, [2, 12, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [3, 0]]]]]]]], [[41, 3, [2, 12, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [3, 0]]]]]]]], [[41, 3, [2, 12, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [3, 1]]]]]]]], [[41, 3, [2, 12, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [3, 1]]]]]]]], [[41, 3, [2, 9, 0, []]], [42, 5, [0, [21, 2, 
[[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [4, 0]]]]]]]], [[41, 3, [2, 9, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [4, 0]]]]]]]], [[41, 3, [2, 9, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [4, 1]]]]]]]], [[41, 3, [2, 9, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [4, 1]]]]]]]], [[41, 3, [2, 10, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [5, 0]]]]]]]], [[41, 3, [2, 10, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [5, 0]]]]]]]], [[41, 3, [2, 10, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [5, 1]]]]]]]], [[41, 3, [2, 10, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [5, 1]]]]]]]], [[41, 3, [2, 15, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [6, 0]]]]]]]], [[41, 3, [2, 15, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [6, 0]]]]]]]], [[41, 3, [2, 15, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [6, 1]]]]]]]], [[41, 3, [2, 15, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [6, 1]]]]]]]], [[41, 3, [2, 16, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [7, 0]]]]]]]], [[41, 3, [2, 16, 0, [3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [7, 0]]]]]]]], [[41, 3, [2, 16, 0, [2]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [7, 1]]]]]]]], [[41, 3, [2, 16, 0, [2, 3]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 148]], [[0, "simple_data"], [0, [7, 1]]]]]]]], [[41, 3, [0, 97, 0, [0]]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 149]], [[0, 
"simple_data"], [2, [126, 1, [0, null]]]]]]]]], [[41, 3, [2, 6, 0, []]], [42, 5, [0, [21, 2, [[[0, "simple_action"], [0, 149]], [[0, "simple_data"], [2, [126, 1, [2, null]]]]]]]]]] v559, 2024/09/17 19:40:48: Had a problem saving a JSON object. The dump has been printed to the log. v559, 2024/09/17 19:40:48: Dump had length 4.66 KB! v559, 2024/09/17 19:40:48: If the db crashed, another error may be written just above ^. v559, 2024/09/17 19:40:48: A serious error occurred while trying to start the program. The error will be shown next in a window. More information may have been written to client.log. v559, 2024/09/17 19:40:48: Traceback (most recent call last): File "hydrus\core\HydrusDB.py", line 266, in __init__ File "hydrus\client\db\ClientDB.py", line 9830, in _UpdateDB File "hydrus\client\db\ClientDBSerialisable.py", line 747, in SetJSONDump File "hydrus\core\HydrusDBBase.py", line 289, in _Execute sqlite3.OperationalError: table json_dumps_named has no column named timestamp_ms During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hydrus\client\ClientController.py", line 2212, in THREADBootEverything File "hydrus\client\ClientController.py", line 1003, in InitModel File "hydrus\core\HydrusController.py", line 588, in InitModel File "hydrus\client\ClientController.py", line 207, in _InitDB File "hydrus\client\db\ClientDB.py", line 239, in __init__ File "hydrus\core\HydrusDB.py", line 287, in __init__ Exception: Updating the client db to version 548 caused this error: Traceback (most recent call last): File "hydrus\core\HydrusDB.py", line 266, in __init__ File "hydrus\client\db\ClientDB.py", line 9830, in _UpdateDB File "hydrus\client\db\ClientDBSerialisable.py", line 747, in SetJSONDump
[Expand Post] File "hydrus\core\HydrusDBBase.py", line 289, in _Execute sqlite3.OperationalError: table json_dumps_named has no column named timestamp_ms v559, 2024/09/17 19:40:48: boot error v559, 2024/09/17 19:40:48: A serious error occurred while trying to start the program. The error will be shown next in a window. More information may have been written to client.log. v559, 2024/09/17 19:40:52: boot error v559, 2024/09/17 19:40:52: Traceback (most recent call last): File "hydrus\core\HydrusDB.py", line 266, in __init__ File "hydrus\client\db\ClientDB.py", line 9830, in _UpdateDB File "hydrus\client\db\ClientDBSerialisable.py", line 747, in SetJSONDump File "hydrus\core\HydrusDBBase.py", line 289, in _Execute sqlite3.OperationalError: table json_dumps_named has no column named timestamp_ms During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hydrus\client\ClientController.py", line 2212, in THREADBootEverything File "hydrus\client\ClientController.py", line 1003, in InitModel File "hydrus\core\HydrusController.py", line 588, in InitModel File "hydrus\client\ClientController.py", line 207, in _InitDB File "hydrus\client\db\ClientDB.py", line 239, in __init__ File "hydrus\core\HydrusDB.py", line 287, in __init__ Exception: Updating the client db to version 548 caused this error: Traceback (most recent call last): File "hydrus\core\HydrusDB.py", line 266, in __init__ File "hydrus\client\db\ClientDB.py", line 9830, in _UpdateDB File "hydrus\client\db\ClientDBSerialisable.py", line 747, in SetJSONDump File "hydrus\core\HydrusDBBase.py", line 289, in _Execute sqlite3.OperationalError: table json_dumps_named has no column named timestamp_ms v559, 2024/09/17 19:40:54: doing fast shutdown… v559, 2024/09/17 19:40:54: shutting down gui… v559, 2024/09/17 19:40:54: shutting down db… v559, 2024/09/17 19:40:54: saving and exiting objects v559, 2024/09/17 19:40:54: cleaning up… v559, 2024/09/17 19:40:54: shutting down 
controller… v559, 2024/09/17 19:40:54: hydrus client shut down
I had a good week. I did the background work I wanted to do, and for the release I've got a variety of quality-of-life bells and whistles to roll out. Nothing huge, but a bunch of little UI improvements. The release should be as normal tomorrow.

>>16166 A quick look suggests your update code has hit some bitrot around the switch from second-based timestamps to millisecond ones. That object that couldn't save looks like a shortcut set. Maybe I save a change to a shortcut in the v548 update step, and it is happening in that case before I do the timestamp update. I'm sorry to say this looks like roughly 35% bad fuckin' news. You can try booting into v550 or so--maybe it'll reset your shortcuts back to default or something. Maybe everything would be fine but for that. The sort of things that might be a nightmare, which you should check if it does boot, are your subscriptions, obviously your normal search page session, and your downloader list. Fingers crossed, it just fucked with your shortcuts though. If you aren't sure things are good, go back to the backup.

If it is all fucked up, then yeah, roll back your backup and try again. The master list as of https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#big_updates appears to be: 521 (maybe clean install) > 527 (special clean install) > 535 > 558 > 571 (clean install). So I guess try v558 first?

If the backup restore takes a very long time, I think for this situation you only need to restore the four 'client.*.db' files. A db update like this doesn't touch your client_files folder at all, so there's no need to delete and restore that--it should be the same for both folders. You can use a program like FreeFileSync to mirror your install back to the backup folder's state with minimum hassle, just by having it check for different file sizes/modified dates. Hopefully that isn't more than a hundred gigs even for a PTR-syncing client. Let me know how you get on!
>>16167 Back. I went and did a clean install of v550 like you said and everything seems to be back to normal. Here's the log file. But yeah, I normally don't do updates from that far ahead; my fault for feeling ballsy jumping from v540 --> v559. Right now I'm still checking and so far everything is still there. My last session loaded up fine; Gallery Downloads, tabs, tags, and searches aren't giving me any errors; animated gifs and webms aren't giving me any errors; etc. I don't know how deep to look, but I'm still willing to do a full backup again and start over if necessary.

v550, 2024/09/18 01:52:21: hydrus client started
v550, 2024/09/18 01:52:23: booting controller…
v550, 2024/09/18 01:52:23: booting db…
v550, 2024/09/18 01:52:23: Found and deleted the durable temporary database on boot. The last exit was probably not clean.
v550, 2024/09/18 01:52:31: checking database
v550, 2024/09/18 01:52:34: updating db to v548
v550, 2024/09/18 01:53:00: updated db to v548
v550, 2024/09/18 01:53:02: updating db to v549
v550, 2024/09/18 01:53:09: An object of type String Splitter was created in a client/server that uses an updated version of that object! We support versions up to 1, but the object was version 2. For now, we will try to continue work, but things may break. If you know why this has occured, please correct it. If you do not, please let hydrus dev know.
[the same String Splitter warning repeated 8 more times]
v550, 2024/09/18 01:53:10: updated db to v549
v550, 2024/09/18 01:53:13: updating db to v550
v550, 2024/09/18 01:53:13: updated db to v550
v550, 2024/09/18 01:53:13: initialising managers
v550, 2024/09/18 01:53:27: booting gui…
v550, 2024/09/18 01:53:28: The client has updated to version 550!
>>16168 Great, sounds good! Keep updating, and those 'String Splitter' errors will be fixed. Those are some downloader objects that were updated by the newer-than-v550 install and that v550 doesn't understand--presumably a future update will overwrite them one more time and fix everything. Don't try to download too much until you are updated to at least the version you originally wanted to get to.
https://www.youtube.com/watch?v=eYN9WivpQ6M

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v590/Hydrus.Network.590.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v590/Hydrus.Network.590.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v590/Hydrus.Network.590.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v590/Hydrus.Network.590.-.Linux.-.Executable.tar.zst

I had a pretty good week and have a bunch of quality of life improvements to roll out.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

The 'check now' button in 'edit subscriptions' and 'edit subscription' is now more intelligent and handles pause status. It'll ask you about paused subscriptions and queries, and it better examines and navigates what you can and will be resurrecting.

If you shrink the search page's 'preview' window down to 0 pixel size, it now recognises it is hidden and will no longer load videos and stuff in the background!

In the parsing system, the 'character set' choice for a 'String Match' now includes quick-select options for hexadecimal and base64 chars.

When hitting share->copy hashes on a thumbnail, the program now loads up the md5, sha1, and sha512 hashes into the menu labels, so you can now verify md5s or whatever at a glance.

I wrote a thing for API devs and any other advanced users interested in hydrus content updates about how the Current Deleted Pending Petitioned model works in hydrus: https://hydrusnetwork.github.io/hydrus/developer_api.html#CDPP . I went into very specific detail because, when talking about this with a user the other day, I couldn't remember it perfectly myself.

Lastly, I am happy to say that I succeeded in doing, as I planned, some meaty background work. The 'Metadata Conditional' object, which does 'does this file have x property, yes/no?', and which I have been thinking about for a couple of years, got its first version, and it went a lot better and simpler than I expected. The whole thing plugs into the existing system predicate system and will share much edit UI with it. The MC will be a key lego brick in the development of the duplicate auto-resolution system, and most of the comparison logic for the first version is essentially done now. I pretty much just have to write the database tables and maintenance daemon, and then some UI, so fingers crossed it won't be too long before I am rolling it out for advanced users to play with.

next week

I want to add local media file path fetching to the API and do a bit of API permissions cleanup.
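On the new 'copy hashes' menu labels: if you ever want to double-check one of those digests outside hydrus, it's plain stdlib hashing--nothing hydrus-specific. A quick sketch:

```python
import hashlib

def file_hashes(path: str) -> dict:
    """Compute the digests hydrus shows (plus sha256, its native hash)."""
    hashers = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256", "sha512")}
    with open(path, "rb") as f:
        # read in 1 MiB chunks so huge files don't load into memory at once
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}
```

Compare the hexdigest against what the share->copy hashes menu shows for the same file.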
>>16170 >The 'check now' button in 'edit subscriptions' and 'edit subscription' is now more intelligent and handles pause status. It'll ask you about paused subscriptions and queries, and it better examines and navigates what you can and will be resurrecting. Noice.
i'm an idiot who keeps adding the ptr thinking it would be nice to have more tags and then removing it when those tags are useless and only on files i already had tags on from boorus. anyways, my database files are fucking huge even after vacuuming them all. is there any way to debloat them?
>>16173 If you want tags, especially for files that are not on boorus, look around for AI taggers. You'll find a discussion of them a handful of posts above. I've been using an old one that does a pretty good job; it just doesn't do characters (people or anime names and such). However, if you set the threshold low, it will tell you whether a pic is loli or not (it will tag it 'loli'). This is the link to the tagger I am using: https://github.com/Garbevoir/wd-e621-hydrus-tagger/tree/main
>>16178 AI taggers are the worst at the tags I actually search on so there's no point. I've tried them and it's just the same cycle as me adding and removing PTR. I like having other tags too, but I'm nearly always searching on character, gender, creator, or series. Also, it's a lie that that one doesn't do character tags, it just doesn't namespace them as character tags. Which is honestly worse because not only are they inaccurate, but they're spamming up the wrong namespace too. Circling back to my previous post, I think I just vacuumed too early after the last time I removed it, because I managed to knock off a good 10GB+ by vacuuming again. My DB files are at 19GB now, which still feels too big for the amount of files and how sparsely tagged they feel, but I can tolerate it.
(221.58 KB 450x470 d2zb69ff2.png)

>>16179 >My DB files are at 19GB now Damn. And I was surprised because my DBs were 230 MB.....
(17.10 KB 649x194 20-15:36:10redacted.png)

>>16180 >>16179 Haha how could anyone tolerate having such a large db, that's crazy...
>>16181 is that including files? it looks like it is. files+db for me is 203GB
Where do I look up Sankaku's http header?
(207.61 KB 1666x1603 Screenshot 2024-09-20 214851.png)

>>16183 I'm sorry, I've no idea what I'm doing. I wasn't on the beta page. So, now what?
(186.34 KB 1749x1573 Screenshot 2024-09-20 215301.png)

>>16184 Okay, I'm even more of a retard, it logged me out. Finally found the bugger.
>>16182 It doesn't include files but it does include thumbs. i totally forgot about the thumbnails, thank you anon. I'm moving them to a different drive right now until I can get more storage for my PC.
Imageboard parsers should replace HTML entities in thread subjects for the watcher titles.
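For reference, the fix being asked for here is just entity-unescaping the subject before using it as the watcher title--in Python that's one stdlib call (the example subject is made up):

```python
import html

def clean_watcher_title(subject: str) -> str:
    """Turn an HTML-entity-encoded thread subject into display text."""
    return html.unescape(subject)

# a subject as it might appear in a thread's raw HTML/JSON:
# "Anon&#039;s Art Thread &amp; Dump" -> "Anon's Art Thread & Dump"
```

html.unescape handles both named entities (&amp;) and numeric ones (&#039;).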
(20.97 KB 412x86 20-20:29:34.png)

>>16188 Oof, it doesn't actually contain the thumbnails, it's over for me.
I think my db is at about 60gb now.
>>16179 I don't think my tagger does characters. I'll have to take a good look. It might.
>>16154 remember: tags are for searching, not describing. if you're never going to search it, don't bother spending the time to add it as a tag. also, if you're importing files from boorus or using autotaggers, your files are going to have, well, booru-style tags. might as well keep it consistent and use the tags that already exist; why reinvent the wheel? so in conclusion: don't listen to this dude >>16155 also, since you say you're new to hydrus, make sure you read up on tag siblings and parents. those are useful. also take a look at tag display and search, which lets you hide tags. also, under the "tag suggestions" section of the options, you can add tags to the "most used" tab which helps with tagging quickly.
>>16170 >The 'Metadata Conditional' object, which does 'does this file have x property, yes/no?', and which I have been thinking about for a couple years, got its first version, and it went a lot better and simpler than I expected. Oooh, exciting. Thanks for your hard work.
>>15721 Bro your site is so unreadable, it's crazy, I don't know how you do it. Why aren't there just some simple instructions on how to download from the main sites--download a 4chan thread (keeping the file titles), 8chan, Danbooru, Safebooru, Gelbooru, Pixiv, a Sankaku Complex thread, an nhentai comic--like open menu > download > paste the address you want to download the content of > end of story. No, instead we have
>network I guess
>why not download?
>network > downloaders
>import downloaders (that's not it)
>export downloaders (that's not it)
>manage downloaders default options (that's not it)
>manage downloaders and url display (why repeat manage downloaders?) (that's not it) (it's just other options)
>Let's try import downloader
>Lain. Ok, cool?
>It asks for some pngs
>I'll fetch them I guess
>find them on Github
>they're not needed, they're default in Hydrus now
>If those PNGs were needed in previous versions, but aren't needed anymore, why isn't it stated in Hydrus, like "hey those PNGs are already available, but you may need some custom PNGs to download some sites <3"?
WHY. WHY IS YOUR SOFTWARE SUCH A MESS? WHY DOES DOWNLOADING A PAGE BECOME A FUCKING EXPLORATION RPG? WHY CAN'T WE JUST PASTE THE ADDRESS IN A DEFAULT WINDOW ALREADY PRESENT ON THE LEFT AND DOWNLOAD THE FUCKING PAGE?
>muh read the documentationnnnnnnnnnnnnn
I'm literally here:
>https://hydrusnetwork.github.io/hydrus/getting_started_downloading.html
All I see is bullshit
>blabla API
>a screenshot but you can't click on it
>you have to open it in another tab
>Once you've opened it in a new tab, the screenshot shows nothing
>just a bunch of images
>and?
>by reading carefully you can see, in small print, "gallery downloader"
>how do we make this option appear?
>we don't know
>we have to ask on 8chan.moe so that assholes (not you Hydradmin) tell us to read the unreadable documentation
Why can't we just paste a url and download its content?
Why isn't there on the site a big, visible DOCUMENTATION or MANUAL tab with the big steps explained? Like
>Customize your User Interface
>>To change the size of your window, do this
>>To put the tag managing window there, do this
>Download the content of a page
>>To download the content of a page, just paste your URL there and hit enter
WHY? WHY CAN'T WE DO THIS? Have you read Audacity's documentation?
>https://manual.audacityteam.org/
>Everything is clear, simple and readable
Tell me if you need help organizing the documentation and the interface of the site. It's absolutely not normal that we can't do such a simple action as downloading the pics of a thread in 10 sec. How do JDownloader and 4K Video Downloader work? You just paste your url, end of story, the download starts. With yt-dlp you hit add then start, and that's it. It's scandalous.
>>16195 Yeah, that sounds cool!
>>16196 LOL!! Need some help? Yeah, this program took me quite a while to figure out how to use, and there are still some dark recesses I have yet to explore. BUT IT WORKS!! Just ask for help here. Devanon is pretty nice, and answers questions on the weekends. Also, sites change, so you might have to find new downloaders, or like I do, tweak old ones to make them work again. Look up hydrus cuddlebear to find downloader scripts. >Page Do you mean an actual gallery page or a single pic? For gallery pages, you want the gallery downloader. Pick your tags, and your site, and set options. Then start downloading. For a single pic, you want the URL downloader. You can also put multiple URL addresses in at once to batch download. To get the downloader options up, hit F9. Under File, Options, you will find a LOT of options to set. Database and Network also have options under them, especially Network. You just have to explore. If you have problems, come back here and people will help you. Just ask.
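Adding to the above: if "just paste a URL and go" is really all you want, the Client API can do exactly that programmatically once you enable it (services->manage services) and generate an access key. A minimal sketch, assuming the default API port 45869 and a placeholder key (the /add_urls/add_url endpoint is from the Client API docs; this just builds the request so you can see the shape of it):

```python
import json
import urllib.request

API_BASE = "http://127.0.0.1:45869"   # default Client API address
ACCESS_KEY = "0123456789abcdef" * 4   # placeholder -- use your real key

def build_add_url_request(url_to_download: str) -> urllib.request.Request:
    """Build (but don't send) a POST to the Client API's /add_urls/add_url."""
    payload = json.dumps({"url": url_to_download}).encode("utf-8")
    return urllib.request.Request(
        API_BASE + "/add_urls/add_url",
        data=payload,
        headers={
            "Hydrus-Client-API-Access-Key": ACCESS_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (client running, API enabled):
# with urllib.request.urlopen(build_add_url_request("https://danbooru.donmai.us/posts/123")) as resp:
#     print(resp.read())
```

Hydrus will treat the URL the same as if you'd pasted it into the URL downloader page.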
>>16196 >>16198 Also, this is the manual you are looking for. It's under Help / Help and getting started guide. I think some stuff still isn't in it, you really have to explore and try out, especially since devanon is constantly upgrading and improving everything.
>>16154 Namespaces help control tag suggestion (related tags) and presentation (what tags appear on the thumbnail), but you cannot nest namespaces. Colons (:) in the middle of a tag can interfere with autocompletion, and you may need an extra action to search in all namespaces.
>>16198
>Do you mean an actual gallery page or a single pic? For gallery pages, you want the gallery downloader. Pick your tags, and your site, and set options. Then start downloading.
>For a single pic, you want the URL downloader. You can also put multiple URL addresses in at once to batch download.
I think the gallery downloader is for converting a search query into a URL using a gallery url generator. The resulting URL is still processed with a page parser and can be used with the URL downloader, but the gallery downloader is a specialized tracking UI.
>>16196
>Have you read Audacity's documentation?
<https://manual.audacityteam.org/man/credits.html
>The previous Manual for Audacity 1.2 was written by Tony Oetzmann, with major contributions by Dominic Mazzoni. The current Manual builds on that work and has significant contributions by the following:
>Gale Andrews - RIP
>Richard Ash
>David Bailes
>Christian Brochec
>Matt Brubeck
>John Colket
>James Crook
>Steve Daulton
>Scott Granneman
>Greg Kozikowski
>Leland Lucius
>Dominic Mazzoni
>Edgar Musgrove
>Tony Oetzmann
>Alexandre Prokoudine
>Peter Sampson
>Martyn Shaw
>Vidyashankar Vella
>Bill Wharrie
Nigger, this program is made by one guy, not a team.
>>16150 I think I agree. I'll add an 'empty page' bitmap button or something.

>>16152 Great, I am glad you are working again. I think a regular clean install is generally a good idea. I'm not totally sure what you mean by 'get that backup running', but no worries, you know your situation better than me. If you have a backup that has a bunch of spammy folders in it, then yeah, delete everything except the db folder, I think, and do a re-extract. If nothing else, keeping things clean is a good default philosophy. I personally like to run FreeFileSync, and then my backup is just a perfect weekly mirror of my main install.

>>16158 For a simple user-friendly solution, FreeFileSync has a Linux version, although I have no idea how good it is: https://freefilesync.org/

>>16162 I think I say it in the docs with a fuller explanation, but for my part, my general rules for not going crazy are: 1. Don't try to be perfect. 2. Only tag what you actually search for. Tags are for searching, not describing. IRL for me this mostly means I only manually add creator/series/character tags and a handful of specifics on my personal interests, like 'meme:ogey rrat'.

>>16164 >>16165 Damn, I'm sorry for the trouble. Sometimes sites just do this, and we don't have excellent solutions. Pixiv is double-difficult because they have multiple files per post, which causes additional uncertainty in our 'have we seen a file at this URL before?' checks (i.e. we may have seen one file, but we don't know if it is the one the downloader wants to get). If pixiv is completely redownloading, that suggests they have changed their actual raw file URLs.

In some tests here with some old files on my dev machine, it looks like not only have we gone from https://pixiv.net/member_illust.php?illust_id=XXX to https://www.pixiv.net/en/artworks/XXX (I think this happened a couple of years ago), there's also a CDN direct file URL, https://i.pximg.net/img-original/img/2015/11/24/04/58/13/XXX_p0.jpg , turning up that wasn't being recorded previously. That direct file URL looks fairly stable, and while they may have changed it, perhaps hydrus is at fault for not properly recording it previously. Maybe this situation will not be so bad in future, if they change URL format yet again, because the direct file URL will still be able to give a pre-download 'already in db' result. Unfortunately, Pixiv offers no hashes.

I'm sorry to say (and probably shouldn't, I think, given how bad my own code is!) that their tech has, for a long time, been old/private/weird. I was horrified recently to discover that their main file metadata API suddenly started giving different answers for the 'translated tag' section depending on your language request header, leaving that JSON row absent if you don't send one--giving different API responses to different callers is not typical or helpful. But in recent years they have been updating to modern, standard, phone-friendly tech, although that has its own pluses and minuses. I'm hoping their API will get some updates, but I also fear they'll roll out some OAuth garbage that blocks us off entirely. We'll see.

Anyway, I'm sorry for my part. I hope to add some better URL tech in future that will better navigate full renames (I have a similar issue with twitter.com to x.com URLs I want to solve). All I can say for now is to be careful, if you can, with super large download pages on content that is mostly old. But this is also, a little, the cost of doing business. Shouldn't be so bad in future, but let me know how you get on.
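For anyone scripting around this outside hydrus, the old-to-new pixiv rename is mechanical enough to sketch. This is purely illustrative normalization using the two URL shapes mentioned above--it is not what hydrus itself does internally:

```python
import re

# pre-rename pixiv post URL: .../member_illust.php?...&illust_id=NNN
OLD_PIXIV = re.compile(
    r"https?://(?:www\.)?pixiv\.net/member_illust\.php\?(?:[^#]*&)?illust_id=(\d+)"
)

def normalize_pixiv_url(url: str) -> str:
    """Rewrite the old pixiv post URL format to the current /en/artworks/ one."""
    m = OLD_PIXIV.match(url)
    if m:
        return f"https://www.pixiv.net/en/artworks/{m.group(1)}"
    return url  # already new-style, or not a pixiv post URL at all
```

This kind of mapping is basically what better URL-rename tech would have to do, for pixiv and the twitter.com/x.com case alike.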
>>16173 Double-check with database->db maintenance->review deferred delete table data. If that is empty and you have vacuumed, then I think you are currently as small as I can currently get you. I believe this means you should have fairly small client.db, client.caches.db, and client.mappings.db, but upwards of a 15GB client.master.db. I am very slowly developing recycle tech on the whole database that will be able to recover orphaned definitions, which will be able to cut the master back down too.

>>16179 Ah, looks like the deferred delete might have still been doing stuff in the background.

>>16189 Thanks, is this with the default hydrus downloaders, and is there an imageboard that does this a lot, like everything has <br> or <p> or something? Can you point me to an example (longer-lived) thread?

>>16180 >>16190 >>16191 Yeah, if you sync with the PTR, your mappings will blow up to about 65GB and your master to 15GB, although my numbers may be a bit off there. Your client.caches.db will get pretty chunky depending on how many files you have in your actual client (it cross-references the new tag info). The PTR has like 2.3 billion mappings or something now, which your client will download and process into database tables and indices. The ~40 bytes per row is a pretty cool ratio overall, but it obviously adds up. I generally think the PTR is worth it if you have a decent computer and an SSD and hundreds of thousands or millions of files, along with a general tolerance for messy tags. If you want a curated db or need to preserve system resources one way or another, the PTR is not great!

>>16196 Sorry it was so frustrating. I'm an unusual guy doing an unusual thing, and a lot of my work is unpolished and not user friendly. Most of hydrus is a sprawling mess of ideas rather than a tight product. For some users it all clicks, including the documentation, and some others bounce off completely. I am ok with the idea that hydrus is not for everyone, and there are some other programs, like Tag Studio https://github.com/TagStudioDev/TagStudio , that particularly focus on being prettier than hydrus, which you may wish to check out. But if you would still like to stick with hydrus, I would be interested in hearing more about how you came to use it, so I can make the onboarding and help documentation better for the next person in your shoes. Keeping the help updated and useful for new users is always a battle.

How did you find out about hydrus? Were you recommended it in a thread, etc.? What do you think hydrus does, generally speaking? Why do you want to use it? If you were mostly interested in the downloader, did you jump, in the help, straight to the downloading page? Have you tried, say, importing 100 files to the program and exploring how the search works?

I generally see hydrus as a file management program before being a downloader, and since the program is complicated, I generally recommend people take it all slow. While the downloader is important, I generally suggest that people see if they enjoy the general workflows of managing files before they commit too hard to the program. I suggest, I think in the help, just importing 100 or 1,000 files when you are starting out, and learning the basics of archiving and searching things like system:filesize. Once users get a feel for managing files and simple local tag editing, then I think it is a good time to explore things like downloaders and subscriptions.

If you jumped to downloading first because that was what you were interested in most of all, it sounds like I should add more to that help page about the basics of opening pages, things like the F9 menu. And perhaps I can add a warning at the top of the page to say 'hey, if you skipped to this page, please go back to getting started with files'. By your feedback, I should add more concrete examples of downloading things--sounds good.
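Back-of-envelope on those PTR numbers above, for anyone weighing up a sync (this is my arithmetic on the rough figures quoted, not official numbers):

```python
def ptr_mappings_estimate(rows: float = 2.3e9, bytes_per_row: float = 40) -> float:
    """Rough on-disk size of PTR mapping rows (tables + indices), in GB."""
    return rows * bytes_per_row / 1e9

# 2.3 billion mappings at ~40 bytes/row works out to roughly 92 GB total,
# which is in the same ballpark as the ~65 GB mappings file plus the
# master/caches growth described above.
```

So budget on the order of 100 GB of SSD for a full PTR sync before deciding.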
(66.02 KB 680x680 fucking normies - reeee.jpg)

>>16196
>entitled brat has a meltdown
>really thinks Customer Service owes him something
Ahem. Dude, did you know this circus is run by a single anon, for free?
(70.76 KB 1024x741 sdbvkblz.jpg)

>>16196 Cringe.
>>16205
>>>16189
>Thanks, is this with the default hydrus downloaders, and is there an imageboard that does this a lot, like everything have <br> or <p> or something? Can you point me to an example (longer-lived) thread?
My customized 4chan downloader does it with apostrophes and ampersands in titles, at least--not ">". https://boards.4chan.org/mlp/thread/41431332 not on desuarchive
(518.25 KB 1240x1428 t u.png)

>>16205
>I generally see hydrus as a file management program before being a downloader,
This. I use Hydrus exclusively off-line, and it is a blessed piece of software. Thanks.
>>16194 >>16201 >>16204 >Tags are for searching, not describing Perfect. Somehow along the way I got lost in the mindset of getting every single descriptor tag down, but it doesn't make any sense in this context. Ty!
>>16210 Yeah, tags are definitely far more for the search. You don't really care usually if some pic has a black bowtie. But you might be searching for a pic that you remember the character was wearing a black bowtie and holding a chalice. And when you have over 5 million pics... This is a program for organization autists. If you want to just download a pic, I recommend something like gallery-dl.
>>16210 >>16211 And the other thing I use this program for is mass importing of files for tagging (I use an auto-tagger), and mass downloading from boorus with tags. And after I'm done, everything is all nicely organized in its own viewer. Very Nice!!
>>16204 > FreeFileSync has a Linux version, although I have no idea how good it is as far as i can tell its as good as the windows version, and donation edition transfers. my only complaint is that it doesnt make a sound when its done anymore, but i may have missed that option >>16205 > I am very slowly developing recycle tech on the whole database that will be able to recover orphaned definitions, which will be able to cut the master back down too. awesome, thanks! I've been using hydrus for something like 4 years now and I've been lugging around ghost ptr bloat for most of them, so I can wait until then to get my 10gb or whatever back. thank you for your hard work as always
I can never find a straight answer for this. Does Hydrus support ugoira from Pixiv or not? I see it listed as supported, but whenever I try to download ugoira off Pixiv, my gallery downloader says "vetoed" and skips it. Pixiv is such a hellish site to scrape from; it can make any scraper software or script a complete mess to work with.
>>16214 Ugoiras aren't supported, but Hydownloader and gallery-dl can convert them into mp4s, which hydrus can use. I think ugoira support is in the cards, but it's not in hydrus yet.
>>16170 >590 "share" in the context menu is missing "export".
>>16170 Maybe "files" should be called "file service". It is no more about files than "1 png, 127 KB", "open" or "share".
>>16133 ty >>16146 i would really love the tooltip option, i prefer having ISO but having both is very nice
I had a mixed week, but I got a couple of neat things done. I fixed a handful of bugs, wrote a 'tag actually exists here' filter for tag siblings/parents export, and added local file/thumbnail path fetching to the Client API. The release should be as normal tomorrow. >>16216 Thank you, fixed for tomorrow!
(74.35 KB 1280x720 489526.jpg)

>>16196 >typical modern faget blaming others for his own shortcomings >millions of them everywhere expecting the world to cater their whims Positively this is the result of a home without a father. You have to go back to Reddit.
How do I lock tags for a file once I'm sure it's definitive so that new imports from boorus don't fuck with it? Is creating a separate tag service the only solution?
>>16222 Afaik you go to network -> downloaders -> manage default import options. Then either you set one of the two defaults for watchable/post url on top right of the window, or you double click on an entry in the list and set custom settings for this entry alone, let's say "danbooru file page". There you activate or deactivate the checkboxes that you need like in the image of following post: >>15842
https://www.youtube.com/watch?v=1ySAkB-H33E

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v591/Hydrus.Network.591.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v591/Hydrus.Network.591.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v591/Hydrus.Network.591.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v591/Hydrus.Network.591.-.Linux.-.Executable.tar.zst

I had a bit of a mixed week, but I got a couple of neat things done.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

I fixed a stupid mistake in last week's 'move left/right after page close' thing, which was doing it even if the page being closed was not the current focus! Also fixed is the share->export menu not showing if you only right-clicked one file.

The tags->migrate tags dialog can now filter parent and sibling tags based on tag count. You can say 'only include pairs where the A/B actually exists on service x', where for siblings that B is the ideal tag at the end of the chain. I hope this makes it easier to filter giant multi-hundred-thousand pools of pairs, like from the PTR, down to only what matters for your 'my tags' etc..

The Client API has a new simple permissions mode for access keys, a convenience state just called 'permits everything', which does exactly what you think and is intended to be an easy catch-all for people who just want to turn on future-proofed access for an app they trust. On update, any access key that looks like it was told to permit everything in the old system will be updated to this mode (you will get a popup saying this happened, too). If you do need finer control, you can still tweak individual Client API permissions under services->review services.

Relatedly, the Client API gets a new permission this week, 'see local files', and commands to fetch local file and thumbnail paths. If you want to access files directly on the local machine for metadata analysis or whatever, this is what you want!

next week

I'm sorry to say I have been unproductive recently, so I'll keep it simple. I think I want to try some duplicate auto-resolution database stuff. I still need to think about a couple of things, so I'll play around a bit and see what makes the most sense. Fingers crossed, I'll have the skeleton of the maintenance daemon sketched out.
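For API devs, a minimal sketch of the new local path fetching. This assumes the endpoint is /get_files/file_path with a 'hash' parameter, per the Client API docs for this release--double-check the docs for exact names and the response format, and note the access key here is a placeholder:

```python
import urllib.parse
import urllib.request

API_BASE = "http://127.0.0.1:45869"   # default Client API address
ACCESS_KEY = "0123456789abcdef" * 4   # placeholder -- use your real key

def build_file_path_request(sha256_hash: str) -> urllib.request.Request:
    """Build a GET asking for the local filesystem path of a file by sha256."""
    query = urllib.parse.urlencode({"hash": sha256_hash})
    return urllib.request.Request(
        f"{API_BASE}/get_files/file_path?{query}",
        headers={"Hydrus-Client-API-Access-Key": ACCESS_KEY},
    )

# With the client running and the new 'see local files' permission granted:
# with urllib.request.urlopen(build_file_path_request(some_hash)) as resp:
#     print(resp.read())  # JSON containing the local path
```

Remember the returned path is only useful to a process on the same machine as the client.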
>>16221 >a home without a father Is that the meme of the week or something?
>>16225 "fatherless behavior" has been a normalfag meme for over a year or more now. And also a minor nig meme. This is just a variant.
(368.08 KB 1200x1200 POS.jpg)

>>16225 Do not take it lightly, single and divorced mothers are a civilizational problem and revoking the emancipation of women is long due.
>>16224 Wow! I've been waiting for that relationship filter feature for over 3 years! I didn't expect it to just be added out of nowhere--thanks a bunch! If I could ask for a small (I assume it's small) option though, I'd like it if the "only" checkboxes could have an "OR" option or something like that. Basically, what I mean is that I'd like the siblings or parents to be added if either one side or the other has counts, rather than it needing to be both when I check both boxes. Or is that already how it works? I just wanna make sure before I pull the trigger. But anyway, I'm really glad this is finally here!
>>16224 I appreciate the "for convenience, I moved stuff around in the access permissions" on upgrade to 591, both the action and the warning so I'm aware without reading the full changelog. Thanks for treating us so well!
>>16228
>Do not take it lightly, single and divorced mothers are a civilizational problem
Who is picrel for?
>In all OECD countries, most single-parent households were headed by a mother. The proportion headed by a father varied between 9% and 25%. It was lowest in Estonia (9%), Costa Rica (10%), Cyprus (10%), Japan (10%), Ireland (10%) and the United Kingdom (12%), while it was highest in Norway (22%), Spain (23%), Sweden (24%), Romania (25%) and the United States (25%). These numbers were not provided for Canada, Australia or New Zealand.
https://www.oecd.org/content/dam/oecd/en/data/datasets/family-database/sf_1_1_family_size_and_composition.pdf
>and revoking the emancipation of women is long due.
In Russia, fatherless families are more likely to be ruled by the mother's parents, and old women are often sadomasochistic slavers who love strong men, illusory rules and hierarchy, and know that they will not be imprisoned or sent to war. If you are talking about the right to vote, guess what.
>>15916
>>>15901
>>The derpibooru downloader downloads descriptions without the links in them. :(
>I will check it out. I don't know how this thing works, but if it is just pulling the visible 'text' of the html, and the URL you want is in <a href="xxx">, it may be tricky to get that in a neat way. The hydrus note tech is only plaintext for now, so no proper rich text or links or anything yet.
The example urls in the "default" one have no interesting descriptions. Here is one with a link and a quote (the ">" becomes an entity): https://derpibooru.org/images/2680312
The link is preserved. The quote is not. I may have been looking at a description fetched from a mirror that had lost the link.
If there is still a reason, there is a hidden form that contains a textarea with the Markdown source of the description.
>>16236 >with the Markdown source of the description. with entities for ">", at least
>>16236 >The link is preserved. The quote is not. Sorry, I meant the leading ">".
Tangentially related to Hydrus, but Mozilla recently announced renewed interest in adding JPEG XL (JXL) support to Firefox, thanks to a new Rust-based library for it that a Google research team is developing for them (crazy, I know). Also, the JXL website got an overhaul and looks really nice, even including a direct jab at AVIF.
>>16236 Sorry for rushing and not reading or thinking. It is indeed about things linkified with href. Another issue is that images included in the description leave no useful trace. An image included from any site directly like in https://derpibooru.org/images/3448782 is ignored completely. An image included from derpibooru itself like in https://derpibooru.org/images/3450178 is replaced by "This image is blocked by your current filter - click here to display it anyway your current filter.", although there is no filter that should block it.
>>16240 >An image included from any site directly like in https://derpibooru.org/images/3448782 is ignored completely. and note that the image is included using derpibooru's proxy, and original url appears in the invisible Markdown, but not in the HTML.
>>16239 >google developing a rust library for mozilla to use >same google that has been stonewalling jxl >same google that made jxl ???? I know google is big and all but I would have thought that the teams would at least appear to be in agreement in public.
>>16242
This is how I understand the situation:

The "research" team is very much a fan of JXL and wants to see it adopted. The chrome team doesn't want JXL because (I think) they don't want the maintenance burden and don't care about the benefits of a new format. The problem is that the chrome team has much more clout with the heads of Google than the research team does, so they usually get their way, and they did with Chrome removing JXL support.

The research team still believed in JXL even after Google as a whole was no longer backing it, so they asked if they could write a new library for JXL in a memory-safe language (Rust) to hopefully reduce the maintenance burden and security risks of supporting a new image format. The Google heads said that they could, but that Chrome still won't support it even if they do, and they're only allowed to work on it if some other meaningfully significant project would use the library.

The team knew that Mozilla was somewhat interested in adding JXL support to Firefox, but that Mozilla was apprehensive about fully committing to it, because of the security risks of merging a large library written in a memory-unsafe language (C++) into their flagship browser's stable release. So the research team contacted Mozilla to see if they were interested in Firefox being the project that the research team needed to get the go-ahead from their higher-ups, and Mozilla agreed. So here we are.

source: a bunch of reading on different websites about the topic that I didn't save the sources for because I'm not a journalist. I wish I did save the sources though. It's an interesting situation.
>>16241 It is very simple and the ">" appears as ">". [30, 7, ["description from Markdown", 18, [27, 7, [[26, 3, [[2, [62, 3, [0, "textarea", {"id": "description"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 1, "href", [84, 1, [26, 3, [[2, [51, 1, [2, "^((?!No description provided\\.).)*$", null, null, "pony"]]]]]]]], "description-md@derpibooru"]]
>>16245 ponerpics uses a different syntax https://ponerpics.org/images/7063333?q=blood+cell the twibooru page does not have any source code in it https://twibooru.org/3344908
>>16243 Appreciate the run down, I was wondering what all the google drama about JXL was. -t not >>16242
>>16242 >firefox deving >google salary Living the (professional browser dev) dream.
>>16249 >google salary That is the TOTAL dream.
>>16208
Thanks, this makes sense. Rather than trying to fix each watcher parser, I should just hardcode a rule for subject parsing to clean up any html garbage.

>>16214
>>16215
The straight answer is: not really, but maybe in the future. The technical problem is that Ugoira is, fundamentally, a list of pngs/jpegs (often stored in a zip) and then some frame timing information (either transmitted as JSON or hardcoded into a javascript viewer). I forget the details of Pixiv specifically, but I think they'll let you download the raw zip, but then we'd need to figure out some hardcoded bullshit where we download the JSON/javascript and convert to a .json that we then embed in the zip. It isn't impossible, but it is a pain in the ass. A user was looking into it, since I was taking so long to get to it, and his feedback was not overly enthusiastic. iirc there seem to be even more pains in the ass on the Pixiv side; it really is an unusual site.

Also, to be straight, hydrus does recognise an ugoira (it is under the 'animations' filetype), and it'll give you a thumbnail, but it won't render yet, and we don't have json timing parsing yet (the json-inside-the-zip part is unofficial, not part of the ugoira standard, and though I have seen it done elsewhere, we'd ultimately be inventing our own proprietary standard here). The danbooru-style 'use a hardcoded script to convert it to an mp4' is not a bad solution, and I see why they go for it.

>>16217
Good idea, I'll have a think about this. I have also been using 'locations' as the word in code more often.

>>16229
Does running the job twice, once with the first guy checked, and then with the second one checked, do an OR for your case, or is your workflow a bit more complicated and it needs to be done at the same time? You are right, if you check both, I'm pretty sure it does 'gotta be count on both sides'.
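The 'json-inside-the-zip' idea discussed above is simple to sketch in Python. To be clear, the `animation.json` name and the `frames`/`file`/`delay` keys here are illustrative inventions, not any official ugoira standard:

```python
# Hedged sketch: read per-frame timing from an ugoira-style zip that
# carries an embedded (unofficial, hypothetical) animation.json.
import io
import json
import zipfile

def read_ugoira_timing(zip_bytes: bytes):
    """Return (frame filename, delay in ms) pairs from an ugoira-style zip."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as z:
        # The timing metadata rides alongside the frame images in the zip.
        meta = json.loads(z.read("animation.json"))
        return [(frame["file"], frame["delay"]) for frame in meta["frames"]]
```

A renderer would then open each named frame from the same zip and hold it on screen for its delay, which is essentially what Pixiv's javascript viewer does with its separately-transmitted timing JSON.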
>>16254
>Does running the job twice, once with the first guy checked, and then with the second one checked, do an OR for your case
Well, I think not, because if you run it on a tag with (for example) checking old siblings for counts, and an old sibling doesn't have counts, then that relationship is discarded, so when you check it the second time for new sibling counts, that relationship won't be there for Hydrus to say "oh the new sibling has counts, so let's keep this one", right? I'm not sure if I understand what you mean though.

My workflow is just that I used to sync with the ptr and I want all of the relationships that are relevant to my local files to be kept and the rest discarded. By "relevant" in this case, I mean if any of the tags on either side of a relationship exist in files I have, then it should be kept, but if the entire relationship is only tags that don't exist for me, then it should be discarded. Does that make sense?

Actually, now that I think about it, are you asking if doing a migration from the same archive twice would work, once with the checkbox checked for one side, and once with the checkbox checked for the other, but both times on all tag relationships in the archive? If so then I think that would work, as long as it'll migrate the relationship regardless of which side has the counts. I'm not sure if a double migration is safe though, and I don't wanna do something that could clobber my db, especially since if it does clobber it, it'll be the relationships, which is something that I might not notice for a while. But since my case is simple (I think it is anyway) maybe it's safe. I'm just worried about a subtle breakage.
>>16236
>>16240
Thanks, I understand better now. I think I get slightly different parsing results to you; perhaps that is due to your using a real login and me being a guest? Maybe your user account has some note display options or something. picrel is what I get for all three files.

I see the markdown in the hidden form you are talking about. I am not sure if a typical/default user wants the markdown stuff, since having just the URL may be nicer, in raw text form, than "[https://www.kickstarter.com/projects/partylikeanartist/my-little-pony-fan-made-charity-coloring-book](https://www.kickstarter.com/projects/partylikeanartist/my-little-pony-fan-made-charity-coloring-book)", but I can see that the current parsing does miss embedded image URLs and stuff. Maybe I should add a second note parser, or maybe I should make a second derpi parser that grabs raw markdown notes. I think probably the former, even if it is spammy. Spammy notes are a larger problem than this that I need to come up with a nice solution for eventually. Looks like maybe I should just fold this into the default?: >>16245

Let me know what you think I should do here!

>>16239
>>16242
>>16243
(Thank you!)

This makes me feel good/hopeful. JpegXL is my personal favourite of the new standards, and I hate how it has been sidelined. If we can get actual support in a big browser, then that's a potential first domino to everyone else following, just like webm was on /gif/ in the old days. I'm sure there are still many ways this could be scuttled, but let's keep our eyes on it. As soon as JpegXL support becomes real for PIL (e.g. https://pypi.org/project/jxlpy/ becomes more real and/or is folded into the main branch), I'll add it to hydrus in a week.

Thinking about it more, I can better appreciate google's reluctance to add a new attack surface by integrating this. I'm sure the recent webp stuff has only made them more frightened. BUT, I'll say, you don't build a ship to keep it in port--we need new image formats, so it sounds like we need a group of really clever guys to figure out the problem as best they can. I wonder if google knows anyone like that. Good on the research team, if so, for continuing to push.
>>16255 Thanks, your workflow makes sense. Yeah, I was thinking in terms of copying to a new service. I will add a checkbox for OR!
>>16257 great! thank you!
>>16256 >>>16236 >>>16240 > Thanks, I understand better now. I think I get slightly different parsing results to you; perhaps that is due to your using a real login and me being a guest? Maybe your user account has some note display options or something. picrel is what I get for all three files. Maybe I meant that the blockquote stops being a blockquote and is turned into a paragraph. Everything else seems like what I'd seen earlier. >I see the markdown in the hidden form you are talking about. I am not sure if a typical/default user wants the markdown stuff, since having just the URL may be nicer, in raw text form, than "[https://www.kickstarter.com/projects/partylikeanartist/my-little-pony-fan-made-charity-coloring-book](https://www.kickstarter.com/projects/partylikeanartist/my-little-pony-fan-made-charity-coloring-book)", That's either how the user wrote it, or leftovers from some old bug. Normally Markdown links look like <http://example.com/> or [example site](http://example.com) >I think probably the former, even if it is spammy. Spammy notes is a larger problem than this that I need to come up with a nice solution for otherwise. Maybe if middle-clicking a note's title asked if you want to delete it? >Looks like maybe I should just fold this into the default?: >>16245 That seems to be working well. Consider the note name though: the window is small. The default ponerpics parser does not get a source or description.
>>16257 The confirmation window and progress popup do not mention the new filter.
>>16259 Thanks, I will roll that extra note parser into the defaults. There is 'note import options' that let users control which notes are parsed, but tbh that panel is overengineered and pretty garbage from a user-friendliness perspective, so that's another thing to think about overhauling here. I'm afraid I don't think I have a ponerpics downloader in the defaults, so did that come from the repo at some point? https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders although I don't see it there--unless ponerpics is a synonym for ponybooru or something? Is it this one: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/blob/master/Downloaders/ponibooru-tag-search-2018.11.29.png ? I'm afraid I just don't know much about this community--if you know what to do, please feel free to update that downloader and submit a pull on the repo to update it there. >>16260 Damn, thanks. I forgot to write the text for them!
>>16261 Then I probably made the ponerpics downloader myself and forgot and made a "customized" copy.
>>16262 It's https://ponerpics.org and may be parseable by derpibooru parser.
(199.73 KB 1881x457 derpi-vs-ponerpics-parser.png)

>>16261 Here is the difference between my customized derpibooru parser and the ponerpics parser.
>>16147 >If you have the page in a session backup (pages->sessions->append session backup): Session saving seems to be undocumented. Why can only pages of pages be saved?
>>16121 >is to have a new 'metadata conditional' object that allows you to deeply customise comparison scoring That's great. I have some gif files that may seem a little sharper than webm files, but their larger size is worse and not better contrary to what Hydrus thinks. Also, it doesn't know that psd might contain more information than the corresponding png.
>>16266 >Also, it doesn't know that psd might contain more information than the corresponding png. They are related alternates, and the UI says they are pixel-for-pixel duplicates, the file format line looks unimportant.
I think i got Hydrus mostly working, but i cant download 8chan threads at all. Like 4chan via url download works fine but any 8chan urls nothing downloads... error log says 'Looks like html' Help anyone?
>>16268 Did you import your cookies?
>>16269 No. I had no idea that was needed. Is that explained somewhere in documentation?
>>16269 Found out the explanation in the docs. Exported all my cookies as the netscape txt file, imported the whole bunch suscefully... ...and 8chan still doesnt download
>>16270 not fully, but just as an aside here https://hydrusnetwork.github.io/hydrus/getting_started_downloading.html#logins It's needed because of this site's blocker warning page that pops up.
I had a great week. There's new parsing tech, a bit more on local file locations in the Client API, and some UI quality of life work. The release should be as normal tomorrow.
If I auto-tag an image using deep danbooru (the web UI), is there a quick way to select all tags and paste them into my hydrus tag window?
https://www.youtube.com/watch?v=ldrcOdS8oAU

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v592/Hydrus.Network.592.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v592/Hydrus.Network.592.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v592/Hydrus.Network.592.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v592/Hydrus.Network.592.-.Linux.-.Executable.tar.zst

I had a great week. There's a mix of all sorts of work.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

The tag autocomplete widgets on search pages now have a one-click 'clear page' button. Also, tag autocompletes will handle ctrl+c/ctrl+insert a little more intelligently, copying your typed text if any is selected and otherwise copying from the results list below.

Last week's tags->migrate tags pair filtering gets another checkbox, for 'A or B'.

The parsing system has a new formula type, NESTED, which holds two sub-formulae, passing the results from the first to the second. If you need to parse some HTML inside some JSON, or vice versa, I hope it is now easy! I renamed COMPOUND formulae to ZIPPER and updated a little of the help around here. String Processors also now all have copy/paste buttons, for easy migration.

The Client API gets another 'local path' command to fetch all the local file storage locations. If you need to hit up the whole filesystem for mass conversion or duplicate checks or whatever, you can now do it without the overhead of the individual file fetches.

I updated some library versions in the build and advanced venv scripts. If you run from source, I believe the program will no longer run on Python 3.7 (although I wouldn't be surprised if that were already true). As I understand, Win 7 can only run Python 3.8 at the latest, so we may be running out of track on that front.

If you are an advanced source user and want to help me with a simple 'future' test, please check out the changelog.

next week

I didn't get to working on duplicate auto-resolution, so I'll try that again.
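The NESTED formula idea (one formula's output feeding another) can be illustrated in plain Python for the 'HTML inside JSON' case. This is only a conceptual sketch, not hydrus's actual formula objects; the key name `description` is a made-up example:

```python
# Hedged illustration of a NESTED-style parse: step 1 pulls a string
# out of JSON, step 2 parses that string as HTML and collects hrefs.
import json
from html.parser import HTMLParser

class LinkGrabber(HTMLParser):
    # Second "formula": walk the HTML and collect every <a href="...">.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

def links_from_json_html(payload: str, key: str):
    html_fragment = json.loads(payload)[key]  # first "formula": JSON lookup
    grabber = LinkGrabber()
    grabber.feed(html_fragment)               # hand the result to the HTML step
    return grabber.links
```

The same chaining works in the other direction too (HTML first, then JSON), which is the vice-versa case the changelog mentions.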
Is it possible to add a "-futanari" to every subscription?
>>16280
>The parsing system has a new formula type, NESTED
This is another feature I've been really waiting for! Amazing! I just updated to try it out and make that change with the downloader that I've been wanting to do, and it worked great. It did exactly what I hoped it would and now the notes are nice and clean! You're on a roll recently!

I see that you added the "or" option for migrating tags. I haven't used it yet, but it sounds like that's what I was hoping for. Thanks!
I notice an inconsistency between tag searching in "manage tags" dialogs and search pages. I have a tag (A) that's siblinged to another tag (B), but a tag that starts with the same characters as all of the characters in tag A (C) is siblinged to another tag (D).

In the manage tags dialog, if I type the full tag A, the result tag (the one suggested and highlighted that gets added when you press "enter") is tag B, as I'd expect. But for some reason on file pages, when I type the entire tag A, it instead redirects to tag D. Tag D has more counts in my files than tag B, so I'm guessing that because tag C starts with all the same characters that I typed in, it goes to tag D instead of tag B, because tag D is more popular, but this is confusing and unintuitive. I expect that if I type a full tag, I'll get the sibling of that tag, regardless of other tags that start with the same characters.

Is this inconsistency between "manage tags" and file searching intended behavior? If so, is there an option somewhere to make file searching act the same as "manage tags"? If there is, I can't find it. Both the "manage tags" dialog and file search have "all known tags" as the file domain, so the inconsistency isn't caused by differing domains.
>>16281 Just put it in your blacklist.
Hello! In the process of trying to solve an annoying issue, I seem to have inadvertently caused a bigger one.

I have been, on-and-off over the past few weeks, finding broken files whose thumbnails work, but which, when I try to full-view them, give me a "this file cannot be displayed" error. I thought to correct it by telling Hydrus to do a sweep through the whole archive and remove and, if possible, re-download any files found to be missing or incorrect. I imagine these broken files result from a faulty drive issue I had many months back.

The job seems to have mostly succeeded, with the relevant files being effectively re-downloaded and functional in the file viewer, but I found that in a few particular cases, namely pages from a sadpanda artbook, I now had two files: the old, broken, thumbnail-only file with all the regular tags, as well as a new, seemingly healthy file which nevertheless only had a page number tag. However, since the number of such cases was small, 3 or so, I was able to just manually retag them with the original broken file's tags.

The current issue I seem to run into, however, is that upon attempting to run a duplicate check after the fact, I am now being hit with a dozen or so file missing exceptions, and the duplicate view itself seems to be not terribly happy with me. These file missing exceptions continue to crop up as we speak. How should I proceed from here?
>>16289 Not really an answer to your question but I would check your hard drive immediately. Also check if the wires are properly seated.
I would appreciate it if OR predicates on search pages were displayed differently. Currently they're displayed as

tag 1 OR tag 2 OR tag 3 OR tag 4 OR tag 5

But the problem with it is that predicates any larger than 2 or 3 tags will simply be cut off, and you won't be able to see what's inside without opening the edit window. I think that larger OR predicates would be a lot more readable if they were instead formatted similarly to normal AND predicates, like this

OR:
 tag 1
 tag 2
 tag 3
 tag 4
 tag 5

They're simply indented slightly under the OR as a block. This way, all of the tags are readable. As a bonus, it'd probably be easy to remove tags from the OR by double clicking them or pressing "enter" just like with ordinary tags in the search. At the very least, I'd like an option to display ORs this way. It'd make searching easier for me.
As a downloader, how is Hydrus on privacy? For example, if I were to create an account on a booru behind a VPN and using Mullvad Browser, no one would know who I was. But if I were to log into it with Hydrus and download files afterward, how fingerprintable would this be?
>>16292 As long as you have your VPN on, it would look the same as if you were manually mass downloading, I assume. Keep in mind Hydrus subscriptions are automatic, so if you use those, they'll keep going if you turn off your VPN or change locations.
>>16265
Yeah, I'm sorry, this has always been a part of the program that I hacked together, and once I got 'last session' working well, I never really developed other session saving into a nice system. It can do some things, but it is generally a mess with no auto-saving support, and trying to save/restore in-progress downloaders is just a pain. I built it around page of pages to try and keep things in silos. In secret, the whole notebook of pages is actually one big page of pages internally, just with the top tab hidden, so that's how the system sees the overall session too. I am not sure what to do. A good start would be just writing some documentation, like you say. Not many people use the session saving system, so if we get some more use into it, we can iterate on it better and smooth out some of these rough edges.

>>16266
Yeah, everyone has quite differing opinions and needs for duplicates based on their own collections, so my aim for automatic resolution and general 'here's the difference between A and B, I think you want to keep A by a score of 17' stuff is that we go much more user-customisable with some good defaults.

>>16268
Sorry about this. The click-through here got more strict in the past couple of months, and they laid it on their API access too. I figured out a fix when it first happened, but then they moved to dynamic cookie names, which hydrus's login system isn't clever enough to deal with. I guess they got some automatic DDoS or something and had to implement a cleverer block. Have to go with a browser-login-copy solution for now.

>>16279
I'm not familiar with deep danbooru to talk about that end, but for hydrus, you can just click the paste button in manage tags and any newline-separated tags will be added in an 'add only' way (i.e. they won't try to repeal any conflicting existing tags).
>>16281
>>16288
If you want this for all downloaders, hit up network->downloaders->manage default import options and change the 'tag import options' in your 'default for file posts' to blacklist futa. All downloaders set to use the default will use that set when they work. If you need it just for subscriptions, then go into manage subscriptions, select them all, and hit 'overwrite tag import options'.
>>16285
Thanks, this is interesting. I think you are correct that the search page is doing it by count, ultimately. The search page is in what I call the 'display' tag context, whereas the manage tags dialog is in the 'storage' context. Display doesn't show anything non-ideal but has to match sibling data from what you typed to the resultset, whereas storage shows everything and mostly just shows sibling data as a decorator. So I think the thing here is mapping your A and C to their B and D and then going:

'ok, of the stuff we typed in, does anything match it exactly?'
'no I don't see it'
'ok, of the stuff we typed in, what's the max count?'

And of course in the storage domain it actually would have the A itself and recognise it. I will see if I can wangle this to inspect the sibling matches when it does that first test.

>>16289
Ok, I think what happened is: the 'redownload any missing files' job did not know it was unsuccessful and so did not know to remove the record. If the known URL for a missing file now points to a new file, I guess it would import that as a new file and not fix the missing one. I've never encountered it before, but I presume sadpanda regenerated some jpegs, or the downloader you are using got a higher or lower quality version, or CloudFlare decided to give you an 'optimised' version of the jpeg, something like that, and so hydrus thinks the file is novel.

If there are only 3 (or a dozen) of these pairs, then first I think you should try to load them up, select them in pairs, and go right-click on the new file->manage->file relationships->set a relationship->set this file as better. This should transfer a bunch of metadata, like those tags, over, without needing to 'view' the missing file.

Then we still need to deal with the missing files. In general, hydrus expects to have all its files, so if there are missing files, then as you've seen the duplicates filter will just sperg out. Hit up the database->file maintenance dialog again and queue up a job for 'if file is missing, remove record (leave no delete record)'. This will clear them out and allow them to be re-imported if you happen to stumble into them by accident elsewhere one day. Let me know how it goes!

>>16291
Fantastic idea! I actually have most of the tech required for this lined up ready to work, I think, so I'll have a play with this.

>>16292
In general we are very barebones. The request headers from hydrus just say 'hey I am called hydrus', and if you want you can override that under network->data->manage http headers. There's no individualised region/language or voluntary fingerprint stuff, and if you ran that same client on a completely different computer I think the request would pretty much, outside of deep technical stuff like 'oh Linux prefers to use packet lengths of xxx', look exactly the same.

I do suck up and return any cookies the site wants to set, so normal tracking stuff like that that persists from session to session is repeated, although we don't hit all the advertising iframes that work on subdomains and things, nor do we run the local javascript that might tell advertisers stuff, and all my cookies are generally siloed on a per-domain basis just for technical reasons, so I imagine hydrus would look a bit weird/half not there, if someone were to examine the logs.

The biggest leak here would be the login itself. If you use your 'xXxsephirothxXx' login in hydrus, either with hydrus's internal login system or using something like Hydrus Companion to copy your cookies over, then that account would be accruing bandwidth and being logged as hitting URL x y and z. If you create the account using a VPN and secure browser and it is called '45145654eu624oe6u5o4e6' or something, then that throwaway account is doing your hydrus stuff and you're only leaking what your private browser leaked when the account was set up, and I think you are good. I recommend using hydrus with throwaway accounts exactly in this sort of way, since if hydrus is falsely seen as a full-site spider or you get profiled for using bandwidth in a weird way, it won't affect your real accounts.

Hydrus doesn't phone home for anything, and there's no central repository of hydrus installs and unique ids. I don't know who you are or how many clients you have or what they are doing. If someone was actively tracking and looked into it, I think they'd see 'ah, this "hydrus client" is back again, it tends to hit the first page of the "bikini babes" gallery search every four or five days and sometimes then hit the first three results, how interesting'.

If you want to check for yourself, try opening a random parser in downloaders->downloader components->manage parsers and then paste https://myhttpheader.com/ into the test url and hit 'fetch test data from url'. This is ugly to work with, but it is a simple way in the program to see what the program sees when it downloads something. That site tells you your http headers. Scroll down and compare to the same page in your browser and you should see what hydrus is sending.
Edited last time by hydrus_dev on 10/05/2024 (Sat) 21:38:40.
>>16295
>Hit up the database->file maintenance dialog again and queue up a job for 'if file is missing, remove record (leave no delete record)'. This will clear them out and allow them to be re-imported if you happen to stumble into them by accident elsewhere one day.
>Let me know how it goes!
I have done so, and the immediate result is that the duplicate finder appears to be working correctly again, thank you! The affected files did seem to all come from panda pages; assorted booru images that had been redownloaded all seemed to work as expected, with the appropriate tag applied, etc. Thank you!
>>16295
>Fantastic idea! I actually have most of the tech required for this lined up ready to work, I think, so I'll have a play with this.
I second this request, I wanted that too; it's a cool idea. I also would like to drop an idea and report a typo.

-The typo (see image) is in the "all known files with tags" location and PTR tag domain, when checking the hashes of the red hydrus thumbnails. Instead of sha512 it says md5 for a second time. The tooltip, though, is correct as you can see and says sha512. For local files it says sha512 correctly, also when it is a deleted file with a blurry thumbnail.

-Some idea: would it be possible to let a page of pages appear different from normal pages? My idea would be to brighten/darken a page of pages tab 10-20% or whatever you feel is good (like how the animation bar changes color at a certain percentage when you stop a video), so you can distinguish them instantly. If there is a page of pages inside a page of pages, just brighten that tab up again, and so on. Sometimes I'm like "oh this page is actually a page of pages, I forgot about that, let's see what's inside". I'm not sure how it would interfere with Styles, though. If you brighten it up automatically, I would assume it wouldn't interfere with Styles and they wouldn't need to be changed by the creators. You could also add a checkbox into the gui pages options to activate/deactivate this for people who want to decide for themselves.
>>16295 in "edit duplicate merge options", the window that opens upon double-clicking a line should start with the correct action pre-selected.
Can Hydrus update tags from a page without re-downloading the file?
>>16288 >>16294 When you are adding a blacklist, you are changing the tag settings, so make sure you choose where tags should go, or the downloaded files might not have tags put in the right services.
>>16305 If you give hydrus a URL that it already recognizes as corresponding to a particular file (such as a booru url) by default it won't redownload the file, only the webpage. For example: You give hydrus this url. https://gelbooru.com/index.php?page=post&s=view&id=10806428 It downloads this file. https://img3.gelbooru.com/images/c1/1a/c11ac0e6c0e626e1f0b6f6d9c920fa00.png Then, some time passes. You give hydrus this url again. https://gelbooru.com/index.php?page=post&s=view&id=10806428 Unless you change certain default download settings, it recognizes the url. It won't re-download the file, but it will update the tags.
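As an aside, the same 'give hydrus a URL' step can be driven from outside the client through the Client API. A minimal sketch, assuming a client with the API enabled on its default port; the access key is a placeholder, and the /add_urls/add_url endpoint name is per the Client API docs:

```python
# Hedged sketch: asking a local hydrus client (with the Client API turned on)
# to import a URL. The access key below is a placeholder you'd replace with
# one generated under services->review services.
import json

API = "http://127.0.0.1:45869"  # hydrus Client API default port

def build_add_url_request(url: str, access_key: str) -> dict:
    """Assemble the pieces of a POST /add_urls/add_url call."""
    return {
        "endpoint": API + "/add_urls/add_url",
        "headers": {
            "Hydrus-Client-API-Access-Key": access_key,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"url": url}),
    }

if __name__ == "__main__":
    import urllib.request
    parts = build_add_url_request(
        "https://gelbooru.com/index.php?page=post&s=view&id=10806428",
        "YOUR_ACCESS_KEY_HERE",
    )
    req = urllib.request.Request(
        parts["endpoint"], data=parts["body"].encode(), headers=parts["headers"]
    )
    # urllib.request.urlopen(req)  # only works against a running client
```

Re-sending an already-known URL this way should behave like the GUI case described above: the client recognises it and skips the file download.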
>>16308 I hadn't found a way to make it update the tags before I posted that (I tried "do not check"), but I just noticed two "force page fetch" checkboxes in import options -> tags:
I had a good week. I made some quality of life improvements to the tag autocomplete and fixed some bugs. The release should be as normal tomorrow.
What if you could get a tag list sorted by the total size of files in each tag?
https://www.youtube.com/watch?v=OPDSEcIbeEE windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v593/Hydrus.Network.593.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v593/Hydrus.Network.593.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v593/Hydrus.Network.593.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v593/Hydrus.Network.593.-.Linux.-.Executable.tar.zst I had a good week. There's some fixes and tag autocomplete quality of life. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights OR search predicates are now multiline, one sub-predicate per line. They look a lot better! They also have better 'copy text' support--they'll refer to and copy all the sub-predicates separately, and do subtag conversions correctly. Search pages' tag autocompletes will now better recognise if you type in a 'worse' sibling exactly. If you have 'lotr' siblinged to 'series:lord of the rings', typing 'lotr' will now promote the 'series:lord of the rings' tag to the top of the results, regardless of count. You might get a yes/no dialog on update regarding your Client API permissions. I screwed up the 'should I do it?' test in the 'permit everything' permissions update a couple weeks ago, so if you were affected, it will ask if you want to re-do the update with fixed logic. I fixed the 'source time' parsing for gelbooru and a bunch of gelbooru-engine downloaders. I'm not sure if something changed on their end or ours, but it should work better now. I had success working on the duplicate auto-resolution database module this week, making a skeleton and then fleshing it out a bit. I feel really good about it. I estimate it to be 25-33% complete while the object work I previously did is 50-75% and the UI, not yet started, 0%. 
So, there's a good bit to go before we are 1.0 here, but all the obvious technical questions are answered and I see the path now. next week I want to focus on my github issues bug reports.
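The 'worse sibling' promotion from the changelog can be pictured with a toy model. This is illustrative only, not the client's actual code; the function and data shapes are invented for the example:

```python
# Toy model of the autocomplete change: if the typed text is exactly a 'worse'
# sibling, float the ideal tag to the top regardless of its count.
# Everything here (names, data shapes) is invented for illustration.
def order_results(typed, results, siblings):
    """results: list of (tag, count); siblings: {worse_tag: ideal_tag}."""
    ideal = siblings.get(typed)
    ordered = sorted(results, key=lambda tag_count: tag_count[1], reverse=True)
    if ideal is not None:
        # stable sort: the ideal tag (key False) comes first, the rest keep count order
        ordered.sort(key=lambda tag_count: tag_count[0] != ideal)
    return ordered
```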
>>16312 >OR search predicates are now multiline, one sub-predicate per line. They look a lot better Noice. >Search pages' tag autocompletes will now better recognise if you type in a 'worse' sibling exactly. If you have 'lotr' siblinged to 'series:lord of the rings', typing 'lotr' will now promote the 'series:lord of the rings' tag to the top of the results, regardless of count. Noice.
>>16271 Figure this out? I've got the same problem.
(598.44 KB 1366x768 Screenshot_20241010_080928.png)

>>16312 >OR search predicates are now multiline, one sub-predicate per line. Awesome. Thank you!
Is there a way to exclude certain files from being checked by the duplicate filter, or to set custom settings on them? I've decided to import my games screenshot folders into Hydrus for easier perusal by game, and especially with some 2D ones (or game statistics screens), there's quite a lot of frivolous inter-duplicates. I can filter them out with -screenshot, of course, but I'd prefer it if the duplicate checker wouldn't check them at all.
Hi! I updated from 588 to 593 today and a couple of messages popped up, all containing the same information: v593, win32, frozen IndexError list index out of range Traceback (most recent call last): File "hydrus\core\HydrusPubSub.py", line 140, in Process callable( *args, **kwargs ) File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3677, in ProcessContentUpdatePackage File "hydrus\client\media\ClientMedia.py", line 697, in _GetNext def Clear( self ): ^^^^ File "hydrus\core\HydrusData.py", line 198, in __getitem__ def ConvertPrettyStringsToUglyNamespaces( pretty_strings ): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ IndexError: list index out of range My client might be a special case since I had a database error way back due to not fully syncing a backup. That made the client crash as soon as certain files were displayed. I deleted those and the errors (db malformed and such) never appeared again. Ever since, I kept updating whilst taking proper care of backups. The client, however, freezes many times a day at a size of 70k files.
I think I tried playing an audio, and there is also a 6000x6000 png with transparent background. v593, linux, source IndexError list index out of range Traceback (most recent call last): File "/hydrus-593/hydrus/core/HydrusPubSub.py", line 140, in Process callable( *args, **kwargs ) File "/hydrus-593/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3677, in ProcessContentUpdatePackage next_media = self._GetNext( self._current_media ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/hydrus-593/hydrus/client/media/ClientMedia.py", line 697, in _GetNext return self._sorted_media[ next_index ] ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File "/hydrus-593/hydrus/core/HydrusData.py", line 198, in __getitem__ return self._list.__getitem__( value ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ IndexError: list index out of range v593, linux, source IndexError list index out of range File "/hydrus-593/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3538, in _PrefetchNeighbours next = self._GetNext( next ) ^^^^^^^^^^^^^^^^^^^^^ File "/hydrus-593/hydrus/client/media/ClientMedia.py", line 697, in _GetNext return self._sorted_media[ next_index ] ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File "/hydrus-593/hydrus/core/HydrusData.py", line 198, in __getitem__ return self._list.__getitem__( value ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
can't remove the 6000x6000 png with Ctrl-r v593, linux, source IndexError list index out of range Traceback (most recent call last): File "/hydrus-593/hydrus/client/gui/ClientGUIShortcuts.py", line 1503, in eventFilter shortcut_processed = self._ProcessShortcut( shortcut ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/hydrus-593/hydrus/client/gui/ClientGUIShortcuts.py", line 1441, in _ProcessShortcut command_processed = self._parent.ProcessApplicationCommand( command ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/hydrus-593/hydrus/client/gui/pages/ClientGUIMediaResultsPanelThumbnails.py", line 1139, in ProcessApplicationCommand return super().ProcessApplicationCommand( command ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/hydrus-593/hydrus/client/gui/pages/ClientGUIMediaResultsPanel.py", line 2243, in ProcessApplicationCommand self._Remove( ClientMediaFileFilter.FileFilter( ClientMediaFileFilter.FILE_FILTER_SELECTED ) ) File "/hydrus-593/hydrus/client/gui/pages/ClientGUIMediaResultsPanel.py", line 1157, in _Remove self._RemoveMediaByHashes( hashes ) File "/hydrus-593/hydrus/client/media/ClientMedia.py", line 790, in _RemoveMediaByHashes self._RemoveMediaDirectly( affected_singleton_media, affected_collected_media ) File "/hydrus-593/hydrus/client/gui/pages/ClientGUIMediaResultsPanelThumbnails.py", line 757, in _RemoveMediaDirectly super()._RemoveMediaDirectly( singleton_media, collected_media ) File "/hydrus-593/hydrus/client/media/ClientMedia.py", line 811, in _RemoveMediaDirectly self._sorted_media.remove_items( singleton_media.union( collected_media ) ) File "/hydrus-593/hydrus/core/HydrusData.py", line 405, in remove_items del self[ index ] ~~~~^^^^^^^^^ File "/hydrus-593/hydrus/core/HydrusData.py", line 172, in __delitem__ item = self._list[ index ] ~~~~~~~~~~^^^^^^^^^ IndexError: list index out of range
>>16320 worked after restarting
I screwed something up with my list code last week and you may have seen some popup errors when trying to remove thumbnails, typically at the end of the list of thumbs. I have now fixed the bug on master, so if you run from source, please git pull as normal and you will be fixed. For everyone else, it is fixed in v594. Sorry for the trouble! >>16318 >>16319 >>16320 >>16321 Sorry lads, I messed this up. It was in an 'upgrade' to the list that backs the thumbnail grid last week, and while I tested it, it wasn't enough, and there was a bug that would sometimes poison the indices when the last item in the list was removed. Multiple remove events could cause this, and in some cases it even crashed the client. The 'good' news is this is just a display bug, and a restart or hitting F5 or whatever should fix everything, but I'm sorry for the trouble and worry. If you run from source, please git pull; otherwise this will be fixed for v594. >>16297 Thank you, I fixed the typo for v593! For page of pages, I don't know if that would be easy or difficult. You'd think there might be some nice QSS feature that lets you do nested widgets doing light/dark or something, like how multi-column lists can do alternating row colours, but I wonder what levers are available. I can sometimes change the colour of something with my own code, but I'm trying to move to QSS as much as possible now, so I'll have a look at this. It might be as simple as forcing, in QSS-- PagesNotebook { background: white; } PagesNotebook PagesNotebook { background: grey; } PagesNotebook PagesNotebook PagesNotebook { background: white; } --kind of thing. If you feel brave you might want to play with this yourself, it'll be under install_dir/static/qss. You can have ChatGPT help you out, just tell it you are writing some QSS which is like CSS but for Qt and you want to do nested control colours. If you try it, let me know how it goes.
>>16303 Thanks, I think my 'select from this stuff in a list' mini-dialog doesn't have a 'start selected here' default. I will see what I can do and update it across the program! >>16309 Yeah, the 'force page fetch' stuff in tag import options is the one you want. Set both URLs and hashes to be ignored, and hydrus will redownload. Only do this on a one-time basis, on the page you are working on, and not in the default settings--a downloader operating like this is inefficient. This comes up quite often, so I want to make it a simple one-click thing, probably with an 'import options favourites' system, and I want to pull that 'page tech' thing out of 'tag import options' to some sort of finer 'downloader import options', since page metadata governs all sorts of non-tag stuff these days. Sorry for how pain in the ass this stuff is atm. >>16311 This would be slightly tricky right now, but not impossible. The taglist doesn't have great visibility of the actual files in view so it can't, and doesn't really want to, inspect them for metadata too much. The calculation cost of summing all the files on tag might be a pain too, but this is an interesting idea. Can you talk about the sort of workflow you want to go for here? If you want it for lots of files, could it be something to do with the Client API? The API is ideal for all sorts of complicated answers, particularly when they are just for one-shot jobs or whatever. >>16315 I don't know how the 8chan click-through is working right now, but they may need your User-Agent to be the same as your browser. Hit this up in your browser https://my-user-agent.com/ and copy the top long string to your network->data->manage http headers->global User-Agent? Several CDNs work like this, linking cookies and User-Agent together (e.g. CloudFlare). If you figure it out, let me know! >>16317 Not really; the file search is the way. 
It would probably be overkill in your situation, but when you want to draw big red lines, using multiple 'local file domains' is usually a good solution. The classical example is sfw/nsfw, but if you stick all the sfw in its own domain, then you can search there and not have to do -nsfw or anything and it'll still run super fast and not have nsfw tags bleeding into tag autocomplete and stuff. If you really need to keep your screenshots separate, you might like to put them all in a 'vidya captures' domain, and then your duplicates search could point at 'my files' or whatever, and it'll operate a lot more efficiently than '-screenshots'. Alternately, now I think of it, you could make a 'search these for duplicates' domain, and then toss everything that isn't screenshots in there and point your duplicate filter's 'system:everything' search at that domain instead of my files. This would actually be the sensible way of trying this, yeah, and easy to undo (you'd just delete the 'search these for duplicates' service under services->manage services). A bit more reading here: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html
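For the 'total size of files per tag' idea from earlier in the thread, a one-shot Client API script seems like the natural fit: you would fetch file ids per tag with GET /get_files/search_files and their metadata with GET /get_files/file_metadata (endpoint names per the Client API docs), then do the summing locally. A hedged sketch of the local half:

```python
# Sketch of the local half of a 'which tags hold the most bytes' script.
# The metadata records are assumed to be the dicts the Client API's
# /get_files/file_metadata call returns, which include a 'size' field.
def total_size(metadata_records):
    """Sum the 'size' field over one tag's file metadata records."""
    return sum(record.get("size", 0) for record in metadata_records)

def rank_tags_by_size(tag_to_records):
    """Return (tag, total_bytes) pairs, biggest first."""
    totals = {tag: total_size(records) for tag, records in tag_to_records.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)
```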
How to deal with a thread that's just rolling and deleting the earlier posts? This one will probably be closed soon, it has about 1000 posts a day: https://boards.4chan.org/mlp/thread/41488044 I guess I should have been tagging the files, removing the watcher, and creating it again.
What are good ways to tag all files from a thread as they did on the PTR? thread:/board/ - number - subject - imageboard
>>16323 >duplicates "-screenshots" is what I've been doing (in addition to hand-picked -meta:metadata indices that I have for AI-gen pictures with generation metadata or AI cards; I wish Hydrus recognized that specific kind of PNG metadata) But I'll try the domains approach, it seems more useful, though I'll keep it at defaulting to searching for duplicates. Screenshots and captures are a very specific kind of import that can easily be automated unlike everything else. Thanks.
>>16329 personally I make those all separate tags. tags are composable, so it's usually a good idea to break them into the smallest logical "chunks" that you can, for flexibility, especially if the tags are being added automatically.
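To make that concrete, the combined 'thread:...' value from the question above could be split into the smaller composable tags like this. The ' - ' separated format is just the hypothetical one from the question:

```python
# Illustrative splitter for the hypothetical combined thread tag format
# '/board/ - number - subject - imageboard' -> separate namespaced tags.
def split_thread_tag(combined):
    board, number, subject, imageboard = [
        part.strip() for part in combined.split(" - ", 3)
    ]
    return [
        f"board:{board}",
        f"thread:{number}",
        f"subject:{subject}",
        f"imageboard:{imageboard}",
    ]
```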
I forgot to mention but I've been running with the new version of setuptools and requests since the version where you asked to test it, and everything seems to be completely fine. I haven't noticed anything wrong.
>>16331 I would consider that if I could add them automatically. Now I only know how to add post-specific data, which is less useful than the thread.
>>16323 >>>16311 >The calculation cost of summing all the files on tag might be a pain too, but this is an interesting idea. Can you talk about the sort of workflow you want to go for here? All I remember about this is that I was wondering if I could find files to delete and save space. The biggest files are usually something I don't want to delete, but if I do, related ones could go for similar reasons. I deleted thousands of thumbnails that may have been saved with DownThemAll or imported from saved web pages. >Client API? If I could program and concentrate on learning APIs and make GUIs, it would still not be in Hydrus. Can nothing even add or move a menu item? https://hydrusnetwork.github.io/hydrus/duplicates.html#future says "right-click it and select file relationships", but that's under "manage" now, so it's hardly usable.
Hi HyDev, I'm planning to make a Hydrus-inspired application, a browser specialized in cataloguing Stable Diffusion generated media. I haven't made a GUI tool before and would like your thoughts on Python+Qt - would you use these tools again if you were restarting from scratch? What were the biggest issues with Qt? -><-
>>16322 >If you feel brave you might want to play with this yourself, it'll be under install_dir/static/qss. You can have ChatGPT help you out, just tell it you are writing some QSS which is like CSS but for Qt and you want to do nested control colours. If you try it, let me know how it goes. > Lol, I failed miserably. I have no clue what I'm doing tbh. I told the AI that I want ONLY the main-tabs THAT HAVE sub-tabs to have another color than the tabs that have no sub-tabs. Practically all the "pages"-tabs. But it didn't work. See image. I deleted all the QTabBar-related stuff beforehand, otherwise it would have conflicts and not show the changes. As you can see, only the rows that have one tab only are colored differently (OledBlack.qss used). Maybe you can extract something useful out of it, idk. Not sure if all of them do something at all. Also, I didn't find anything with PagesNotebook in the .qss files, and the AI didn't spit those out either. ------------------------------------------ QTabWidget::pane { border: 1px solid #000; background: #f5f5f5; } QTabBar::tab { background: #3498db; /* Default main tab color */ color: white; padding: 10px; } /* Tabs with nested content */ QTabBar QTabWidget::pane { background-color: #2ecc71; /* Background for tabs with sub-tabs */ } QTabBar QTabBar::tab { background: #f5f5f5; /* Sub-tab color */ color: black; } /* Selected sub-tab */ QTabBar QTabBar::tab[selected] { background: #bdc3c7; color: black; } /* Tabs without nested content */ QTabBar::tab:only-one { background: #1abc9c; /* Different color for tabs without sub-tabs */ color: white; } ----------------------------------------- Maybe it would be easier to color all the tabs differently that have the tab name "pages", if that's possible? Renaming of tabs could be a problem though. So maybe color the tabs that are created with the name "pages" and keep the color even if you rename? Just some ideas, though I don't know if anything of that is feasible.
Can you alias one namespace to another? Sometimes different sites use 'artist' instead of 'creator' for example.
>>16337 >Can you alias one namespace to another? >>16073 >Yeah. I think I've given up on the idea of a soft virtualised 'namespace sibling'. The logic would be possible but almost certainly a gigantic pain. We'll see if hard-replace covers most of the situations we care for. PTR is awaiting a huge 'artist:' -> 'creator:' migration in a similar way. I think that should answer it.
>>16338 Well damn. My bad for not using ctrl+F. I guess I'll tinker around with the downloader doing that and try to fix it for future downloads.
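For reference, the 'hard replace' being discussed is just a namespace rewrite over a list of tags. A minimal sketch of the operation (the function name and shapes are invented for the example):

```python
# Rewrite 'old:subtag' tags to 'new:subtag', leaving everything else alone.
def replace_namespace(tags, old, new):
    prefix = old + ":"
    return [
        new + ":" + tag[len(prefix):] if tag.startswith(prefix) else tag
        for tag in tags
    ]
```

e.g. `replace_namespace(parsed_tags, "artist", "creator")` over whatever the downloader parses, before the tags hit the client.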
>>16335 Update: I may have found an existing tool that does what I need, but I would still be interested to hear your thoughts on QtPy.
Looks like Arch pyside6 packages are fucked again. # hydrus-client python: /usr/src/debug/pyside6/pyside-setup/sources/shiboken6/libshiboken/basewrapper.cpp:1028: PyTypeObject* Shiboken::ObjectType::introduceWrapperType(PyObject*, const char*, const char*, PyType_Spec*, ObjectDestructor, PyObject*, unsigned int): Assertion `PyDict_Check(enclosingObject)' failed. [1] 39145 IOT instruction (core dumped) hydrus-client
I had a good simple week. I fixed some bugs, including the recent sometimes-problem with removing thumbnails, and cleaned up the program shutdown code. The release should be as normal tomorrow. There will also be a 'future' test for advanced users to try out. >>16341 Ah, shame. I was poking around PySide6 (Qt) today and noticed 6.8 has been rolling out over the past few days. Looks like another update today, I wonder if they hotfixed something. For anyone reading this unaware, I generally recommend not using AUR packages for python programs like hydrus as the way they work, they will always use the latest version of any python libraries. If there's a bug in the new Qt or numpy deprecates a particular call, you are then dealing with that. If you would prefer a more reliable solution that takes a couple more steps, please try running hydrus from source: https://hydrusnetwork.github.io/hydrus/running_from_source.html
>>16341 >>16343 Yep, new pyside6-6.8.0.1-1 update fixed it.
Got some traceback if it's useful: v593, linux, source NotImplementedError operator not implemented. Traceback (most recent call last): File "/opt/hydrus/hydrus/client/gui/pages/ClientGUIPages.py", line 1998, in eventFilter over_a_tab = tab_pos != -1 ^^^^^^^^^^^^^ NotImplementedError: operator not implemented.
(27.45 KB 505x344 Screenshot_202-37.png)

>>16343 >If you would prefer a more reliable solution that takes a couple more steps, please try running hydrus from source Devanon is right. I use Manjaro with Hydrus from source and never had any trouble.
https://www.youtube.com/watch?v=X6_e7v5Fe6Y windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v594/Hydrus.Network.594.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v594/Hydrus.Network.594.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v594/Hydrus.Network.594.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v594/Hydrus.Network.594.-.Linux.-.Executable.tar.zst I had a good simple week fixing some bugs. There is also a 'future test' for advanced users to try out. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights I screwed up last week when I updated how the list that backs the thumbnail grid works. I messed up an optimisation in how it removed certain items, and it caused people popup errors when deleting/removing files. The good news is these errors were just harmless UI side stuff, fixed by re-sorting the page or restarting the client, but I am sorry for the trouble--it is now fixed, and I have some unit tests to make sure it does not happen again. The tag right-click->search menu, if you select stuff that is in the current search, now lets you say 'remove this selection and replace it with an OR of them'! I cleaned some of the shutdown code. The program should exit a bit faster and smoother, particularly when it receives emergency signals to shut down real quick. If you are in an odd situation that is often sending SIGTERM or something, let me know how it goes. Since it was tucked away before, I wrote out a fresh version of my 'two rules to not going crazy when tagging' here: https://hydrusnetwork.github.io/hydrus/getting_started_more_tags.html#tags_are_for_searching_not_describing Qt and AUR There's a new version of Qt, 6.8, rolling out this past week. AUR users had some trouble when they were auto-updated to it. 
A PySide6 hotfix came out yesterday to fix something, and I think I have fixed something else that users reported about the tab-bar, but I expect there will be more issues, so let me know what you run into. I'm still testing 6.7 on my own personal situation. If you are just a normal Arch user, I now generally recommend not using AUR packages for python programs like hydrus because AUR stuff will always use the latest version of any python libraries. If there's a bug in the new Qt or numpy deprecates a particular call, you are then dealing with that. If you would prefer a more reliable solution that takes a couple more steps, please try running hydrus from source yourself: https://hydrusnetwork.github.io/hydrus/running_from_source.html future test Only for advanced users! I am making another future build this week. This is a special build with libraries that I would like advanced users to test out so I know they are safe to fold into the normal release. More info in the post here: https://github.com/hydrusnetwork/hydrus/releases/tag/v594-future-1 next week I want to hammer more at duplicate auto-resolution. Maybe get an empty UI panel going.
>>16347 >This is a special build with libraries that I would like advanced users to test out so I know they are safe to fold into the normal release. Anon reporting: 1- I tested "Hydrus.Network.594-future-1.-.Linux.-.Executable.tar.zst" and it works fine, with only one caveat: the MPV viewer is not available, so no sound while playing videos. I looked for libmpv-1 or similar in the package manager but it is not available. Anyway, it is not a problem for me as I currently run Hydrus from source and the MPV viewer is indeed present and works flawlessly. 2- I tested the source file "hydrus-594-future-1.tar.gz" and everything is working fine so far; even the MPV viewer is available in the settings and the videos have sound and work as expected. (see screenshots). ----- Note: the Hydrus capabilities on the network weren't tested.
Can we discuss and report bugs on parsers from the parser repo here? To whoever rewrote or wants to fix rule34video, there are two issues: in the file URL parser, the URLs generated by search are /video/, not /videos/; and in creator search, the "from=1&" part breaks pagination and always returns the first results. What was it even supposed to do?
Can I re-parse metadata for files already downloaded? Specifically I want to add missing "modified time" parsed from site upload time that was previously unknown.
>>16350 if you force a page fetch, it should happen automatically
Is there support for downloading posts from Bluesky on the horizon? Many artists are migrating there from Xeeter.
When you try to shut down hydrus and it says that pages are still importing, it'd be helpful if it gave the names of the pages where the imports are happening.
>>16328 EDIT: I realised after I wrote this that I assumed you meant 'how do I capture all posts'. If you meant 'how do I deal with a watcher with 23,000 items arggghhhh', then yes I think the answer is to either: - Regularly delete and re-add the watcher - Regularly click the little arrow next to the 'file log' and select 'delete x successful' etc. to pare down the list Before each cleanup, you might like to drag and drop the files you have to a new page for 'offline' processing, as it were. ORIGINAL: I am not 100% sure since I haven't done full testing, but I have watched cyclicals before during like E3 events and I think hydrus handles the post parsing essentially correctly. The 4chan API presents the current snapshot of posts on every check, and hydrus grabs them all and puts them in the file log correctly (even though previous ones have disappeared from the current snapshot), so the question we have, if we want to keep up with everything, is to tune the hydrus watcher to check sufficiently often that it covers all posts before they fall off. The 'checker options' by default should aim to hit up the thread every three new files or something, so hydrus should be hitting a heavy thread like this quite often and won't miss stuff. If anything, you might like to have a slower check on big threads like this, something like a 'never check faster than' time of an hour or so, which would probably be best set up with a separate watcher page particularly for bigass threads. >>16329 >>16331 >>16333 You would probably want to edit the parser, but that is super advanced, particularly for these sorts of tags that would probably involve URL parsing which is often a pain in the neck. Better probably just to set a custom 'tag import options' that has 'additional tags' of 'board:/mlp/' or whatever. However! I think most of these tags are inappropriate for the PTR, and the PTR jannies are generally deleting these sorts of things these days. 
If someone posts a pepe to an mlp thread, that pepe should not be tagged 'board:/mlp/' or 'subject:rainbow dash is great' or anything like that. I generally do not like automatic imageboard tags, although I am strongly in favour of human-written imageboard tags. If there is something that is /mlp/ culture, a masterpost or a joke or a webm of a thread simulator or something, that deserves the tag; but general thread churn doesn't deserve to be shared on the PTR since other users will mostly just see it as spam. Of course if you just want to parse it to a local tag service for your own mlp-thread-tracking purposes, you can do whatever you want. >>16330 >[AI] I wish Hydrus recognized that specific kind of PNG metadata In the hydrus media viewer, up top there is an icon button for 'show weird metadata shit', and I know the PNG AI prompt text is viewable there, but I don't do any clever parsing of it. I know I have some example files with this text, but they are not state of the art and I am not close enough to AI gen culture to know what I am talking about, so can you post/send me some example pngs and describe where the text is and what format it generally is in? If there is a common pattern used by the main generation engines, I think I can probably make a flag for it somewhere and make it easier to see the prompt etc. >>16332 Thanks. Yeah, everything seems good. The PyInstaller stuff that makes the Windows and Linux builds needed an update for some setuptools stuff, but I think we are good to roll it out for v595. I had a great moment last week with ChatGPT where I was trying to figure out what the fucking error was (the program built to an exe ok, but on boot it gave an OS-tier error with ugly error text about dlls), and I just posted the error into ChatGPT with some context and it knew what was going on and said 'you probably have to update PyInstaller mate'. 
I asked it to search for and read through the PyInstaller changelog for if there was a fix and it said 'yeah looks like 6.7 it was fixed'. Saved me so much time and frustration. I am never Ctrl+F-ing through a long changelog again.
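On the PNG AI metadata question: as far as I understand, the common generators write the prompt into PNG tEXt chunks, with A1111-style files using a 'parameters' keyword (that keyword is my assumption, not something from this thread). A stdlib-only sketch of pulling tEXt chunks out of a PNG:

```python
# Hedged sketch: extract PNG tEXt chunks, where AI generators commonly stash
# prompt data (e.g. under a 'parameters' keyword, per my understanding of
# A1111 output). Stdlib only; CRCs are skipped rather than verified.
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG")
    out = {}
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        length = struct.unpack(">I", data[pos:pos + 4])[0]
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 8 + length + 4  # skip length+type header, body, and 4-byte CRC
    return out

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Build a valid PNG chunk; handy for testing the parser offline."""
    crc = zlib.crc32(ctype + body)
    return struct.pack(">I", len(body)) + ctype + body + struct.pack(">I", crc)
```

Note that some tools use iTXt or zTXt (compressed) chunks instead, which this sketch ignores.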
>>16334 Thanks, this is interesting. I don't think I have time to write something this specific, but I will keep it in mind. For duplicates, yeah, I want to overhaul a lot of the UI and workflow. It is all still too awkward everywhere. And sorry, no customisable menu yet, but I'm slowly cleaning my old hardcoded garbage to more dynamic code that'll be more user-customisable in future. >>16335 >>16340 Yeah I personally absolutely would. I love python for its ease of use and rapid prototyping, and while I started with wx, moving to Qt has overall been a huge positive. Qt has a million features and good multiplat support and stuff. HOWEVER: I am a millennial boomer who grew up with C++ and 'learn HTML in 24 hours' books in the 90s. I am severely stuck in my ways and recoil at many normal programming standards and workflows. I hate github and working with other people and learning new frameworks. If you are a zoomer who understands how cloudshit and Docker and stuff work, python and Qt may not be for you. There's rust and typescript and other new languages that I just don't understand and Electron and such that, as I understand, make it trivial to put out an application that works on a PC, a phone, or a web page. If that is what you have learned and is what is in your general zeitgeist, you probably want to go in that direction, since that's the future. Biggest issues with Qt were unlearning wxshit tbh, so that probably won't apply to you. I still have a ton of weird non-Qt-like code in hydrus that's old holdovers from how I used to do layout code and stuff. Actually working with Qt has been a delight. It has a million levers to pull, so if you have never done UI code before, I wonder if that would be overwhelming. If you know what Events are and how like you'd set Expanding or Fixed 'Sizer' options to a panel before placing it into a Layout, you are ready to rock with Qt. 
Qt uses a thing called a 'Signal' a whole bunch, which is basically a really nice UI-thread-compatible pubsub that makes it super simple to hook a common event like a checkbox being clicked to a method being fired. Oh, actually: python Qt is a pain in some ways, since Qt is C++ native. So making packages and stuff has provided a variety of pain in the ass problems simply because of the python layer. Nothing hugely difficult, since python and C++ are good friends, but it can be a pain. Also, now I think of it, the guys who write Qt put out some buggy releases sometimes, so never go for the latest, and I understand they keep many bug fixes to their paid branch for a while before releasing them to the public because they have an odd business model, but I'm not expert enough on it to talk cleverly. Also, I don't know exactly what it is called, but there's like 'QML' or something now, which is the 'easy mode' Qt, I think? It is two steps closer to CSS. There are tools that make the Qt panels and stuff for you, too, so you don't have to write the code. I know they teach a lot of students this way these days, rather than getting into the nitty gritty of sizer flags, so that might be something to look into.
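The Signal-as-pubsub idea can be modelled in a few lines of plain Python. This is a toy mental model only, not Qt's real implementation, which adds cross-thread delivery and C++ plumbing:

```python
# Toy stand-in for a Qt Signal: connect() registers callbacks, emit() fires
# them all with the given arguments. Illustrative only.
class ToySignal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)
```

In real PySide6 code the hookup is one line, e.g. `some_checkbox.clicked.connect(self._on_clicked)`.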
>>16336 Interesting, thanks for giving it a go. QSS is neat in a lot of ways, but it is grappling with some odd secret levers in Qt that work in mysterious ways sometimes. I know there's a way to set a Qt-aware 'property' to a widget, and I think QSS can talk to that. So maybe I could tell page tabs 'I am empty', 'I have an odd depth number', and then custom QSS could hook into that and say 'colour empty page of pages xxx'. I'll make a job to play around with it, but some of this stuff is a bit voodoo. >>16345 Thanks for reporting this. I think I fixed it for v594, but let me know if you still have trouble. It looks like that new PySide 6.8.0.1 is a bit stricter about a thing I was doing hackily. There may be other instances like this, so any more reports would be great. >>16348 Thank you very much! It looks like this new build is ok, so I will roll these into v595. Since you run from source, I'll be recommending you rebuild your venv next week after pulling. I will probably do another future test before the end of the year to get us up to Qt 6.7, but we'll see. >>16349 >Can we discuss and report bugs on parsers from the parser repo here? Yeah go for it. Many of the creators are in the discord, but some of them check this thread and reports will likely percolate over there. >>16350 Search this thread for 'force', you'll see the way to do it with a custom 'tag import options'. I need to write up some docs for this, so we have a clean thing to point to, and write a nice right-click action to do it for you in the client, too. >>16352 If there is demand, I'm sure it will happen one way or another. I know very little about the site--can you see posts without logging in? And can you post me a couple example URLs that have content? Can you post multiple images per post, or is it one-per always? I'm looking over their API docs now and, hesitantly, things look pretty hopeful. Looks like their API is completely open? >>16353 Thanks, I'll see what I can do.
>>16356 >>>16350 >Search this thread for 'force', you'll see the way to do it with a custom 'tag import options'. I need to write up some docs for this, so we have a clean thing to point to, and write a nice right-click action to do it for you in the client, too. It should be separate from the other `tag import options`, so that the tags are set according to the downloader.
>>16354 >>>16328 >EDIT: I realised after I wrote this that I assumed you meant 'how do I capture all posts'. If you meant 'how do I deal with a watcher with 23,000 items arggghhhh', then yes I think the answer is to either: Yeah, it was that. Two threads with 2000-4000 items created millions of weight. >>16329 >>16331 >>16333 >Better probably just to set a custom 'tag import options' that has 'additional tags' of 'board:/mlp/' or whatever. I need it for local tracking of threads like "Draw thread", to choose pictures to upload to a booru, or to know if a file did not come from a useful thread. I use 'watch clipboard for urls', so if I add that to 'tag import options', it sometimes leads to mistagging. I use "threadn", "threads", and tried "thread url" a few times, because 'manage urls' does not sort by count (it probably should be able to). Namespaces are not autosuggested. I'd probably choose "tn" and "ts", but I already use "t" for something else. The lengths of these are not really important for searches, but they are for entry.
>>16359 >Yeah, it was that. Two threads with 2000-4000 items created millions of weight. How can, let's say, 10k files have millions of weight? If you click on "pages" -> "total session weight", the text says "A file counts as 1, and a URL counts as 20". Either that's not adding up, or you just overexaggerated and it went over my head, in which case sorry.
>>16354 >can you post/send me some example pngs and describe where the text is and what format it generally is in? I'll try to remember to post some examples later as I'm here procrastinating but I think it depends on the software used to generate it. The one I use to fuck around with (SwarmUI) adds its prompt data in the PNG TextualData Tag called 'Parameters'. I'm pretty sure it fits the prompt and other stuff like the seed and model etc into JSON or similar in a single line. See: https://web.mit.edu/graphics/src/Image-ExifTool-6.99/html/TagNames/PNG.html#TextualData Also to save anyone else some time troubleshooting, if you want to use exiftool to see those you need at least version 12.81.
>>16354 The metadata viewer is actually really useful, thanks; didn't even know it was there. >can you post/send me some example pngs and describe where the text is and what format it generally is in? It should all be according to this standard: https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_PNG_files specifically tEXt. There is also a standard for jpeg (https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_JPEG_files) but I'm not sure which one Stable Diffusion uses – it isn't Exif, might be XMP; it's a block of text starting with a UNICODE tag. Both Stable Diffusion-related metadata and LLM-related metadata are encoded like this, primarily in PNGs. The data is usually placed near the beginning of the file, though I've seen some sorted at the end, with an arbitrarily long text payload and starting with the "tEXt" block or "UNICODE" block in JPG. Some example pictures: https://files.catbox.moe/wkilgt.zip The zip file contains: 1. Tavern character card with tEXt metadata near the start header; 2. Stable Diffusion image with generation data also in the tEXt block. I've seen only one unpopular frontend actually do this, so it probably doesn't need to be handled unless you feel like it. 3. Same as 1., but re-exported so the tEXt data is appended to the end, which fails to be picked up by Hydrus; 4. which is a Stable Diffusion jpeg image with metadata. 8chan.moe strips the relevant metadata, so I have to post it zipped.
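For anyone wanting to poke at these without exiftool, the tEXt layout from that standard is simple enough to walk with the stdlib. A minimal sketch (the function name is mine); it ignores the zTXt/iTXt compressed variants and doesn't verify chunk CRCs:

```python
import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def read_text_chunks(data: bytes) -> dict:
    """Walk a PNG byte string and return {keyword: text} from its tEXt chunks.
    Minimal sketch: ignores zTXt/iTXt and does not verify chunk CRCs."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError('not a PNG')
    texts = {}
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        # each chunk: 4-byte big-endian length, 4-byte type, payload, 4-byte CRC
        (length,) = struct.unpack('>I', data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        payload = data[pos + 8:pos + 8 + length]
        if ctype == b'tEXt':
            # tEXt payload is: latin-1 keyword, NUL separator, latin-1 text
            keyword, _, text = payload.partition(b'\x00')
            texts[keyword.decode('latin-1')] = text.decode('latin-1')
        if ctype == b'IEND':
            break
        pos += 8 + length + 4  # skip past length, type, payload, CRC
    return texts
```

Because the walk continues until IEND, tEXt chunks appended near the end of the file (like example 3 in the zip) get found too.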
>>16362 >>16365 >Deleted Too late, queer. I already save that smug Vaporeon.
>>16365 Forgot to add, Hydrus also fails to find the relevant metadata in the .jpg file, instead only displaying the exif data.
>>16366 You've a metadata-less version. Its smug aura is therefore inferior.
>>16368 I can just make up some meta data. In fact, mine will be better, with blackjack and hookers!
>>16360 I'm not sure. I think `total session weight` was 7M, and the page containing that thread (the number was something like 3200 or 3600), its desuarchive mirror, and a couple smaller threads was 1-3M. After deleting them, it was 3-5M. Even if it was the fifth day, the desuarchive mirror (while it worked) could not have contained more than 8000 posts, so there is no way there were over 50,000 urls in the page.
(5.70 MB 360x360 rolling eyes.gif)

>>16370 >I'm not sure. I think `total session weight` was 7M Screenshot or GTFO.
>>16372 I just started Hydrus 594 with a backup from Oct 12th. A page with the thread had over 800k weight, but only 200-something files from the thread. I tried closing the other pages for a screencap, but it started downloading files, and the page's weight changed to 140k and now it's 57k with 968/1235 in that thread, 143 in another thread, 11+142+11+3 in DEAD threads.
>>16333 I got it to add the thread subject to the first image: 4chan thread api parser: 'subsidiary page parsers' -> 'posts' -> 'content parsers' -> 'get the entries that match "sub"' [30, 7, ["subject to tag", 0, [31, 3, [[[0, [51, 1, [0, "sub", null, null, "sub"]]]], 0, [84, 1, [26, 3, [[2, [55, 1, [[[9, ["<br>", "\\n"]], [9, ["</?[^>]+>", ""]], [9, ["&lt;", "<"]], [9, ["&gt;", ">"]], [9, ["&quot;", "\""]], [9, ["&amp;", "&"]], [9, ["&#039;", "'"]]], ""]]]]]]]], "threads"]]
I'm trying to download the images of every /ldg/ general in archived.moe but I don't even know where to start Sum help?
>>16376 since the archived.moe server is fucking dead I have been testing out with thebarchive, I managed to extract every single archive thread url I want. Now, how do I download the images from them? I only want pngs, also
>>16376 >>16377 I also once tried to find a way (for a certain username) on archived.moe, but I think we are out of luck, since it has some DoS protection thing going on. If you open tabs for 5 images relatively fast, you get a cooldown period before you can open more. I think the cooldown starts from image 1 already, but you reach the limit when opening 5 images fast. A downloader would have to be set up so a download only happens every few seconds. If you guys find a way and can give me a lil tutorial, let me know.
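That 'only download every few seconds' idea is just client-side rate limiting. In-client, hydrus's per-domain bandwidth rules (under network > data) are the right tool; if anyone scripts it outside hydrus, a minimal throttle sketch looks like this (hypothetical helper, not hydrus code):

```python
import time

class Throttle:
    """Allow at most one action every `interval` seconds.
    Hypothetical helper to sketch the 'one download every few seconds' idea."""

    def __init__(self, interval: float):
        self.interval = interval
        self._last = None

    def wait(self):
        now = time.monotonic()
        if self._last is not None:
            remaining = self.interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)  # pause until the interval has elapsed
        self._last = time.monotonic()

# usage sketch: call wait() before each image request
# throttle = Throttle(5.0)
# for url in thread_image_urls:
#     throttle.wait()
#     download(url)
```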
I had a great week. I improved some quality of life, sped up downloader and subscription load times, wrote some more OR predicate commands, and figured out an easy way to re-download files' metadata. Also, a user has figured out native Ugoira rendering! The release should be as normal tomorrow.
>>16380 >native Ugoira rendering! Aw shit nigga!
>>16379 I tried gallery-dl since it supports the archives but can't get past cloudflare no matter what; added cookies and tried 50 different user agents, still nuffin
>>16331 >>16333 >>16354 >>16374 It is actually very easy. '4chan thread api parser' uses `subsidiary page parsers` for everything, but this should apply to all posts, so put this (very basic, no entity cleanup or thread-specific corrections) in the 'content parsers' tab, which works before the separators: [26, 3, [[2, [30, 7, ["first comment to thread subject tag", 0, [31, 3, [[[0, [51, 1, [0, "posts", null, null, "posts"]]], [2, 0], [0, [51, 1, [0, "com", null, null, "com"]]]], 1, [84, 1, [26, 3, [[2, [55, 1, [[[6, 96]], ""]]]]]]]], "threads"]]], [2, [30, 7, ["thread # to tag", 0, [31, 3, [[[0, [51, 1, [0, "posts", null, null, "posts"]]], [2, 0], [0, [51, 1, [0, "no", null, null, "no"]]]], 1, [84, 1, [26, 3, []]]]], "threadn"]]], [2, [30, 7, ["thread subject to tag", 0, [31, 3, [[[0, [51, 1, [0, "posts", null, null, "posts"]]], [2, 0], [0, [51, 1, [0, "sub", null, null, "sub"]]]], 1, [84, 1, [26, 3, []]]]], "threads"]]]]] For desuarchive: [26, 3, [[2, [30, 7, ["first post as subject to tag", 0, [31, 3, [[[2, 0], [0, [51, 1, [0, "op", null, null, "op"]]], [0, [51, 1, [0, "comment", null, null, "comment"]]]], 1, [84, 1, [26, 3, [[2, [55, 1, [[[6, 96]], ""]]]]]]]], "threads"]]], [2, [30, 7, ["thread # to tag", 0, [31, 3, [[[2, 0], [0, [51, 1, [0, "op", null, null, "op"]]], [0, [51, 1, [0, "thread_num", null, null, "thread_num"]]]], 0, [84, 1, [26, 3, [[2, [51, 1, [2, "^((?!None).)*$", null, null, "this is not none"]]], [2, [55, 1, [[[9, ["&amp;", "&"]]], ""]]]]]]]], "threadn"]]], [2, [30, 7, ["thread subject to tag", 0, [31, 3, [[[2, 0], [0, [51, 1, [0, "op", null, null, "op"]]], [0, [51, 1, [0, "title_processed", null, null, "title_processed"]]]], 0, [84, 1, [26, 3, [[2, [51, 1, [2, "^((?!None).)*$", null, null, "this is not none"]]], [2, [55, 1, [[[9, ["&amp;", "&"]]], ""]]]]]]]], "threads"]]]]] Not tested much.
>>16384 I haven't figured out how to make it use the first post only if there is no subject.
>>16384 The final data needs to be fetched as a string, not as JSON, or the quotes will be in it. >>16385 The default parser seems to always use the first post's comment?
>>16385 Why is this not working? It takes the subject and the beginning of the OP, joins them with a UUID with A on each side, and then it should remove the OP and the separator if there is a subject, or only the separator if there is no subject. [30, 7, ["thread subject or start of op to tag", 0, [59, 2, [[26, 3, [[2, [31, 3, [[[0, [51, 1, [0, "posts", null, null, "posts"]]], [2, 0], [0, [51, 1, [0, "sub", null, null, "sub"]]]], 0, [84, 1, [26, 3, []]]]]], [2, [31, 3, [[[0, [51, 1, [0, "posts", null, null, "posts"]]], [2, 0], [0, [51, 1, [0, "com", null, null, "com"]]]], 0, [84, 1, [26, 3, [[2, [55, 1, [[[9, ["<br>.*", ""]], [6, 96]], "Warm spirits edition"]]]]]]]]]]], "\\1Ae253a83c-49a8-45a5-a198-558727871480A\\2", [84, 1, [26, 3, [[2, [55, 1, [[[9, ["^(.+)Ae253a83c-49a8-45a5-a198-558727871480A.*", "\\1"]], [9, ["^Ae253a83c-49a8-45a5-a198-558727871480A", ""]]], "Ae253a83c-49a8-45a5-a198-558727871480Asecondpart"]]]]]]]], "threads"]]
>>16382 I haven't tried gallery-dl. Is it possible to download at least some files (4-5) with it and then you get limited by cloudflare or can't you download anything at all? If you can, can you set it up to download one file every let's say 5 seconds to circumvent that block?
>>16390 > can't you download anything at all? Can't even touch the page due to 403 cloudflare errors. Maybe there is a way around that that I'm missing
>>16380 This is so huge for me. Is there any easy way to pull up a history of all the urls that have been skipped because of the "Ugoira Veto" in the downloader? Worst case, I'll just go thru all my bookmarks manually.
https://www.youtube.com/watch?v=2fsMD9tPFY0 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v595/Hydrus.Network.595.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v595/Hydrus.Network.595.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v595/Hydrus.Network.595.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v595/Hydrus.Network.595.-.Linux.-.Executable.tar.zst I had a great week. I've got several quality of life improvements, and a user has figured out native Ugoira rendering. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html Ugoiras Ugoiras are an unusual gif-like animation format from Pixiv. Getting them to play in hydrus has been a long time goal, and thanks to a user who put a bunch of work in, they now work out of the box. You don't have to do anything; any Ugoiras you have in your client will now play in the media viewer. If your Ugoiras have animation timing data inside their archive (those downloaded with PixivUtil or gallery-dl should have these!), hydrus will parse this and render frames for the correct duration. There is also an option to present this info in a file note. If there is no timing data, hydrus will fall back to 8fps. There's more technical data here: https://hydrusnetwork.github.io/hydrus/filetypes.html#ugoira All your Ugoiras will get a metadata rescan on update, so it might take a few minutes for any duration data to fully kick in. We are now planning possible ways we can add the Ugoira timing data using the hydrus downloader engine, which would finally allow in-client downloading of properly animating Ugoiras. Let me know how today's update works for you, and we'll see what we can do. other highlights All multi-column lists now sort case-insensitively. A subscription called 'Tents' will now slot between 'sandwiches' and 'umbrellas'. 
All 'file logs' in importers now load much faster. If you had slow subscription or session load, let me know how things feel now. The 'Favourite Searches' star button now supports nested folders in a hacky way. Just put '/' in the 'folder name', and it'll make submenus for you. I fixed a bug where setting a pair of potential duplicates as 'not related' would sometimes merge modified file dates. You might get a popup on update about resetting file modified dates for any files that were previously affected. OR predicates get some more work--you can now 'start an OR with selected', which opens the edit panel and then replaces the selection, 'dissolve' an OR back to its constituent sub-preds, and the menu and UI got a little polish. The 'urls' thumbnail right-click menu has a new 'force metadata refetch' command, which makes it easy to redownload Post URLs to get tags and stuff again. You don't have to manually set up the url downloader with a custom 'tag import options' any more--hydrus can do it for you. new build The build today folds in some library updates the advanced users and I have been testing. As far as we can tell, there are no special update instructions needed, so just update as normal. If you are on an old version of Windows/Linux and you have any problems booting, let me know. next week I did some placeholder duplicates auto-resolution UI this week, and I'm feeling better and better about the whole system. There's still a lot to do, but none of it looks too crazy. I'd like to keep pushing on it a bit. Otherwise I'd like to just do some boring code cleanup. >>16392
Check the changelog for more details, but note we aren't at the point of downloading nice Ugoiras inside hydrus itself yet. For PixivUtil or gallery-dl external downloading, I'm afraid I don't think there is a nice way to fetch these vetoes again. You might check your Pixiv subscriptions' file logs--maybe I don't delete 'ignored/vetoed' download results (which all these Ugoira links would be)? Let me know how you get on here. Maybe there's a metatag or just 'ugoira' in hiragana or something that helps to narrow down a raw search in Pixiv for this sort of job.
I know I already talked about this here, and I'm not suggesting that you should consider using it as a default for everyone, but I wanted to point uv out to you again, because it keeps amazing me. As an example, when upgrading from 594 to 595, you upgraded requests and setuptools; it took less than one second on my machine (no cache or anything, right after the pull): (venv) $ uv pip install -r requirements.txt Resolved 66 packages in 558ms Prepared 2 packages in 167ms Uninstalled 2 packages in 43ms ░░░░░░░░░░░░░░░░░░░░ [0/2] Installing wheels... Installed 2 packages in 34ms - requests==2.31.0 + requests==2.32.3 - setuptools==69.1.1 + setuptools==70.3.0 Thanks for the update, as always!
>>16388 To clarify, the second regexp is not working: when there is no subject, the parser returns nothing, although the second part of it returns a string.
>>16379 archived.moe sucks, it spams cloudflare at you and doesn't even save full images (it redirects you to other archives) use any other archive
I just updated from 588 to 595 and noticed that the gelbooru parser badly parses md5 hashes, resulting in this error in the parsed tags preview in the parser editor: >md5 hash: Could not decode a hash from �J`�o��D�t��o: Exception('Could not decode hash: non-hexadecimal number found in fromhex() arg at position 0') This results in the files being downloaded again if they have no gelbooru url set yet, instead of being instantly recognized by the md5 hash and only pulling the metadata. Maybe this is a global error that has something to do with the hash conversion changes I read in the changelogs?
>>16406 Tested danbooru and that one seems fine. It appears it's an error in the gelbooru 0.2.5 file parser, which has an extra "decode from hex" conversion after the regex in its md5 content parser, which breaks the hash. No idea if any other parsers have this error too.
>>16315 >>16268 You need to create an 8chan specific header that has the same user-agent as your browser. Then you need the 'bypass' and 'TOSxxxxxxxx' cookies, where the 'x's in the second one represent the current date, so you have to replace that cookie every new day you want to download something from 8chan.
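For anyone scripting that setup outside hydrus, a rough stdlib sketch. The 'bypass' and 'TOS<date>' cookie names come from the post above; the date format in the cookie name (YYYYMMDD here) and the placeholder values are my assumptions--copy the real values out of your browser:

```python
import time
import urllib.request

# Sketch of the header/cookie setup described above, stdlib only.
# ASSUMPTIONS: the TOS cookie's date format (YYYYMMDD) is a guess, and the
# cookie values are placeholders you must copy from your browser.
today = time.strftime('%Y%m%d')
cookies = {
    'bypass': 'PASTE_VALUE_FROM_BROWSER',
    f'TOS{today}': 'PASTE_VALUE_FROM_BROWSER',
}
req = urllib.request.Request(
    'https://8chan.moe/t/res/15721.html',
    headers={
        # must match the browser the cookies came from
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) '
                      'Gecko/20100101 Firefox/115.0',
        'Cookie': '; '.join(f'{k}={v}' for k, v in cookies.items()),
    },
)
# urllib.request.urlopen(req) would then fetch the page with those credentials
```

In hydrus itself the equivalent is an 8chan.moe entry under network > data > manage http headers, plus importing the cookies via the cookie review panel or the Client API.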
>>16358 >>16393 >The 'urls' thumbnail right-click menu has a new 'force metadata refetch' command, It seems that you've also made it update metadata for existing files from new pages with them? Nice.
When Hydrus opens the url editor for multiple files, it gives a warning in red text that it's only appropriate to add gallery urls to multiple files. This isn't true. It's also okay if you're adding a post url that contains multiple files.
Can the duplicate filter open on "all my files" instead of the service of the source tab?
>>16393 >ugoira Fucking nice. Also, is there a sort order to group similar files? I've got a lot of files by one artist who likes to make several variants per base image, but they're uploaded to sites separately and the sets mix together. I'd like a way to easily group them to make processing easier. Probably a long shot but I figured I'd ask; with so many features it's hard to figure out how to do things sometimes.
>>16413 >Also, is there a sort order to group similar files? Nope. Would be nice if you could group files by relationships. I requested phash and blurhash sorting a while ago, because I thought they would help, but they don't.
Would you mind adding an option to GUGs to tell hydrus that the input is a full url, so leave the slashes and other characters intact and pass it directly to parsers/api-converters? The exhentai downloader in the cuddlebear repo is a lot more complicated than it needs to be, because it has to work around the fact that you can't pass a string with slashes to the GUG without them getting converted by Hydrus, but the part of exhentai urls that you need includes slashes. Being able to just use the full url and tell hydrus to leave it as is so that the parser can work with it directly would be very helpful and allow the downloader to be much simpler than it currently is.
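For reference, the mangling being described is plain percent-encoding: once a GUG encodes the query text, any '/' becomes %2F and the parser sees a different string. A quick stdlib illustration (the fragment itself is a hypothetical gallery path, not a real id):

```python
from urllib.parse import quote, unquote

# hypothetical exhentai-style gallery path fragment containing slashes
fragment = 'g/2596104/abc123def4'

# a GUG that percent-encodes everything mangles the separators...
print(quote(fragment, safe=''))   # g%2F2596104%2Fabc123def4

# ...whereas leaving '/' in the safe set keeps the fragment intact
print(quote(fragment, safe='/'))  # g/2596104/abc123def4

# decoding round-trips, which is why a parser-side workaround is possible
print(unquote(quote(fragment, safe='')))  # g/2596104/abc123def4
```

A 'treat the input as a full url, don't encode it' flag on the GUG would amount to hydrus using the second behaviour (or no quoting at all) for that input.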
>>16413 >>16414 I've requested this in the past too. I think the issue is while most alternate sets are simple, some more complex ones could cause issues.
>>16358 Yeah I totally agree. Those checkboxes are in tag import options from the days when we were mostly just parsing tags, but I want to extract them to a 'parsing import options' or something so you can configure one without messing with the others. That whole system needs better favourites/profiles management too. >>16410 Yeah it should just do what you would want. No need to set up the page by yourself anymore, I do it programmatically. >>16361 >>16365 >>16367 Thank you, I will put some time into this! >>16395 That is pretty amazing! I have had trouble in the past with trying to edit venvs in place (some complex stuff like Qt has had imperfect uninstalls before), so I accepted the school of 'just reinstall the whole thing every time', but if this can navigate that stuff, perhaps it should be noted in the 'running from source' help as an advanced option. Please keep me updated with how it goes in future. >>16406 >>16407 Damn, thank you! I will fix. Yeah, something must have got messed up when I re-enabled the hex decode stuff. >>16411 Thanks, I will reword this. >>16412 No, it has to be locked to what the search is set to. Can you describe what you would like to do, or what having the service set to 'my files' or whatever does not show, that you would want to see/behave in 'all my files'? >>16413 >>16414 >>16416 Not yet, is the simple answer. Thumbnails don't 'know' about their file relationships yet, so what you see is actually smoke and mirrors as I do quick database requests in the background on a single-file basis. When I integrate file relationships into the master UI-level media object and plug in the content update pipeline so it all stays synced with the database and stuff, then I will have instant access to this data at the UI level and I'll be able to do stuff like 'sort by duplicate group'. Some of this stuff is non-sortable--potential duplicates are their own complicated kettle of fish--but some stuff should be doable. >>16415 Interesting thought!
I'll have a look at the code and see how doable this would be.
>>16419 >>16361 >>16365 >>16367 >Thank you, I will put some time into this! I've been "working" on a script that is able to pull the metadata from images and parse them into tags for a few months now and I can tell you how the various UIs store their metadata. For PNGs they are simply stored in image.info (tEXt chunk) as: >parameters for WebUI (plaintext) and SwarmUI (json) >prompt and workflow (both json) for ComfyUI, where prompt is a simplified more readable version of workflow, which stores all the node info >prompt and comfyBoxWorkflow (both json) for ComfyBox, same as above >Title, Description, Software, Source, Generation_time and Comment for NovelAI, all are plaintext except Comment, which is a json with full gen info, while Description only has the positive prompt (which is also in the Comment json) For JPEGs I only know how WebUI ones are stored and they are simply in EXIF UserComment, it's probably gonna be the same for the rest. You could also probably check here how they read the metadata: https://github.com/receyuki/stable-diffusion-prompt-reader And if anyone is interested, I could post the script, which is like 99% complete, it's just that I recently tested it on like 1k files and noticed a few badly parsed tags (compared to another script I was using before I decided to make my own), so I want to fix that first, but other than that, it's pretty much finished.
>>16419 >>>16412 >No, it has to be locked to what the search is set to. Can you describe what you would like to do, or what having the service set to 'my files' or whatever does not show, that you would want to see/behave in 'all my files'? If the original file is in "downloaders processing" and the existing file is in "SFW archive", and "downloaders processing" is selected, it doesn't show the pair.
Say there are two canonical copies of a file. One of them has no file url or page specified. The other has a file url and a canonical page. But I downloaded it from another page, so in the duplicate filter, I chose to delete it. Then I download the canonical page. Will Hydrus just ignore it as a deleted file and not add tags or urls to the other copy, so there will be no canonical page for the file in Hydrus?
>>16424 >a canonical page. a watcher thread
Sub-formulae in ZIPPER need editable descriptions.
>>16419 I know solving captchas has been brought up before but have you looked at this https://github.com/FlareSolverr/FlareSolverr ? I want to be able to scrape e621 over tor. They even have examples for python integration and it's in the AUR. >>22244 This is horrendously late but if you're trying to fix your btrfs system you should run scrub. If you're trying to rescue files you should mount with rescue=all. If you already ran --init-csum-tree or --check with repair it's probably ogre.
>>16427 also need the ability to copy them, for example to move >>16388 (or a fixed version) deeper to build the whole string with board name etc.
After updating it's been having a problem where mp4 and webm files are unable to be imported. Also has been giving error messages about FFMPEG. "none of the 1 files parsed successfully: 1 had unsupported file types." and "FFMPEG was recently contacted to fetch version information. While FFMPEG could be found, the response could not be understood." and "FFMPEG, which hydrus uses to parse and render video, did not return any data on a recent file metadata check!"
I had a good week. I fixed a couple of important bugs and improved some quality of life. The release should be as normal tomorrow. >>16435 What's your OS, and how are you running hydrus? If you are Windows, and the normal windows extract or installer, is there an ffmpeg.exe in your install_dir/bin folder? Not sure exact size, but probably like 135MB. If you are Linux or macOS, and Windows if you know how, what happens if you open a terminal (in the bin dir if Windows) and type 'ffmpeg -version'? I get: ffmpeg version 7.0.1-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers built with gcc 13.2.0 (Rev5, Built by MSYS2 project) Followed by a big list of compiler options and some library versions. The top line is the one hydrus is trying to parse when you open help->about. If you open 'install_dir/db/client - date.log' and ctrl+f for the 'recently contacted' line, I think it will be followed by STDOUT and STDERR, which may shine more light.
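For what it's worth, pulling the version out of that top line is a one-regex job. This is my own sketch, not hydrus's actual parsing code, using the sample line quoted above; if ffmpeg returns nothing at all, there is no line to parse in the first place, which fits the 'did not return any data' error:

```python
import re

def parse_ffmpeg_version(first_line: str) -> str:
    """Pull the version string out of the first line of `ffmpeg -version`.
    Sketch only - not hydrus's actual parsing code."""
    match = re.match(r'ffmpeg version (\S+)', first_line)
    if match is None:
        raise ValueError(f'could not understand ffmpeg version line: {first_line!r}')
    return match.group(1)

line = ('ffmpeg version 7.0.1-full_build-www.gyan.dev '
        'Copyright (c) 2000-2024 the FFmpeg developers')
print(parse_ffmpeg_version(line))  # 7.0.1-full_build-www.gyan.dev
```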
https://www.youtube.com/watch?v=amKt8ttja1E windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v596/Hydrus.Network.596.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v596/Hydrus.Network.596.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v596/Hydrus.Network.596.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v596/Hydrus.Network.596.-.Linux.-.Executable.tar.zst I had a good week with a couple of important bug fixes and some UI quality of life improvements. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights I messed something up last week and it broke several downloaders' hash lookups, which are sometimes used to determine 'already in db'/'previously deleted' quickly. The problem is fixed today, sorry for the trouble! Any advanced users who took advantage of the new hex/base64 string converter decoding tech last week, please check the changelog. I also fixed an issue with the Client API, where it was not properly dealing with file_ids that do not exist. If you were doing manual API jobs and put in a random id as a test and had any problems afterwards, let me know and we'll figure out the fix for your case. The big ugly list of frame location info under options->gui has some new buttons to quickly flip 'remember size/position' and clear the 'last size/position' to several rows at once. I'm going to try to expand these listings to cover more window types--e.g. Mr Bones doesn't have one yet--so let me know if there is anything you would like to have its own size and position and stuff, and the whole thing, if I can get myself in gear, could do with a usability pass. The review services panel will now not be so tall if you have the PTR. I hacked in some expand/collapse tech for a layout box I use all over the place.
Let me know what you think, because I'll probably use it in some other places and figure out collapse memory and stuff. Animations with an fps below 10 (or below 1) will now show to two significant figures, rather than just being rounded to the nearest integer. You'll see 1.2fps and 0.50fps. Thanks to a user, the Client API can now render Ugoiras into apng or animated webp! next week I did some good cleanup this week and moved duplicates auto-resolution just a little bit further forward. I'll keep pushing on that and do more small jobs.
595, 596 Some tags that appear in the sidebar don't appear for some files and are not counted for them in manage tags. They are not siblings or parents. The ones I tried are meta tags, with or without another colon.
>>16437 edit import folder > file log won't open v596, linux, source TypeError object of type 'NoneType' has no len() File "/home/hy/hydrus-596/hydrus/client/gui/importing/ClientGUIFileSeedCache.py", line 960, in _ShowFileSeedCacheFrame panel = EditFileSeedCachePanel( dlg, self._controller, dupe_file_seed_cache ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/hy/hydrus-596/hydrus/client/gui/importing/ClientGUIFileSeedCache.py", line 379, in __init__ self.widget().setLayout( vbox ) File "/home/hy/hydrus-596/hydrus/client/gui/lists/ClientGUIListCtrl.py", line 138, in data display_tuple = tuple( ( HydrusText.GetFirstLine( t ) for t in display_tuple ) ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/hy/hydrus-596/hydrus/client/gui/lists/ClientGUIListCtrl.py", line 138, in <genexpr> display_tuple = tuple( ( HydrusText.GetFirstLine( t ) for t in display_tuple ) ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/hy/hydrus-596/hydrus/core/HydrusText.py", line 229, in GetFirstLine if len( text ) > 0: ^^^^^^^^^^^
>>16436 Windows 10, running it from the .zip version extracted to the desktop. The ffmpeg.exe is there in the bin folder, it's 141mb. >>16436 >open a terminal (in the bin dir if Windows) and type 'ffmpeg -version Is this supposed to work in the command prompt also or only in linux? I tried it and it didn't give any results. Not sure if this is the section you are referring to, but here's what it says in the log: STDOUT Response: b'' STDERR Response: b'' v595, 2024-10-31 06:21:45: Problem parsing mime for: C:\Users\anon\Desktop\9a22d1adb271e4e50aa8052b2c5a27a7.mp4 v595, 2024-10-31 06:21:45: ==== Exception ==== DataMissing: Cannot interact with video because FFMPEG did not return any content. ==== Traceback ==== Traceback (most recent call last): File "hydrus\client\gui\panels\ClientGUIScrolledPanelsReview.py", line 3319, in THREADParseImportablePaths File "hydrus\core\files\HydrusFileHandling.py", line 836, in GetMime File "hydrus\core\files\HydrusVideoHandling.py", line 307, in GetMime File "hydrus\core\files\HydrusVideoHandling.py", line 211, in GetFFMPEGInfoLines hydrus.core.HydrusExceptions.DataMissing: Cannot interact with video because FFMPEG did not return any content. ==== Stack ==== File "threading.py", line 1002, in _bootstrap File "threading.py", line 1045, in _bootstrap_inner File "hydrus\core\HydrusThreading.py", line 452, in run File "hydrus\client\gui\panels\ClientGUIScrolledPanelsReview.py", line 3324, in THREADParseImportablePaths File "hydrus\core\HydrusData.py", line 358, in PrintException File "hydrus\core\HydrusData.py", line 389, in PrintExceptionTuple ===== End =====
Artist here; I've used Hydrus for 5+ years, contacted you several times already, and you've helped me resolve some issues in the past (Mac user having problems updating 40+ versions at once lol). Just wanted to say thank you once again. My library of art references is immense, so easy to peruse, and a joy to organise. This really tickles my autism and helps out my work. I can't wait to explore the potential of Hydrus more in future, for example creating my own private / password-protected online booru for a small team of artists to share and grow. Thanks again!
When trying to download using the twitter profile lookup, I only get this error:

ParseException("Page Parser twitter syndication api profile parser: Content Parser next page: Unable to parse that JSON: JSONDecodeError('Expecting value: line 1 column 1 (char 0)'). Parsing text sample: ")… (Copy note to see full error)

Traceback (most recent call last):
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 2006, in _ParseRawTexts
    j = CG.client_controller.parsing_cache.GetJSON( parsing_text )
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/caches/ClientCaches.py", line 77, in GetJSON
    json_object = json.loads( json_text )
  File "json", line 346, in loads
    return _default_decoder.decode(s)
  File "json.decoder", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "json.decoder", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 2501, in Parse
    parsed_texts = list( self._formula.Parse( parsing_context, parsing_text, collapse_newlines ) )
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 830, in Parse
    raw_texts = self._ParseRawTexts( parsing_context, parsing_text, collapse_newlines )
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 951, in _ParseRawTexts
    stream = formula.Parse( parsing_context, parsing_text, collapse_newlines )
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 830, in Parse
    raw_texts = self._ParseRawTexts( parsing_context, parsing_text, collapse_newlines )
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 2021, in _ParseRawTexts
    raise HydrusExceptions.ParseException( message )
hydrus.core.HydrusExceptions.ParseException: Unable to parse that JSON: JSONDecodeError('Expecting value: line 1 column 1 (char 0)'). Parsing text sample:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 2899, in Parse
    whole_page_parse_results.extend( content_parser.Parse( parsing_context, converted_parsing_text ) )
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 2509, in Parse
    raise e
hydrus.core.HydrusExceptions.ParseException: Content Parser next page: Unable to parse that JSON: JSONDecodeError('Expecting value: line 1 column 1 (char 0)'). Parsing text sample:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/importing/ClientImportGallerySeeds.py", line 505, in WorkOnURL
    all_parse_results = parser.Parse( parsing_context, parsing_text )
  File "/Applications/Hydrus Network.app/Contents/MacOS/hydrus/client/ClientParsing.py", line 2913, in Parse
    raise e
hydrus.core.HydrusExceptions.ParseException: Page Parser twitter syndication api profile parser: Content Parser next page: Unable to parse that JSON: JSONDecodeError('Expecting value: line 1 column 1 (char 0)'). Parsing text sample:
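For what it's worth, 'Expecting value: line 1 column 1 (char 0)' is just what Python's json module raises when it is handed an empty string or something that isn't JSON at all (an HTML error page, say), so this looks like the syndication API returning nothing usable rather than a parse bug on hydrus's side. A minimal sketch reproducing it:

```python
import json

def try_parse(text):
    """Return parsed JSON, or the decoder's error message if the text is not JSON."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        return str(e)

# An empty response body gives exactly the error in the log above.
print(try_parse(''))               # Expecting value: line 1 column 1 (char 0)
print(try_parse('<html></html>'))  # same error: an HTML error page is not JSON
print(try_parse('{"ok": true}'))   # {'ok': True}
```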
Anyone else getting 403s when downloading from pixiv, or is it just me? I tried sending over cookies, but other than that I'm not sure how to troubleshoot this. Is it something to do with Cloudflare?
>>16420 Thanks! I can't promise a huge amount here, but I think I'd like to highlight recognised and easy-to-access prompts in hydrus. I would be interested in seeing the script when you are happy with it.

>>16423 Ah, yeah, see if you can change the search context to 'all my files'. That is an umbrella service that covers all of your local file services. Or, if you turn on help->advanced mode, you should be able to hit up 'multiple/deleted locations' at the bottom of the file service selector menu and then with checkboxes select the union of your processing and archive services. I think that'll work, but a union will probably run super slow in the duplicates search system, so best to use 'all my files' if you can.

>>16424 It can depend, and there are two parts:

Getting Metadata: In the typical booru case, if a file gets a 'previously deleted' result off a URL it has not seen before, it figured that out by fetching the page and seeing a hash that matched up with its local store, or it ended up downloading the file and realising it knew it from before. In both cases, hydrus parsed the page, so no matter what it will assign the metadata to the file, even though it is previously deleted.

Duplicate Files: If you say that A > B, although the metadata (tags, urls, whatever) may be merged from B to A at that time, I do not have permanent content 'sync', so future updates to B will not transfer to A. I do remember that A > B, though, and broadly, some years from now, I expect to implement some sort of retroactive fill-in or sync for content merge across duplicate groups. So--if B gets new metadata today, A won't get it now, but it might get it retroactively in the future.

>>16427 >>16429 Thanks. I will figure something out!

>>16428 I had some captcha tech a million years ago, when hydrus had a 4chan dumper. I actually fetched the challenge and presented it in the UI for the user to fill out.
We've also used CloudScraper for a while, which for a time did a little bit of CloudFlare simple challenge solving. Unfortunately, all captcha shit has become much more complicated in the years since, and with AI nipping at its heels, I am worried it will only get super complicated and then become Real ID. My general feeling, no matter what happens, is that captchas are going to be changing a lot in future, so I want any hydrus integration to be loosely coupled. I can see a future where hydrus is told to somehow talk to that FlareSolverr tool as a third-party tool, but I think I would keep it all like that, as arbitrary API requests that hydrus makes of something else before hitting a domain. I'm sorry to say that either a tightly or loosely coupled solution would be a decent amount of work. There are a bunch of improvements I want to make to my network engine before I think about things like that too--there's still a lot of UI I want to backfill and a proper 'domain manager' with per-domain options for connection times and proxies and things. Ultimately, I think the best and simplest solution we have for now (although I don't know how well this works for Tor) is to log in to complicated places with your browser and copy your cookies and User-Agent over to hydrus either manually or with Hydrus Companion. This fixes most situations, including normal CloudFlare stuff.
>>16439 Thank you for this report. We've had 'ghost tags' sometimes appearing in the sidebar, where they come from a previous search or file selection and don't get removed correctly when you move to a new selection. Let's see if we can differentiate exactly what's going on: When you see these tags, does a client restart clear them up, or are they still there? Can you search for these tags as normal, and do they produce files? For the files that actually appear to have the tags, do the tags show in 'manage tags' there? Under tags->manage tag display and search, do you have any 'Tag filters' set up for your services? Might they cover any of these tags?

>>16440 Thank you, someone else reported this, also in an import folder. You may have seen it re-scanning some already-in-db files. I think something got messed up with _some_ local file imports when I sped up URL loading recently. I will make sure it is fixed for v597.

>>16441 Damn, this is odd. I don't remember seeing that 'did not return any content' problem before. Sorry, yeah, I forgot: for the command prompt thing, if Windows opens Powershell, you might have to run a slightly different command. From the top:
- Shift+right-click on your 'bin' folder and select open in terminal/powershell
- If the thing that loads is Powershell, type 'cmd' to get the old command terminal up
- Now you can do 'ffmpeg -version'
- In Powershell, I think you would do '.\ffmpeg -version'
But by the sounds of it, ffmpeg doesn't want to give you any data at all. Hydrus sees and runs the exe, but both STDOUT and STDERR are blank. Is there any chance you have some anti-virus thing stomping on it? I don't know what would cause it to be silent and not even give a 'sorry, I cannot run because (error)' on STDERR. It seems to be running perfectly fine but not giving any data back. Let's see if the terminal response is any different. If it runs without error but just goes back to the prompt instantly, then I guess it is the same problem.
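As an aside, you can reproduce the blank-STDOUT/STDERR check with a tiny probe like this. It only mirrors the general idea (run the exe, capture both streams); it is not hydrus's actual internals, and the 'ffmpeg' path is a placeholder for wherever your bin\ffmpeg.exe lives:

```python
import subprocess

def probe_ffmpeg(ffmpeg_path='ffmpeg'):
    """Run 'ffmpeg -version', capture both streams, and say what came back."""
    try:
        result = subprocess.run([ffmpeg_path, '-version'],
                                capture_output=True, timeout=15)
    except FileNotFoundError:
        return 'no such executable'
    if not result.stdout and not result.stderr:
        # The weird case in the log above: the exe runs but says nothing at all.
        return 'exe ran but returned nothing on either stream'
    # A healthy ffmpeg prints its version banner; show the first line of whatever came back.
    return (result.stdout or result.stderr).decode('utf-8', 'replace').splitlines()[0]

print(probe_ffmpeg())
```

A healthy install prints something like an 'ffmpeg version ...' banner line; the 'returned nothing' branch matching your log would point at anti-virus or a broken exe.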
The ffmpeg build we use in Windows is pulled from here every week: https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-full.7z Feel free to get it again and swap it into place. Maybe your anti-virus will sperg out when you try to extract it, or maybe your exe got fucked somehow (hdd error?) and you just need a new one. Maaaaybe your Windows 10, if you have that odd 'Media lite' version of Windows (Windows N, is it called?), needs to get some dll so ffmpeg will run properly. Do you have a special version of Windows, like an Education version, or is it just normal Home/Pro?

>>16442 Hell yeah, I am really glad you're getting something out of it!

>>16443 Unfortunately twitter parsing is all fucked. Elon shut down open access soon after the acquisition, and we saw our various ways in shut off one by one. I no longer include that twitter profile search in new installs of the client (you only still have it because you are a long-time user, I think), so I think the best thing is just to delete it. We do, however, have good single-tweet support. Just drop a twitter.com (or x.com, rather) URL on the client, and we use either fxtwitter or vxtwitter, I forget which, to get multi-images, videos, whatever, with very good reliability. But twitter search, last I checked, costs $5,000 a month now.

>>16444 403 does tend to be what CloudFlare gives. I don't know if Pixiv uses CF. You might like to try wiping all your cookies for the pixiv domain(s) under network->data->review session cookies before syncing again, just to make sure it is a clean copy, and make sure your User-Agent (network->data->manage http headers) is also the same as your browser's. CF needs cookies and User-Agent to match to pass the test. Otherwise, a good tactic is to change IP, if you use a VPN.
Sometimes CF or other CDNs apply very, very strict rules for brief periods to swaths of regional IPs or whatever, using dynamic systems that are trying to stop DDoS and so on, and you just get caught up in someone else's bullshit. And lastly, you can try help->debug->network actions->fetch a url. Put in a pixiv URL and see if it is willing to give you the data. It might sperg out at the 403, I'm not sure. Paste it into a text editor and see if any of the english text has CloudFlare challenge shit in it. If it has 'redirecting you to English' or something like that, then there's probably something I'm not handling correctly on my end. Let me know how you get on!
>>16445
>I would be interested in seeing the script when you are happy with it.
Did the changes I wanted to make, so I'd say it's v1.0 now. Maybe I should put it on github or something. https://files.catbox.moe/qx4szm.7z Anyway, hydrus_process_ai_metadata.py is the main file; the other two are auxiliary files with code from other projects. The function you'll want to check is get_metadata_from_image_info, which is the one that handles the various AI software metadata detection, and then maybe get_image_info, process_images_from_hashes and load_json. Also, the final change I did was to rework exif handling using piexif instead of ExifTags from PIL. The reason for that was that I needed to decode exif after the data was already pulled from the image, while PIL needed the full image as input, but you can definitely still do it with PIL. Supposedly piexif better handles various UserComment decoding too, as one file that wasn't detected before the change used ASCII, which the old code didn't support. Here's the old PIL code:

exif_data = image._getexif()
if exif_data:
    for tag, value in exif_data.items():
        if ExifTags.TAGS.get(tag) == "UserComment":
            if value[:8] == b'UNICODE\0':
                return value[8:].decode('utf-16-be', 'ignore')
            else:
                return value[8:].decode('utf-8', 'ignore')
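For reference, the 8-byte prefix dispatch that EXIF UserComment needs looks roughly like this as a stdlib-only sketch. This is not the script's actual code (it uses piexif), and it only covers the two prefixes discussed above plus a UTF-8 fallback like the old code's else branch:

```python
def decode_user_comment(raw: bytes) -> str:
    """Decode an EXIF UserComment payload by its 8-byte character-code prefix."""
    prefix, payload = raw[:8], raw[8:]
    if prefix == b'UNICODE\x00':
        # The spec leaves byte order ambiguous; the old PIL code above assumed
        # UTF-16 BE, which is what many AI tools appear to write.
        return payload.decode('utf-16-be', 'ignore')
    if prefix == b'ASCII\x00\x00\x00':
        # The case the old code missed.
        return payload.decode('ascii', 'ignore')
    # Undefined/unknown character code: fall back to UTF-8, as the old code did.
    return payload.decode('utf-8', 'ignore')

print(decode_user_comment(b'ASCII\x00\x00\x00a cute prompt'))
```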
>>16450 Oh I should probably mention that the if "comment" in image_info: part is supposedly for gifs, but I don't have any gifs with metadata to test it. Also I didn't restrict it to gifs, just in case other formats use that to store metadata.
I have some duplicates to search for at distance 10, and more at distance 12. I would like to process at 12 eventually, but right now I only have the time to finish the 10s. Searching at 12 also covers everything at 10, but how efficient is it? Could Hydrus start searching at 12 using only the files that have not been searched at 10 yet?
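For context on what 'distance' means here: as far as I understand it, hydrus compares 64-bit perceptual hashes by Hamming distance, i.e. the number of differing bits. A toy sketch with made-up hash values:

```python
def hamming_distance(phash_a: int, phash_b: int) -> int:
    """Number of bit positions at which two 64-bit perceptual hashes differ."""
    return bin(phash_a ^ phash_b).count('1')

a = 0b1011001110001111
b = a ^ 0b0000010000000000  # the same hash with one bit flipped
print(hamming_distance(a, b))  # 1

# Every pair within distance 10 is also within distance 12, so a distance-12
# search is a strict superset of a distance-10 search; the question above is
# only whether the distance-10 work can be reused rather than redone.
```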
>>16446 >>>16439 >Under tags->manage tag display and search, do you have any 'Tag filters' set up for your services? Might they cover any of these tags? Oh, all the files have the tag in a different tag service, and only some have it in the service I was looking at! Sorry. Time to merge that old tag service into the new one.
>>16437
>The big ugly list of frame location info under options->gui has some new buttons to quickly flip 'remember size/position' and clear the 'last size/position' to several rows at once.
I flipped some sizes/positions from true to false and vice versa and then tried the reset buttons after that, but they don't seem to change the values back at all? Also, after clicking on the 'apply' button and opening the options again, nothing changes.
>max implicit system:limit in options->search is raised from 100 thousand to 100 million
I thought for old users you didn't force-change it, but it turns out that if the 'no limit' checkbox is checked (which I always had), it resets to 10 thousand. I don't think it is supposed to do that? I guess it should either reset to 100 million or to whatever you actually put in there, and remember it even after checking the 'no limit' checkbox and clicking the 'apply' button.
>>16446 >make sure your User-Agent (network->data->manage http headers) is also the same as your browser. arigato! that seemed to fix the problem. funny because i thought i had already changed that not too long ago
>>16445 >>16420 >>16450 Small update, but I just discovered that ComfyUI metadata can have only "prompt" without "workflow" in Image.info. Probably from some kind of fork or another UI that uses Comfy as a base.
(1.59 MB 256x192 1463002295523-1.gif)

>Use Hydrus for years
>Know I can just drag and drop files to export to my browser for posting
>Only just now realize I can drag and drop files to quickly export to a folder instead of going through a few submenus to do so, manually picking the folder every time
(184.22 KB 894x1007 red anonfilly.png)

>>16469 I didn't know that. Thanks anon.
>>16469 >>16470 I knew that, but as far as I know it doesn't rename the files with the pattern shortcuts that it uses when you right-click -> export. So maybe there should be a checkbox somewhere (export files window or options) to allow renaming with the already existing pattern shortcuts from the 'export files' window when doing drag & drop. Also, I would like a {#} (file order) pattern that fills in zeroes and whose digit count you can adjust. Right now I would use the tool Advanced Renamer for that. Let's say you export 11 files and fill in with zeroes for 4 digits. The files should be exported and renamed as sorted within the thumbnail view and be named: 0001, 0002, 0003, 0004, 0005, 0006, 0007, 0008, 0009, 0010, 0011. Would be nice for further processing like creating PDFs with sorted content and stuff. Thanks, cya!
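As a stopgap outside hydrus, the zero-filling is just fixed-width number formatting. A hypothetical post-export rename sketch (folder, extension, and digit count are placeholders; note it sorts by filename, so it assumes the exported names already encode the intended order):

```python
from pathlib import Path

def zero_pad_names(folder, digits=4, ext='.png'):
    """Rename files in sorted order to 0001.png, 0002.png, ... (adjustable width)."""
    files = sorted(Path(folder).glob('*' + ext))
    new_names = []
    for i, path in enumerate(files, start=1):
        # f'{i:0{digits}d}' left-pads the counter with zeroes to the chosen width.
        target = path.with_name(f'{i:0{digits}d}{ext}')
        path.rename(target)
        new_names.append(target.name)
    return new_names

# e.g. zero_pad_names('C:/exports', digits=7) would give 0000001.png, 0000002.png, ...
```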
I had a good week. I fixed an issue with import folders and improved a bunch of UI quality of life. The release should be as normal tomorrow. There will also be a 'future' build for advanced Windows users to try out.
>>16471
>So maybe there should be a checkbox somewhere (export files window or options) to allow renaming with the already existing pattern shortcuts that are in the 'export files' window when doing drag & drop.
There is. All files I drag-and-drop export have their filenames set to whatever the filename: tag is. I also have all manual imports automatically turn their previous filename into the filename: tag. Only my subscriptions use the hash for the filename when importing and exporting. See pic related. Make sure you check the Discord fix, because it's not just for dicksword and should really be renamed. You can put a variety of things in that textbox too, including patterned auto-naming like those numbers you want.
(22.05 KB 883x760 satisfied horse noises.png)

>>16474 That's cool. Thanks anon.
>>16474 Thx. I actually didn't check the checkbox above the filename pattern box, which needs to be checked to make the rename work with drag n drop. It says it works for <=25 files, though for me it works for up to 50 files before the {#} numbers stop showing with drag n drop. That might still be too few for certain situations, so you'd have to use the right-click export feature, but better than not having it, for sure. The Discord fix one says in the tooltip that it is potentially dangerous and could move your files. I'm not using Discord anyway, so I think I can leave it deactivated, or is it necessary for something other than Discord? Also, how can I rename to the numbers I want? What do I enter to fill in with zeroes as I described earlier (0001, 0002, etc.)? And if that's possible, how do I change the digit length (0000001, etc.)? I don't think that's possible.
>>16476 >The Discord fix one says in the tooltip that it is potentially dangerous and could move your files. I'm not using Discord anyway so i think i can let it deactivated, or is it necessary for something else than Discord? Drag and drop auto-renaming to the reply box on 8moe has never worked without it for me. That's the whole reason I ever found the feature. >Also how can i rename to the numbers i want? What to enter to fill in with zeroes i described earlier (0001, 0002 etc.)? And if thats possible, how to change the digit length (0000001 etc.) ? I don't think thats possible. I swore it was possible, but I can't recall the formatting.
>>16477
>Drag and drop auto-renaming to the reply box on 8moe has never worked without it for me. That's the whole reason I ever found the feature.
Ok. The tooltip of that checkbox says it might move files if the drop destination will consume the files. So for web browser stuff it should be fine, but for drag n drop in your own file explorer it might move them, as I understand it.
>I swore it was possible, but I can't recall the formatting.
It's ok, maybe Hydev can answer this. Also, I checked again whether there is a sorting problem without filled-in zeroes when I create PDFs with IrfanView. It seems the numbers sort correctly, so that wasn't what had problems without filled-in zeroes. Maybe I just find filled-in zeroes nicer and wanted to save the step of putting files into Advanced Renamer, I don't know anymore. It's not that important at the moment :)
https://www.youtube.com/watch?v=Q8Jbel4M4Uc

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v597/Hydrus.Network.597.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v597/Hydrus.Network.597.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v597/Hydrus.Network.597.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v597/Hydrus.Network.597.-.Linux.-.Executable.tar.zst

I had a good week. There are some more bug fixes and improvements to quality of life.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

When I sped up subscription and downloader load time a couple weeks ago, I messed something up with local import folders that caused them to do a one-time re-check of any files they were remembering (i.e. entries where you said 'leave alone, do not reattempt'). It resulted in some wasted work and a UI bug. Thank you for the reports about this. I have fixed the problem, and any duplicate/redundant import objects should be removed on update. Let me know if you have any more trouble!

I worked a bit more on copying tag parents, which wasn't as helpful IRL as I expected. I've moved the default 'copy' behaviour (e.g. when you hit Ctrl+C) back to the old 'just copy without parents', and now the 'copy' menu has a new option for specifically copying with parents if there are any. I also extended support for this to more taglist types and removed some accidental parent-indent garbage in the copy code.

Mr Bones, the export files window, and the manage times, urls, and notes dialogs now all have 'frame location' entries under options->gui. If you always want 'edit notes' to appear on a second monitor or something, you can now set it up.

If you are an advanced user that does parsing stuff, all formulae now have an optional, purely descriptive 'name/description' field.
Feel free to start naming the more obscure parts of ZIPPERs or whatever you are working with. Also, the 'edit ZIPPER' panel, the main 'edit formula' panel (where you can change formula type), and the 'edit string processor' panel all now have import/export/duplicate buttons! Should be a bit easier to copy complicated regex and stuff around.

Win 7

I am not totally sure, but it looks like we lost Win 7 support back in v582 or so. Some of the libraries we use are getting trickier to build on Win 7, and some newer hydrus code simply will not run on Python 3.8, which is the latest Windows 7 can run. I have updated the 'running from source' help to talk specifically about this. We knew this train was coming, and it looks like it is suddenly here. Windows 7 users are stuck on source at ~v582 until they update Windows or move to Linux/macOS.

Thanks for using hydrus!

future build

Only for advanced users! I am making another future build this week, but just for Windows. This is a special build with libraries that I would like advanced users to test out so I know they are safe to fold into the normal release. It is newer SQLite and mpv dlls this time. More info in the post here: https://github.com/hydrusnetwork/hydrus/releases/tag/v597-future-1

next week

I have six weeks of work left before Christmas. I am not confident I will get duplicates auto-resolution working in time, but it'd be nice, and I'm having a great time doing code cleanup and widget overhaul along with it, so I'll keep on trucking like this until the end of the year, I think. So, more small jobs and overall cleanup.
>>16478 >So for webbrowser stuff it should be good but for drag n drop in your own file explorer, it might move them, as i understand it. Hasn't done so for me.
>>16480 >music Good stuff.
>>16480 > 'name/description' field. > import/export/duplicate buttons! Yay!
babby's first parser
>>16480 Future build v587, windows server 2022. Getting some artifacts in the viewer on a webm. Sometimes the screen is just black, sometimes it has the grid lines overlay. Don't see them when opening the file in an external mpv. File import wizard seems slower, but that's purely subjective.
>>16486 Meant future build v597.
Same anon >>16486
On upgrade to the regular v597, got the following error.

v597, 2024-11-07 09:37:14:
==== Exception ====
DBException: AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
Database Traceback (most recent call last):
  File "hydrus\core\HydrusDB.py", line 624, in _ProcessJob
  File "hydrus\client\db\ClientDB.py", line 11771, in _Write
  File "hydrus\client\db\ClientDBSerialisable.py", line 599, in SetJSONDump
  File "hydrus\core\HydrusSerialisable.py", line 421, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportLocal.py", line 721, in _GetSerialisableInfo
  File "hydrus\core\HydrusSerialisable.py", line 291, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 2477, in _GetSerialisableInfo
AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
==== Traceback ====
Traceback (most recent call last):
  File "hydrus\client\importing\ClientImportLocal.py", line 1472, in MainLoop
  File "hydrus\client\importing\ClientImportLocal.py", line 1366, in _DoWork
  File "hydrus\client\importing\ClientImportLocal.py", line 1204, in DoWork
  File "hydrus\core\HydrusController.py", line 959, in WriteSynchronous
  File "hydrus\core\HydrusController.py", line 247, in _Write
  File "hydrus\core\HydrusDB.py", line 1006, in Write
  File "hydrus\core\HydrusDBBase.py", line 327, in GetResult
hydrus.core.HydrusExceptions.DBException: AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
Database Traceback (most recent call last):
  File "hydrus\core\HydrusDB.py", line 624, in _ProcessJob
  File "hydrus\client\db\ClientDB.py", line 11771, in _Write
  File "hydrus\client\db\ClientDBSerialisable.py", line 599, in SetJSONDump
  File "hydrus\core\HydrusSerialisable.py", line 421, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportLocal.py", line 721, in _GetSerialisableInfo
  File "hydrus\core\HydrusSerialisable.py", line 291, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 2477, in _GetSerialisableInfo
AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
==== Stack ====
  File "threading.py", line 1002, in _bootstrap
  File "threading.py", line 1045, in _bootstrap_inner
  File "hydrus\core\HydrusThreading.py", line 451, in run
  File "hydrus\client\importing\ClientImportLocal.py", line 1478, in MainLoop
  File "hydrus\core\HydrusData.py", line 358, in PrintException
  File "hydrus\core\HydrusData.py", line 389, in PrintExceptionTuple
===== End =====

v597, 2024-11-07 09:37:15: There was an unexpected problem during import folders work! They will not run again this boot. A full traceback of this error should be written to the log.

AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
Database Traceback (most recent call last):
  File "hydrus\core\HydrusDB.py", line 624, in _ProcessJob
  File "hydrus\client\db\ClientDB.py", line 11771, in _Write
  File "hydrus\client\db\ClientDBSerialisable.py", line 599, in SetJSONDump
  File "hydrus\core\HydrusSerialisable.py", line 421, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportLocal.py", line 721, in _GetSerialisableInfo
  File "hydrus\core\HydrusSerialisable.py", line 291, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 2477, in _GetSerialisableInfo
AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
>>16450 Ok, I put it on github: https://github.com/dark-edgelord/hydrus-process-ai-metadata I wonder if the guy from the last thread I told I'm making this is still around.
>>16445 >>>16423 >Ah, yeah, see if you can change the search context to 'all my files'. I do it every time.
>>16445 >and I broadly, some years from now, expect to implement some sort of retroactive fill-in or sync for content merge across duplicate groups. So--if B gets new metadata today, A won't get it now, but it might get it retroactively in the future. Thanks, that's what I was asking about. It is currently difficult to do it because of the very deep menu.
>>16489 I can confirm it works with stable diffusion exports, and that it doesn't work on Tavern cards, not that it's supposed to. Having to copy and paste hashes into files is cumbersome, especially if it's to be done often; it'd be much better served as a drag 'n' drop that reads those hashes, but I have no idea how to do that through python.
>>16492 Yeah, I have no idea what else could be done or if I would even be able to do it, reading hashes from a text file felt like the best option. You could probably make some kind of a gui where you drag and drop files from hydrus and it just reads the filenames, but having to launch a gui for that sounds just as annoying and the amount of work it would require doesn't sound worth the small improvement. And it also wouldn't work for people who use custom filenames when drag and dropping outside hydrus. And I didn't want to do it like the other script, where it reads the metadata as it imports files from a folder. You can read my reasoning at the bottom of the readme.
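If anyone does want to try the drag-and-drop angle without a GUI: files dropped onto a script (or a .bat wrapper that calls it) arrive as command-line arguments, and since hydrus exports typically use the SHA-256 hash as the filename, you can filter for that pattern. A hypothetical sketch, not part of the actual script, and it indeed won't work for custom filenames:

```python
import re
import sys
from pathlib import Path

# A hydrus SHA-256 filename is 64 lowercase hex characters.
HASH_RE = re.compile(r'^[0-9a-f]{64}$')

def hashes_from_paths(paths):
    """Pull SHA-256 hashes out of filenames like <hash>.png; skip custom names."""
    hashes = []
    for p in paths:
        stem = Path(p).stem.lower()
        if HASH_RE.match(stem):
            hashes.append(stem)
    return hashes

if __name__ == '__main__':
    # Files dropped onto the script land in sys.argv after the script name.
    for h in hashes_from_paths(sys.argv[1:]):
        print(h)
```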
A bunch of my gelbooru subs got this error:

v597, win32, frozen
DBException
AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
Traceback (most recent call last):
  File "hydrus\client\importing\ClientImportSubscriptions.py", line 1669, in Sync
  File "hydrus\client\importing\ClientImportSubscriptions.py", line 340, in _SyncQueries
  File "hydrus\core\HydrusController.py", line 959, in WriteSynchronous
  File "hydrus\core\HydrusController.py", line 247, in _Write
  File "hydrus\core\HydrusDB.py", line 1006, in Write
  File "hydrus\core\HydrusDBBase.py", line 327, in GetResult
hydrus.core.HydrusExceptions.DBException: AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
Database Traceback (most recent call last):
  File "hydrus\core\HydrusDB.py", line 624, in _ProcessJob
  File "hydrus\client\db\ClientDB.py", line 11771, in _Write
  File "hydrus\client\db\ClientDBSerialisable.py", line 599, in SetJSONDump
  File "hydrus\core\HydrusSerialisable.py", line 421, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportSubscriptionQuery.py", line 44, in _GetSerialisableInfo
  File "hydrus\core\HydrusSerialisable.py", line 291, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 2477, in _GetSerialisableInfo
AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
Database Traceback (most recent call last):
  File "hydrus\core\HydrusDB.py", line 624, in _ProcessJob
  File "hydrus\client\db\ClientDB.py", line 11771, in _Write
  File "hydrus\client\db\ClientDBSerialisable.py", line 599, in SetJSONDump
  File "hydrus\core\HydrusSerialisable.py", line 421, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportSubscriptionQuery.py", line 44, in _GetSerialisableInfo
  File "hydrus\core\HydrusSerialisable.py", line 291, in GetSerialisableTuple
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 2477, in _GetSerialisableInfo
AttributeError: 'list' object has no attribute 'GetSerialisableTuple'
>>16488 >>16494 Hey, I am sorry for this! I screwed up something with the file log dedupe tech last week. I built a whole test to check my fix was working, but it wasn't good enough. Nothing is really 'getting broken' here; the importers just aren't saving back correctly, and the whole system is stopping to stay safe. It is fixed for next week, and on the master branch now if you run from source (in which case, just git pull as normal and you should be fixed). An odd v597 fix for right now, I believe, is to go into the affected 'file log' of the import folder or subscription, remove any one item, then ok the dialog to save it all back.
(6.13 KB 512x167 e621 pool search.png)

My shitty e621 pool downloader has been importing images in reverse order in the resulting gallery page since a recent update. Any way to fix this? I was relying on the old behavior to give them page tags.
(452.81 KB 2166x3000 nopony-rfr59.png)

>>16485 >still claiming to be based >banned every polack on sight
>>16450 >>16451 >>16466
Thanks, looks great! I will refer to this as I poke around on my side of things.

>>16455
It is a good bit more efficient to search at 12, which will naturally get all the <12 as it works, but it isn't a huge huge deal. It will make your list of potential duplicates huge, so maybe it will slow some search down as you focus on 10 stuff first. If you already have 600,000+ potential duplicate pairs, I'd hit the brake pedal for now. I'll note that I have not seen much use in searching at greater than 8, which is why hydrus defaults cap out there (speculative, I think I call it?). Can you say how you have found 10- and now 12-distance pairs? Does it produce a lot of pairs overall, or not that many, now you have, presumably, cleared your <=8 distance ones? Are you seeing many false positives in the 10-12 range?

>>16457
No worries. If you haven't hit it before, check out tags->migrate tags, which may be able to do your merge in one job. Let me know if you run into any trouble!

>>16458
Can you give me a more concrete example of it not working here? I am not sure I understand. When I click the flip buttons, it all changes and saves correctly through a dialog ok, and if I click the 'reset' stuff, the saved sizes/coordinates change to None and the respective dialog resets on its next load to its default size or position based on its 'parent gravity' and 'appear on top-left/center of parent' settings. For the system limit, I don't have a good way of saving, like, 'what you had it set to before' on my various 'noneable' controls--I either save the value or 'none', and so when any dialog spawns on a None value, I generally stick the options default or a 'sounds good' value in as the default for the integer if the user decides to uncheck the None. There isn't a nice solution here, but in future I may be able to migrate to an options system that remembers what the integer was before if it is currently None, and we'll have better memory here.
I think 10k files is a good default value here that works for most users who need this.

>>16469 >>16470
Thanks for mentioning this. I will highlight this better in the getting started help for future users.

>>16476
You don't need the 'secret' discord fix to make the pattern renaming work, only the 'normal' discord fix. You only need the secret one if the normal discord drag and drop checkbox doesn't fix discord. The 'secret' thing sets a 'move' flag instead of a copy flag, it is a dumb permission thing, and if you don't know you need it you should turn it off. I can't rename files without first exporting them to your temp dir, which is why you need the first thing checked.

>>16477 >>16478
Check the normal export files dialog (off the thumbnail->share menu) to play with the export pattern rules live. It isn't very clever, and I don't like it and want to improve it a LOT. I wanted to use the shit in options->tag presentation that edits the thumbnail banners, but it didn't work out for a few reasons. I think it'll need to be a richer object that can deal with conditionals like 'if there are character tags, add " - (list of characters)"'. I don't think you can do zero-backfilling atm. Maybe I should just let users put in a line of python formatting tbh, maybe as an advanced option. I'm interested in your thoughts on what you'd like here in a future update.
>>16481
You have to hold down Alt before the DnD starts to activate the secret mode, just to ensure users don't accidentally do a DnD to a normal folder and move the files out of their file store. I don't suggest attempting it, since even if it does work, you just won a headache wew. That said, if you also have the 'copy to temp folder before DnD', I guess it only sets the move flag on the temp files so no big deal. Or maybe I disable the secret mode if you do the temp folder thing. I dunno, I haven't touched that thing in several years. I'll look into it and clean it up.

>>16485
Well done! I'm well aware of how clunky the parsing UI is, so what was the most annoying thing in the UI to wrestle with; what could I do to have made this process nicer?

>>16486
Thank you for this report! If it isn't a pain, can you test that exact same webm on the normal v597 with the old mpv dll? The mpv program is different to the 'libmpv' dll we use in a couple of unusual ways, so it isn't always a helpful comparison. Are most webms or gifs ok in the future build, or are the gridlines common? Have you ever seen these artifacts before on Windows Server?

>>16490
I am sorry, I think I have misunderstood something. When you say '"downloaders processing" is selected' in >>16423, what do you mean? I thought you were setting the file domain to that, and thus anything outside of that domain was not being searched, but if you set the file domain to 'all my files', what is being set as 'downloaders processing'?

>>16496
I am sorry for the trouble. You can force things to reverse by going into the formula (probably of your URL content parser) and clicking the 'string processing' button, and adding a 'String Sorter', and set the sort type to 'reverse'. I am afraid I do not remember changing anything in the parsing system that would affect sort in recent weeks, but if you discover more about this problem as you poke around, please let me know.
Is there any chance that they started putting that data in reverse order in their HTML or whatever? Or, now I think of it, is there any chance that the data here could be affected by zero-padding or 'human' (2 < 11) rather than 'lexicographic' (11 < 2) sort? I think I changed some sort stuff somewhere to handle 2 < 11 better, but I think it was in tags, not the parsing system.
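For reference, the 2 < 11 thing looks like this in practice; the usual fix is a 'natural' sort key that splits out digit runs and compares them as integers (a generic sketch, not hydrus's actual sorting code):

```python
import re

def natural_key(s):
    # split into alternating non-digit/digit runs; compare digit runs as ints
    return [int(part) if part.isdigit() else part for part in re.split(r'(\d+)', s)]

names = ['page 11', 'page 2', 'page 1']

lexicographic = sorted(names)            # '11' sorts before '2'
human = sorted(names, key=natural_key)   # 2 sorts before 11
```

If a parser sorted page URLs lexicographically, 'page 11' would land between 'page 1' and 'page 2', which can look a lot like a reversed or shuffled import order.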
>>16502 >Can you give me a more concrete example of it not working here? I am not sure I understand. When I click the flip buttons, it all changes and saves correctly through a dialog ok, and if I click the 'reset' stuff, the saved sizes/coordinates change to None and the respective dialog resets on its next load to its default size or position based on its 'parent gravity' and 'appear on top-left/center of parent' settings. Oh now i understand. I thought the 'reset last size/position' buttons would reset the 'remember size/position' True/False values to whatever default values hydrus installed with. But they reset what they are saying to reset. My mistake. >I think 10k files is a good default vaule here My mistake again! I thought the new default value would be 100 million. I overread the 'max' part. That means i just can't put in anything higher than that (i just tried it), got it! I thought already, that the jump from 10k to 100 million seems bit high lol. >I can't rename files without first exporting them to your temp dir, which is why you need the first thing checked. Ah understood. In case anybody worries about data privacy or security: does that mean that if you only use Hydrus on an external drive and use that option, files you DnD from Hydrus into a folder on the same external drive, it first gets copied to C: then renamed, copied back and then deleted from C: with a potential recovery possible? Would be very inefficient it seems, specially for big files, but i didn't try it (coz i don't have an external install at the moment, but im interested in the answer). Or does the 'temp dir' stay on the external drive somewhere in the Hydrus folder, or where exactly? Thank you for clarification already!
>>16502 >I'll note that I have not seen much use in searching at greater than 8, which is why hydrus defaults cap out there (speculative, I think I call it?). Can you say how you have found 10- and now 12-distance pairs? Does it produce a lot of pairs overall, or not that many, now you have, presumably, cleared your <=8 distance ones? Are you seeing many false positives in the 10-12 range? I think all 12-distance are false positives. I only search for duplicates of specific files, to get them tagged and to mark a file from one source as already in another source, or when, like with DeviantArt, the highest quality can only be downloaded manually. Maybe I don't need 12. I hoped it to possibly find sketches and finished drawings, and sometimes it seems that duplicates are not found, but it is hard to check because of the deep menus and the correct file service not always being selected. I do have 260k potential pairs at 10 and 654k at 12.
>>16503 >>>16490 >I am sorry, I think I have misunderstood something. When you say '"downloaders processing" is selected' in >>16423, what do you mean? I thought you were setting the file domain to that, and thus anything outside of that domain was not being searched, but if you set the file domain to 'all my files', what is being set as 'downloaders processing'? I mean, I HAVE to do it every time, but sometimes I forget, because I run the duplicate filter on particular files, which I select on a page that can have one or another domain scope.
>>16503
>>16486
>Thank you for this report! If it isn't a pain, can you test that exact same webm on the normal v597 with the old mpv dll? The mpv program is different to the 'libmpv' dll we use in a couple of unusual ways, so it isn't always a helpful comparison. Are most webms or gifs ok in the future build, or are the gridlines common? Have you ever seen these artifacts before on Windows Server?
I can't easily find that exact webm again. :/ I can say that I had issues with a bunch of diff webms in the future build with the grid lines, but not on gifs. It would be intermittent as well: I could cursor between webm and image, and it would sometimes have grid lines, sometimes be black, and sometimes be OK. I have had no issues with video in the past 20 or more regular builds; I upgrade fairly regularly.
>>16502
>search distances 10 and 12
Distance 8 is definitely the right choice for the default max distance. I've found that search Distance 8 is very accurate, but it does not quite always find everything. Distance 10 is like 99% unrelated false positives, but it does occasionally turn up a pair of related files in my ~600k database. Usually alternates in an image set. I process duplicates in stages, starting at Distance 0, then going up to 2, 4, 6, 8, and lastly 10. Doing it this way seems to be more time efficient to me. I've gone through roughly 100k pairs at Distance 10. The main issue with Distance 10 is that the algorithm kind of breaks down and goes insane once all the Distance 8 and lower duplicates are already done. The vast, VAST majority of duplicates found at 10 have no conceivable reason as to why they would be considered duplicates at all. But it does sometimes turn up positive results. Processing them is quick because the "duplicates" are so dissimilar. Pic related are a few Distance 10 "duplicates" to each other. Going from Distance 8 to Distance 10 also added several hundred thousand duplicates. It's a lot of processing for very little return. Have not tried Distance 12 yet, but I've spent a lot of time working with Distance 10, and I can't imagine it would pick up much that would be worth the time to scan everything at that distance and then manually comb through. It almost certainly would be virtually 100% false positives.
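For anyone wondering what 'distance' means in these posts: as far as I know it is the Hamming distance between 64-bit perceptual hashes, i.e. how many bits differ. A rough sketch of the staged processing described above, with made-up hash values:

```python
def hamming(a, b):
    # count of differing bits between two 64-bit perceptual hashes
    return bin(a ^ b).count('1')

# made-up example phash pairs
pairs = [
    (0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0F0),  # distance 0: exact phash match
    (0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0F3),  # distance 2: near-identical
    (0xF0F0F0F0F0F0F0F0, 0x0F0F0F0F0F0F0F0F),  # distance 64: unrelated
]

# process in stages, nearest (most likely true dupes) first
stages = [0, 2, 4, 6, 8, 10]
for max_d in stages:
    batch = [(a, b) for a, b in pairs if hamming(a, b) <= max_d]
```

Since there are only 64 bits, a looser max distance admits exponentially more near-random collisions, which matches the "goes insane past 8" experience above.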
>>16512 I work with a much smaller set of files and use distance 12. It still misses some things that seem like they would be obvious, including color palette swaps.
>>16515 Here are three pairs detected at distance 12, but not 10.
>>16516 Ultimately I think it comes down to making the tool fit the need. Obviously if you're processing hundreds of thousands of files, those farther distances have too much noise in them. I process batches of 200-400 files at a time, and have around 28,000 image files, and duplicate pair processing usually doesn't take me long, though obvious false dupes are about equally common as dupes and alts.
>>16509
Thank you, this is very helpful. The mpv dll I tried in the future build was really quite new, specifically it was this one:
https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20241020-git-37159a8.7z
You can see the date in the URL, it is just from a few weeks ago. I am afraid I am going to ask you to do some homework for me. Can you please go here:
https://sourceforge.net/projects/mpv-player-windows/files/libmpv/
And scroll down a bit and try some other dlls on your future build test extract and tell me when the gridlines go away for you? The test process will be:
- load up a search in your test client that has a webm you know renders bad
- close the client
- select an x86_64 version (not x86_64-v3) mpv date to try (or pick a link from below, starting from the earliest)
- delete the current 'mpv-2.dll' in your future test install dir
- extract the 'libmpv-2.dll' to your install dir and rename it mpv-2.dll
- boot the client, test the webms
Here are some links, going forward a month at a time:
https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20240623-git-265056f.7z/download
https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20240721-git-e509ec0.7z/download
https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20240818-git-a3baf94.7z/download
https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20240915-git-ef19a4a.7z/download
Any good? I do not need you to identify exactly which release marks the break/work point for you, but if we discover that something in 2024-07 works ok, then I can fold that in instead and feel safer about it all. I don't like to upgrade to the bleeding edge and am comfortable going back a few months, but I think we are on 2023-10 right now, and I understand it has some gif problems in any case, so I do want to update it. If 2024-06 doesn't work, that's a bit annoying.
If you are willing to put the time in, please go search that download page for roughly the earliest that is good. If nothing in 2024 works for you, but it all seems to work fine on normal Windows, I may have to move up anyway and instruct people on less typical OSes to manually swap out the dll for an early one or run from source (where you do this step yourself anyway) or similar. I presume that Windows Server has some different media dlls to normal Windows, perhaps some .NET media thing, and your OS hasn't had the update yet--or a similar situation with GPU drivers. Let me know how you get on!
I had a great week. I cleaned a ton of code and fixed the recent serialisation/saving issue some users had with import folders or subscriptions. Large import pages work faster, and clients with many tag services have a new way of displaying them. The release should be as normal tomorrow.
(8.01 KB 314x180 Screenshot (40).png)

Is there a way for Hydrus to do infinite scroll, where it doesn't load images until I scroll down to a certain point? Limit searching is fine, but from what I noticed, you have to apply a new limit search each time you make it to the end. Infinite scroll would help a lot when trying to view too many images at once. My biggest issue is that searching with limits, and any kind of searching, is only for "file search" pages. Gallery and url downloading pages need some kind of limit as well. I'm one of the poor souls who uses pixiv a lot and from what I noticed, a single artist has too much shit ranging in the thousands. Those image sets are a Hydrus killer, some of them can have up to over a hundred images. This isn't even an AI-only thing either, they've always been like this. I see why boorus don't like to fully scrape from pixiv. These guys are pushing out too much, it's insane. I stumbled upon pic related from 1 (ONE) artist without actually looking in the manga section and now my Hydrus has been stuck fetching images for nearly a week and not even close to being finished. I'ma start calling these gallery sinkholes because they are common on that site. Obviously I don't want to have to load 10k images when I go to check on my downloads, but some kind of limit would help.
>>16520
Yeah, Pixiv artists like to do the occasional compilation dump of dozens of unrelated images. Usually without bothering to tag anything. No idea why, just what they do. I primarily rely on bookmarking posts and following artists inside Pixiv, rather than Hydrus subscriptions, for that very reason. I just dump all my bookmarks' URLs into a Hydrus URL downloader occasionally. Pixiv is great for discovery, awful for archiving.
>>16520
Why don't you make it a subscription instead of a downloader? You can set a limit for that, and the default is 100. If you hit the limit on a subscription, it will prompt you to make a downloader to cover what it couldn't get. You can always just immediately disable the subscription after the first check if you wanted it to be a one-time downloader.
>>16522 you can set limits for downloaders too
https://www.youtube.com/watch?v=ERHk4wPOPKw

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v598/Hydrus.Network.598.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v598/Hydrus.Network.598.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v598/Hydrus.Network.598.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v598/Hydrus.Network.598.-.Linux.-.Executable.tar.zst

I had a great week fixing bugs and cleaning code.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

fixing some mistakes

First off, I apologise to those who were hit by the 'serialisation' problems where certain importers were not saving correctly. I screwed up my import folder deduplication code last week; I had a test to make sure the deduplication transformation worked, but the test missed that some importers were not saving correctly afterwards. If you were hit by an import folder, subscription, or downloader page that would not save, this is now completely fixed. Nothing was damaged (it just could not save new work), and you do not have to do anything, so please just unpause anything that was paused and you should return to normal. I hate having these errors, which are basically just a typo, so I have rejigged my testing regime to explicitly check for this with all my weekly changes. I hope it will not happen again, or at least not so stupidly. Let me know if you have any more trouble!

Relatedly, I went on a code-cleaning binge this week and hammered out a couple hundred 'linting' (code-checking) warnings, and found a handful of small true-positive problems in the mess. I've cleared out a whole haystack here, and I am determined to keep it clean, so future needles should stick out.

other stuff

I moved around a bunch of the checkboxes in the options dialog.
Stuff that was in the options->tags and options->search pages is separated into file search, tag editing, and tag autocomplete tabs. The drag and drop options are also overhauled and moved to a new options->exporting page.

I rewrote the main 'ListBook' widget that the options dialog uses (where you have a list on the left that chooses panels on the right). If you have many tag services and they do not fit with the normal tabbed notebook, then under the new options->tag editing, you can now set to convert all tag service dialogs to use a ListBook instead. Everything works the same, it is just a different shape of widget.

A page now only uses the first n files (default 4096) to compute its 'selection tags' list when no files are selected. This saves a bunch of update CPU time on big pages, particularly if you are looking at a big importer page that is continuously adding new files. You can change the n, including removing it entirely, under options->tag presentation.

If you are an advanced downloader maker, 'subsidiary page parsers' are now import/export/duplicate-able under the parsing UI.

job listing

I was recently contacted by a recruiter at Spellbrush, which is a research firm training AI models to produce anime characters, and now looking to get into games. I cannot apply for IRL reasons, and I am happy working on hydrus, but I talked with the guy and he was sensible and professional and understood the culture. There are several anime-fluent programmers in the hydrus community, so I offered to put the listings up on my weekly post today. If you have some experience and are interested in getting paid to do this, please check it out:

Spellbrush design and train the diffusion models powering both nijijourney and midjourney -- some of the largest-parameter count diffusion models in the world, with a unique focus on anime-style aesthetics.
Our team is one of the strongest in the world, many of whom graduated from top universities like MIT and Harvard, worked on AI research at companies like Tencent, Google Deepmind, and Meta, and we have two international math olympiad medalists on our team. We're looking for a generalist engineer to help us with various projects from architecting and building out our GPU orchestrator, to managing our data pipelines. We have one of the largest GPU inference clusters in the world outside of FAANG, spanning multiple physical datacenters. There's no shortage of interesting distributed systems and data challenges to solve when generating anime images at our scale. Please note that this is not a remote role. We will sponsor work visas to Tokyo or San Francisco if necessary!

Software Engineer
https://jobs.ashbyhq.com/spellbrush/550b3de6-2c6d-4a80-aa3b-a530b6e48464

AI Infra Engineer
https://jobs.ashbyhq.com/spellbrush/55633abd-f242-4e43-b390-4508d7bb65ea

next week

I did not find time for much duplicates auto-resolution work this week, so back to that.
(345.85 KB 702x658 075d4.png)

>>16524 >We will sponsor work visas to Tokyo or San Francisco if necessary! Eww! The job offer is tempting, the prospect of foreign multi-racial team is not, I guess it has to do with the alien nature of the money funding the enterprise.
>>16524
Running from source on Manjaro Linux. I updated from v596 to v598 and MPV cannot be found. I refreshed the venv using the (N)ew and also (T)est MPV options, but no success so far. By the way, this Manjaro install is a fresh one and it is possible some system libraries are no longer present.
>>16529
Same anon here. I fixed it. Re-reading my post >>16529 about missing system libraries, I had a light-bulb moment. In the previous Manjaro install, the MPV player was installed, and checking its libraries I found it ships a libmpv.so. So I installed mpv and now the MPV viewer in Hydrus is working as expected.
>>16522
I don't think that's what I'm looking for. I'll be honest, I rarely mess with subscriptions, but from what little I use it, it'd still be pointless. Hydrus picks up each image and image set based on url, so if I tell hydrus to grab the first 5 urls I'd get something like this on pixiv:
>url 1 [single image]
>url 2 [single image]
>url 3 [image set of 10 images]
>url 4 [image set of 10 images]
>url 5 [image set of 100 images]
Hydrus will look at that as 5 urls, but with each url containing multiple images. So if there's 10 urls like [image set of 100 images], that's already pushing 1,000 images, which is what I've been seeing on that site. I don't think there's anything you can do about this, however, and I feel like I'm going off topic from what I originally wanted. I just want hydrus to not fetch and display 10k images all at once when I go to check on my gallery and url downloads. I still want to download them, just not waste resources trying to display them all at once. A display limit like with the search page, but for gallery downloaders, is what I want. That's why I suggested an infinite scroll type function. You scroll down to a point, then Hydrus would fetch another batch, where you could set how many it can fetch. I used to hate infinite scrolling on most sites, but for Hydrus it makes the most sense.
>>16518
I tried about 10 different libmpv dlls, but none of them resolved the issue. Some dlls showed grid lines, others just gave an error in hydrus. Example: 20240304

v597, win32, frozen
ValueError
('Invalid value for mpv parameter', -4, (<MpvHandle object at 0x0000019619A63E50>, <mpv.LP_MpvNode object at 0x000001961B41AA50>, <mpv.LP_MpvNode object at 0x000001961B41A9D0>))
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1637, in SetMedia
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1294, in SetMedia
File "hydrus\client\gui\canvas\ClientGUICanvasMedia.py", line 2214, in SetMedia
File "hydrus\client\gui\canvas\ClientGUICanvasMedia.py", line 1587, in _MakeMediaWindow
File "hydrus\client\gui\ClientGUI.py", line 7518, in GetMPVWidget
File "hydrus\client\gui\canvas\ClientGUIMPV.py", line 293, in init
File "mpv.py", line 1338, in loadfile
File "mpv.py", line 1229, in command
File "mpv.py", line 142, in raise_for_ec

I need to correct myself: both animated gifs and webms ARE affected.
(19.74 KB 477x1100 hydrus_client_HayDdaKoLE.png)

I doubt this is supposed to happen.
(1.03 MB 400x400 no_bra.gif)

For some reason this file is playing really really fast on Hydrus 596. It didn't always do that.
Is redgifs download fucked? They seem to have changed something and when I try to open the link in browser and view source and visit what appears to be the file link I get blocked. >>16538 I've noticed this on a few files as well, including a few webms. Can fish some out if needed
>>16535
Wanted to report this visual bug too, found on all three tabs of a new 'duplicates processing' page. Also Hydrus has notified me that 'meta:contentious content' was blacklisted for pending tags to the PTR, it seems. Just to understand it right: does that mean this tag doesn't get uploaded to the PTR anymore? What is the reason? Is it because 'contentious content' has no real definition and is defined differently by everyone? What about the 778k files in the PTR that already have that tag? Will they keep it or lose it in the future?
>>16538
https://onlinegiftools.com/change-gif-speed
Put in the 'fast' gif and check the frame delays. As you can see, the info says:

Total frames: 64
Input GIF duration: 0.86s
All Input GIF Delays:
Frame 1: 70ms
Frames 2-22, 24-37, 39-63: 10ms
Frames 23, 38: 50ms
Frame 64: 90ms

Afaik that means the 'fast' playback you are complaining about seems to be the real playback speed. Most of the frames play at 10ms. They are set up like that. The question would be: what is the intended speed of the creator, and what tools were used to create the gif with those custom delays? I think the intended delay would be 100ms instead of 10ms for the frames that have it (Frames 2-22, 24-37, 39-63), that looks correct. Some mistake must have happened creating the gif. Players/browsers (or websites?) that play it slower have a minimum frame delay for each frame (like 100ms) and/or cannot play custom delays for individual frames, and therefore play it wrong. Cannot pinpoint which it is in chrome for example (also not how it was in older Hydrus versions). Seems to be something around 95ms, weirdly enough. Maybe that's the average of all individual frame delays. Now you will probably find more gifs that play fast. My image viewer 'irfanview' also plays them fast, therefore correctly i guess. Not sure why .webms should change though like the other guy suggested, but i cannot comment on that. You can save the new gif, but of course the hash changes, and the filesize ever so slightly too. But better wait till Hydev answers, he should know better what to do.
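If you want to check your own gifs without a website: per-frame delays live in each Graphic Control Extension block (bytes 21 F9 04, then a flags byte, then a little-endian delay in hundredths of a second). Here is a quick-and-dirty stdlib scan -- not a full GIF parser, so in theory it can false-positive if that byte pattern happens to occur inside compressed image data:

```python
import struct

def gif_frame_delays(data: bytes):
    """Return per-frame delays in milliseconds by scanning for
    Graphic Control Extension blocks (0x21 0xF9 0x04)."""
    delays = []
    i = data.find(b'\x21\xf9\x04')
    while i != -1:
        # block layout: 21 F9 04 <flags> <delay lo> <delay hi> <transparency idx> 00
        (centiseconds,) = struct.unpack_from('<H', data, i + 4)
        delays.append(centiseconds * 10)  # GIF delays are in 1/100 s units
        i = data.find(b'\x21\xf9\x04', i + 1)
    return delays
```

Run it over `open('file.gif', 'rb').read()`; on a gif like the one above it should report mostly 10ms frames, matching what the website said.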
>>16541 >Maybe thats the average of all individual frame delays. I mean average if the 10ms times would be discarded because they are too fast and they would be exchanged for 100ms frames or something like that.
>>16538 >>16539 >>16541
Okay, I tried to attach some webms that play fast in the native viewer yet fine in external mpv, but 8chan says it's a format not supported by the site. Bizarrely enough, `file` says that at least the one I tested is an EBML file with the creator \004webm. I've never heard of that, apparently it has something to do with .mkv? I'm guessing whoever is encoding these is fucking them up somehow. The artist sky_necko on e621 has a high rate of fast webms, about 3/4 of the ones I have saved play fast. Here's an example: https://e621.net/posts/5114166 Obviously furshit so don't click if you're triggered by that.
Random suggestion: search history that persists through restarts.
>>16504
>Ah understood. In case anybody worries about data privacy or security: does that mean that if you only use Hydrus on an external drive and use that option, files you DnD from Hydrus into a folder on the same external drive, it first gets copied to C: then renamed, copied back and then deleted from C: with a potential recovery possible?
For temp dir privacy, I think the bulletpoints are:
- on boot, hydrus creates a new directory in your temp dir and does all its work in there. SQLite spools its larger transaction stuff to there too
- on a clean program exit, the directory is deleted
- on a non-clean program exit (i.e. a crash), the directory is not deleted and it is now up to your OS (or you) to eventually get to it
- every import file travels through this temp dir. it is deleted moments after the import is finished
- every small drag and drop with that compatibility mode exports the files to a new sub-dir inside our temp dir. the change I made last week is to delete this subdir after six hours instead of waiting until program exit.

The DnD mode only does the temp dir thing if the DnD is <=50 files and <250MB I think, to keep things snappy if you are moving heavy stuff just from one tab to another. If you cannot trust your temp dir, you can choose a different location using the '--temp_dir' launch switch:
https://hydrusnetwork.github.io/hydrus/launch_arguments.html#--temp_dir_temp_dir
I set the new temp location in your environment path very early in boot, so I am pretty sure that everything, including SQLite, will use the new location. You can see your current temp dir in help->about btw, on Windows it'll probably be C:\Users\YOU\AppData\Local\Temp. Anything that hydrus used is prefixed 'hydrus'.

>>16505
Thanks, that's interesting. I regret the tangle most of this UI became, with the crazy nested menus. I have many plans to improve it, but there's plenty of behind the scenes work to do first.
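The temp dir lifecycle in those bulletpoints maps closely onto Python's standard tempfile module; roughly like this generic sketch (not hydrus's actual code -- the only detail taken from the post above is the 'hydrus' prefix):

```python
import os
import shutil
import tempfile

# on boot: create a prefixed working dir inside the system temp dir
# (hydrus-used entries are prefixed 'hydrus'; the exact naming is its business)
work_dir = tempfile.mkdtemp(prefix='hydrus')

# point child libraries/processes at it early via the environment
os.environ['TMPDIR'] = work_dir  # TEMP/TMP on Windows

# imports and small DnD exports spool through sub-dirs in here
dnd_subdir = tempfile.mkdtemp(prefix='dnd_', dir=work_dir)

# on clean exit: the whole tree is deleted
shutil.rmtree(work_dir)
# on a crash, it stays behind and the OS (or you) cleans it up later
```

The practical takeaway is the same as the post: with a clean exit nothing lingers, but after a crash the leftover directory sits in your temp location until something deletes it.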
Fingers crossed, duplicate auto-resolution will be a bit of, and the larger beginning of, a relief.

>>16506
>opening files in a new duplicate filter page
Ah, thanks, I had forgotten you could do that! I guess that new dupe filter page inherits the previous file context. This sounds like a great place to have a hyper specific checkbox in the options--I'll see what I can do.

>>16512 >>16515 >>16516 >>16517
Thanks. I think we can say that the current system is very good at detecting duplicates and sometimes iffy on detecting alternates. Most of my time has been on duplicate handling so far, so I'm mostly happy with this. As for the duplicate ponies, and future alternate handling and costume/WIP detection, I suspect the solution here is going to be to take multiple perceptual hash snapshots of files, in different subsections. For instance, for each of those three pairs, the differences are almost entirely in the top or bottom half of the image, so a phash system that stored separate data just for the bottom and top halves of the files would detect these at distance 0 or 2. I understand there are some algorithms that handle this question even more intelligently by drawing bounding boxes around the most interesting content in an image, and these help detect crops, too. My system is totally fine with having multiple phashes per file, so this won't be a super difficult thing to engineer, but we'll need to do lots of testing to make sure we don't overwhelm ourselves with false positives through bad phash section choice. There's more work to do, but I think we'll get there.
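That multi-phash idea is easy to picture with a toy difference hash: hash the top and bottom halves separately, and a pair that only differs in the bottom half still matches at distance 0 on the top-half hash. A very simplified sketch -- real perceptual hashes resize and typically use DCTs, this just dHashes a tiny grayscale grid:

```python
def dhash(grid):
    # difference hash over a 2D grayscale grid: one bit per
    # horizontally-adjacent pixel pair (left brighter than right)
    bits = 0
    for row in grid:
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (1 if a > b else 0)
    return bits

def half_hashes(grid):
    # hash the top and bottom halves of the image separately
    mid = len(grid) // 2
    return dhash(grid[:mid]), dhash(grid[mid:])

# two toy 4x4 'images' that differ only in the bottom half
img_a = [[10, 20, 30, 40],
         [40, 30, 20, 10],
         [ 5,  5,  5,  5],
         [ 9,  9,  9,  9]]
img_b = [[10, 20, 30, 40],
         [40, 30, 20, 10],
         [90,  5, 80,  5],
         [ 9, 99,  9, 99]]

top_a, bot_a = half_hashes(img_a)
top_b, bot_b = half_hashes(img_b)
```

Here the top-half hashes are identical even though a whole-image hash would drift, which is the kind of subsection match described above -- and also why section choice matters, since every extra hash is another chance for a random collision.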
>>16520 >>16521 >>16522 Yeah, I broadly agree. I made the decision not to do any booru-style 'pagination' when I first started hydrus, but as we've pushed the boundaries and come to regularly hit 20-30k file pages for certain queries, we hit lag city. I've optimised over and over, but I finally surrendered on the 'selection tags' problem last week by applying the 4096 default limit. The thumbnail grid is unfortunately one of the worst-coded areas of hydrus (and that is saying something!), and is in need of a complete overhaul. It is all brittle ugly hardcode right now, but me and a guy think it may be possible to convert it to entirely Qt widgets, even up to millions of files on a page. This won't solve this 'streaming'/pagination issue, but it will move us to flexible Qt code which will allow the implementation of more dynamic loading far more easily than my ten year old bullshit. Watch this space, but I can't promise anything any time soon. This is one of those back-burner cleanup jobs where I hope to suddenly one day get to it and move us to something nicer without actually changing any of the front-facing features at all, and then I'll be in a position to think about new presentation options, including stuff like a list view instead of thumbnails. As for this >>16533 , yep, unfortunately pixiv is just a bit crazy in ten different ways. I built the downloader, broadly speaking, for the shape of a normal booru, and if you want fine control over a pixiv import stream I think you have to ride the 'pause/play files' button. Either drive the import manually, and set 'skip' on stuff you don't want, or operate on a 'whitelist' where you drag and drop good pixiv URLs onto the client manually, or just surrender to getting spammed with sixty messy variations of a random CG. I fucking hate 'multiple files per post URL' as a hosting concept, but there we go. 
I can't provide 'scroll down to keep downloading', but surfing the 'pause/play files' button works well if you want to get a broader preview of what an artist has to offer. Also, the bigger artists are all on the normal boorus, where the spammy content is culled, so that's another option--just look their most popular files up on saucenao and find them on a saner site. >>16529 >>16530 Hell yeah, well done--for future users who run from source, what could I add to the 'how to get mpv mate' section of the 'running from source' help? (https://hydrusnetwork.github.io/hydrus/running_from_source.html#built_programs here, Linux tab) I know very little about Linux, but if I said 'You can try checking your package manager for mpv too--if it says it bundles libmpv.so, you can just install it and that will probably work too.', would that sort of thing be correct and sufficient? I know that some builds of the mpv video player do not come with libmpv, but some do. I guess it is a static vs dynamic dll build thing. >>16534 Damn. Thank you very much for testing. I feel like a shit now, since I think I'm going to move forward with the upgrade. We want the new tech for the wider userbase because the new mpv loads smoother and fixes some gif problems, and if a handful of users on more unusual OSes get some black bars, that's at least better than outright import fail errors. I'm going to think a bit more, but I suspect I will fold the new mpv into the normal release and have special instructions for users with problems on either replacing their mpv manually (would be annoying and have to do every week), or moving to running from source (only have to do it once, very flexible to local circumstances). I think I will do it hesitantly--I'll roll it into v599, and if many many other users have black bars or other trouble in the wider test, then I'll roll back in v600. I have to guess it is Windows Server causing the trouble though. 
I know one guy on Win 10, which was my main worry, who had no problems.
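On the libmpv question: one portable check for whether a distro package actually shipped the shared library (rather than just the player binary) is to ask the loader for it, which I believe is roughly what the python mpv bindings do on import:

```python
from ctypes.util import find_library

# returns something like 'libmpv.so.2' if the dynamic loader can see the library,
# or None if only the standalone player (without the shared library) is installed
libmpv = find_library('mpv')
print('libmpv available:' if libmpv else 'libmpv not found:', libmpv)
```

On most Linux systems, `ldconfig -p | grep libmpv` answers the same question from the shell.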
>>16535 >>16540 Aiiiieeeeee! Thank you for this report. I had to fix a bugged multi-column list id last week, and I bet it screwed with your column widths in the new placeholder 'auto-resolution' tab. Please hit that tab, right-click the tab header and say 'reset column widths for "review duplicates auto-resolution rules"' (it should say auto-resolution, not anything to do with export folders. If it says export folders, I think you are on v597, please update to v598). You may need to restart the client to get it to lay out again and fix your fucked panel width. If you cannot see the 'auto-resolution' tab, please turn on help->advanced mode and restart the client. >>16540 For PTR stuff: Yeah, the PTR tag filter just stops your client from uploading that in future. No worries, and you don't have to do anything, just keep on parsing like normal, it is all handled for you. There's a similar tool the jannies use that deletes it from existing content en masse, so I expect it will go from those 778k files in the near future. Booru-specific stuff like 'do not post' and 'tag request' and 'very high res' makes sense on the site but not for our purposes on the PTR, so it is simpler to just block it all than try to alter our parsers to not grab it. I didn't talk to our guys about this recent 'contentious content' block specifically, but I think I'm right in saying that tag specifically refers to booru content policy, to differentiate spicy content that you need to be logged in or whatever to see? It is a tool for booru-side filtering in one way or another, rather per se than an actual descriptor to be used neutrally. If you are interested in seeing the whole current tag filter, it is under services->review services->remote->tag repos->PTR->network sync->tag filter. >>16538 >>16539 >>16548 Thank you! 12fps is my fallback speed for when I cannot figure out a gif duration/framerate, so I bet this guy is parsing wrong. I will look into it. 
Broken files are always welcome; even if I cannot fix them, they are useful to have around. If a site won't let you post, or the CDN modifies the file and fixes it mid-transit, then you can always zip them up and catbox them to me. And yeah, my native viewer obeys my frame timings, but mpv will ignore what I suggest and do whatever it thinks is best, which is almost universally better than what I figure out.
That e621 file looks a little crazy. It has a duration of ~17 seconds in hydrus and Firefox and MPC-HC for me, and it looks like mpv is rendering at the same speed. Hydrus reckons it has 1,691 frames for 100fps, but I don't know if it is lying about the actual number of frames since when I try to tell hydrus's mpv to go forward one frame, it jumps like 6-8. What do you see? When you get a nicely rounded but weird number like '100fps', it is usually a sign that someone set up a conversion wrong. If we are entering the magical world of mkv muxing, then yeah, I expect some human or automatic ripper screwed up a conversion here and the header on the webm is straight-up wrong.
>>16549
Man, I really need to figure out a proper search undo. I hate removing a search predicate and then realising I removed the wrong thing. I totally agree about getting a history on search pages, and then attaching it to the session so it persists.
>>16552
>What do you see?
I'm getting the same 16.9 seconds, 1691 frames, 100 fps in the bottom bar when it is selected in the gallery. The native viewer plays it all in about 2 seconds and holds the last frame until it 'repeats' at 16.9 seconds. Opening it in mpv (externally) I see an estimated 12.0482 fps. ffmpeg says it's a "matroska,webm" that's 16.91 seconds long, but I can't get it to tell me how many frames there are. If it at all helps, the original file is an ugoira. Here's a direct link: https://files.catbox.moe/8ro11c.zip
Hydrus shows that ugoira as having 203 frames and estimates it to be 25.4 seconds long, and plays it at a more normal speed. I'm on Linux, version 595 if it matters.
Random minor UI nitpick: when you go to import a downloader, the menu only says to drag and drop them on Lain without indicating that you can open a file picker instead by clicking on her or by clicking the clipboard to paste a copied image (though the clipboard at least has a mouseover indicating this). Also, maybe the clipboard could also detect if a path to a downloader is copied instead of the bitmap itself?
>>16553 Yeah it looks like a busted file header. FFMPEG reports 100fps, but I forced my 'distrust fps, read frames manually' thing, and it comes back as 203 frames for 12fps. I've written a thing in my file parser to say 'distrust 100fps mate', so we should catch these in future. I'll schedule a round of metadata for all small video with 100fps on update. The bra gif here >>16538 seems simply not fully readable by PIL, which I use for gif durations. The first frame seems to be 70ms, but those after are 1ms, which is often shorthand for a null result. I expect the frame headers are busted or in an unusual int format or something. Typically in these cases a future version of Pillow will suddenly start reading them correctly. >>16554 Thanks, I will clean this up!
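Not hydrus's actual code, but the 'treat 1ms as null' idea above sketched out: per-frame durations of 0 or 1 ms from a broken header get swapped for the 12fps fallback duration.

```python
FALLBACK_FRAME_MS = 1000 / 12  # the 12fps fallback, ~83ms per frame

def sanitise_frame_durations(durations_ms):
    """Replace implausible per-frame durations; <=1ms usually means a null/unreadable header."""
    return [d if d > 1 else FALLBACK_FRAME_MS for d in durations_ms]

def total_duration_s(durations_ms):
    """Total clip length in seconds after sanitising."""
    return sum(sanitise_frame_durations(durations_ms)) / 1000
```

For the bra gif's reported timings (70ms then all 1ms), this keeps the readable first frame and substitutes a sane duration for the rest.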
(136.95 KB 900x1200 84658765187.jpg)

Same anon of >>16529 and >>16530 posts, running v598 from source on Manjaro. Hydrus was randomly hanging the whole computer when clicking the small viewer in the lower left corner to pause a video, or double-clicking a thumbnail to maximize a video, forcing me to hard reset the machine. After some troubleshooting I discovered that the default viewer was set to Qt Media Player instead of the old and dear MPV. So, I reverted it to MPV and the crashes are gone.
>>16551
>what could I add to the 'how to get mpv mate' section of the 'running from source' help?
Perhaps, just to be on the safe side, the help page could explain that if the distro's package manager does not list any libmpv library, then it might be worth trying to install the ubiquitous MPV player, which may include it.
>>16503
>Reverse sorting problem.
Playing around with it, it needs to not do any resorting at all from what the HTML provides. The HTML page is in the correct order too. Pools can have non-sequential post IDs, which usually happens when the posts are re-uploaded at better quality or the original uploader didn't do them in order. Pools allow you to adjust the ordering by manually listing the post IDs in the desired order, so string sorting the HTML will break the actual sequence. Everything seems to be perfectly fine with the parser, and every file is in the correct order under the "file log". It appears that the problem is near the end of the chain, when dropping files into the thumbnails page.
>>16552 >reset column widths cheers, this fixed it
What was the trick to downloading 8moe threads again? I sent my session cookie to Hydrus, but attempting to download a thread still hits the splash page.
>>16550
>I understand there are some algorithms that handle this question even more intelligently by drawing bounding boxes around the most interesting content in an image, and these help detect crops, too. My system is totally fine with having multiple phashes per file, so this won't be a super difficult thing to engineer, but we'll need to do lots of testing to make sure we don't overwhelm ourselves with false positives through bad phash section choice.
I used the tool AllDup a little bit and it's very powerful. It can find similar images with different comparison methods (aHash, pHash etc), find files by a ton of different properties, and so on. Might be a bit overwhelming at first, but I can't recommend it enough. Check out the documentation, it may help you with your plans for hydrus.
https://www.alldup.de/alldup_hilfe/alldup.php Scroll down for some description in English of what it can do.
https://www.alldup.de/alldup_hilfe/index.php (Documentation)
https://www.alldup.de/alldup_hilfe/search_similar_pictures.php Scroll down for nice tables (recognition rate test) and so on.
Maybe it's of some help and gives you some ideas? 'AllDup is Freeware and can be used free of charge by private users and companies.' Since it finds all kinds of files, people can use it for finding duplicates before importing into hydrus too.
>>16552
>That e621 file looks a little crazy. It has a duration of ~17 seconds in hydrus and Firefox and MPC-HC for me, and it looks like mpv is rendering at the same speed. Hydrus reckons it has 1,691 frames for 100fps, but I don't know if it is lying about the actual number of frames since when I try to tell hydrus's mpv to go forward one frame, it jumps like 6-8. What do you see?
It's the same for me. Checking different webms with MediaInfo, I can say that this one is special. It gives:
Format : WebM
Format version : Version 2
File size : 5.88 MiB
Duration : 16 s 910 ms
Overall bit rate : 2 917 kb/s
Writing application : whammy
Writing library : whammy
Video
ID : 1
Format : VP8
Codec ID : V_VP8
Bit rate : 2 795 kb/s
Width : 500 pixels
Height : 500 pixels
Display aspect ratio : 1.000
Frame rate mode : Variable
Compression mode : Lossy
Default : Yes
Forced : No
Whammy (google: Whammy: A Real Time Javascript WebM Encoder) is unusual. I checked a bunch of files, and all of them are written with Lav. The frame rate mode here is 'variable', which could also play a role in this, but of my webm files, very few were 'variable', and those played correctly in the native player and also mpv. So I guess whammy is the reason, plus potentially the variable frame rate mode in combination.
>>16570 Maybe this helps: >>16408
Hi, is there a way to compare subscription lists to see what subscriptions are missing artists that others have?
>>16574 what I do is copy the queries of both subs I wanna check, paste them into 2 separate files, then I just go to the command line and diff sub-1.txt sub-2.txt
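If the two subs don't list their queries in the same order, a plain diff gets noisy. A set-based sketch of the same comparison (file names are hypothetical):

```python
def missing_queries(path_a, path_b):
    """Return (only-in-a, only-in-b) for two text files with one query per line."""
    def load(path):
        with open(path, encoding='utf-8') as f:
            return {line.strip() for line in f if line.strip()}
    a, b = load(path_a), load(path_b)
    return sorted(a - b), sorted(b - a)
```

This ignores ordering and blank lines, so it works on query lists pasted straight out of the subscription dialogs.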
>>16573
I have the correct user agent now, but now Hydrus Companion seems to be failing to get any cookies from 8moe.
I had an ok week. I fixed some UI issues, improved how file timestamps display, and optimised database vacuum maintenance. The release should be as normal tomorrow.
The e621 parser component appears to have broken; it no longer returns results.
>>16579 I fixed it (for myself) by going to downloader components > parsers > e621 gallery parser > content parsers tab > file page urls content parser (that produces downloadable/pursuable urls) > edit formula > the first search descendants, its key=value changed from class=post-preview to class=thumbnail
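For anyone curious what that edit corresponds to: the gallery parser is essentially walking the page for elements with that class and pulling post links out of them. A rough stdlib-only sketch of the idea (not hydrus's actual parsing engine):

```python
from html.parser import HTMLParser

class ThumbLinkParser(HTMLParser):
    """Collect <a href> links inside elements whose class includes 'thumbnail'."""
    def __init__(self):
        super().__init__()
        self._container = None  # tag name of the thumbnail element we are inside
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self._container is None and 'thumbnail' in attrs.get('class', '').split():
            self._container = tag
            if tag == 'a' and 'href' in attrs:  # the thumbnail element may itself be a link
                self.links.append(attrs['href'])
        elif self._container and tag == 'a' and 'href' in attrs:
            self.links.append(attrs['href'])

    def handle_endtag(self, tag):
        if tag == self._container:
            self._container = None
```

When the site renamed the class from post-preview to thumbnail, a fixed "class=post-preview" search simply stopped matching anything, which is why the downloader returned zero results rather than erroring.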
https://www.youtube.com/watch?v=UtG11pmBe3U
windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v599/Hydrus.Network.599.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v599/Hydrus.Network.599.-.Windows.-.Installer.exe
macOS
app: I removed the macOS release, it will not boot in some/all situations--it will be fixed for v600!
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v599/Hydrus.Network.599.-.Linux.-.Executable.tar.zst

I had an ok week. I fixed some bugs, improved some quality of life, and overhauled vacuum maintenance.
Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

If you got a superwide duplicates filter page sidebar last week, it should fix itself today. It was a UI bug from a couple of weeks ago that wasn't fully cleaned up.
The e621 downloader stopped finding files in the past week, but I have fixed it. You don't have to do anything. I understand they may still be making changes on their end, so let me know if anything new breaks.
When you see file timestamps in the media viewer's top hover window or the file right-click menu, their tooltips are now the inverse of your 'always show ISO timestamps' setting. So, if it says 'modified: 2 years ago', the tooltip will say 'modified: 2022-11-20 14:23:39', and vice versa!
Advanced users only: The database 'vacuum' maintenance task, which is essentially a database defrag, now runs significantly faster (in my tests, maybe 10 times faster, anything from 30-170MB/s, but I suspect super big databases will run ~10MB/s) and no longer needs to use your temp dir. I only recommend running vacuum every, say, five years for a few percent performance improvement, but if you have been waiting to clean up or truncate a huge and tangled client.mappings.db, it should be easier to find the space now. I have also added a summary popup that reports how much space the vacuum saved and how fast it worked.
I'd be interested in knowing what speeds you see.

next week

I have nothing exciting prepared for v600. Just some more general work while I continue to chip away at duplicates auto-resolution in the background.
>>16579
Thanks, should be fixed in this release.
Edited last time by hydrus_dev on 11/25/2024 (Mon) 22:27:03.
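For anyone curious where those savings come from: a vacuum rebuilds the database file so that pages freed by deletes are actually returned to the filesystem. A plain-SQLite sketch (ordinary VACUUM, not hydrus's new maintenance code) that reports the same saved-bytes figure the popup does:

```python
import os
import sqlite3

def vacuum_and_report(db_path):
    """Run VACUUM on an on-disk SQLite file and return how many bytes it shrank by."""
    before = os.path.getsize(db_path)
    con = sqlite3.connect(db_path)
    con.execute('VACUUM')  # rewrites the whole file, dropping free pages
    con.close()
    return before - os.path.getsize(db_path)
```

Deleting rows alone does not shrink the file--the freed pages just go on SQLite's freelist--which is why a database with heavy delete churn can hand back hundreds of MB on a vacuum.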
>>16582 >I have nothing exciting prepared for v600. The question is, what have you prepared for v666? Will we be able to play doom in hydrus?
(498.63 KB 1178x686 reeeee.png)

>>16583 >v666
>>16559
Same anon with an update. It looks like the computer hanging has nothing to do with Hydrus, as I already experienced 3 random crashes when launching a video with VLC. So the culprits look like the Qt6 used by Manjaro, the X11 display server (I really doubt it), and the most likely guilty party: the open source video driver, which is failing to properly manage video memory paging. I mean, it won't release around 500MB of memory; in other words, there is something wrong with the destructors.
Are hydrus downloaders able to make POST requests? a website I use has an api but it uses POST for retrieving images and metadata instead of GET. the website requires javascript (like many websites nowadays) so this would be the only way for me to download from hydrus
>>16582 > I only recommend running vacuum every, say, five years for a few percent performance improvement If you delete lots of files like I do it's worth running more often if just for the file space. I just ran it after about a month and it saved over 800 MB.
Is there any reason why the bottom bar isn't showing import and archive times any more? was this changed in an update? if so, is there a way to put it back?
Is anyone else getting an error when trying to download from e621 ? In v599, the files will fail when downloading from e621 and give a "bad request" error on my hydrus client
>>16593 it doesn't show deletion times and reasons in the bottom bar either. I use that a lot too
I think the downloader for nijie.info may need looking into. I'm getting a lot of "server reported limited bandwidth" errors and connection failures after just a dozen or so images. Could be their gifs, as that's where it keeps giving me those errors. Haven't been to that site in a long time, but I just noticed they also use .mp4. If only Pixiv would follow suit and use gifs and mp4s.
>>16597 >If only Pixiv would follow suit and use gifs an mp4s. Better yet, have them re-encode the file each time it's downloaded to make the generation loss even worse!
>>16575 You're seriously knowledgable! Thanks!!
>>16560 I downloaded your png downloader above to try and test this myself, but I'm afraid I ran into two problems: - it is an html parser, and just last week e621 changed their gallery html so it broke - the png's url class points to a json api class, which points to another url class that is not in the package. No worries, but any chance you were working on a json solution amidst all this, and things got mixed up when bundling the downloader? My current e621 gallery parser seems to parse pool HTML URLs ok, so a solution here might just be to add a URL class for https://e621.net/pools/44587 style pool URLs and then add an example URL to the e621 gallery parser. Back to the actual reverse issue: If the file log lists files in the correct order, and obviously processes them in that order, they should arrive in the page one by one in that order. I assume you aren't seeing the thumbnails inserting into position 0 or something crazy? So, I think I misunderstand your issue. If the file log is all good, can you explain your problem another way? >>16571 Thanks--that table of data is really interesting. It looks like we see similar, in that phash works very well except for rotations/flips and excessive crops. I had been imagining producing more phashes for the rotations and flips and interesting crop regions of the image was the solution, and I think ultimately it still would be the ideal, but it seems like average hash would fill in the crop gaps quite well. I will think about this and maybe drum up a test. >>16572 >Javascript WebM Encoder Now this is podracing Thanks, very interesting. I bet you are right that the variable frame rate reports a 100 because this real-time encoder doesn't know what it'll do at the start of the stream or something. Should be fixed in the client now, I just manually count frames. Let me know if you discover any new magic framerates that we should probably distrust. 
>>16583 Yeah, the kernel-level AI I have scanning your preferences should have learned everything it needs to know by then, so we'll be ready to launch Emerald-16.
>>16585
Not sure if this helps your investigation, but FYI: hydrus does not destroy its mpv windows atm. When you click off a video, or move to another mpv, rather than destroying the old window or re-using the current, it maintains a pool (usually of two or three) unloaded and hidden mpv windows that it summons back into place as needed. This was originally done because mpv caused insta-crashes on destruction when I first developed with it. It might be better now, but I haven't tested it. Anyway, this mpv recycling has caused some trouble on some Window Managers, either the unhiding or the re-parenting to a different top level window. If you are having problems with VLC, perhaps it is unrelated, but if you are seeing some odd memory hanging around after hydrus has done some video stuff, it might be my mpv windows still lurking in the background.
>>16559
In regards to this, also: I looked at the Qt Media Player thing and I have no idea how it got set as default. Hydrus does check whether mpv imported ok on the first boot, and if it doesn't, it is supposed to set the native viewer. Do you happen to remember how you got set to Qt Media Player? Is there any chance you set it by accident, or maybe you opened the 'how to view media' dialog and maaaaybe like the choice dropdown could have changed to the Qt Media Player because it wanted to set the mpv player but that was missing from the list, something like that? Or are you confident that it happened completely spontaneously?
>>16591
No, sorry. The only option for now is to rig together an external script and pass the results on to hydrus via the API. https://hydrusnetwork.github.io/hydrus/client_api.html
>>16592
It has been interesting seeing various users' vacuum experiences this past week. One guy runs it every week! In any case, I am super glad it is running fast now. 10-60MB/s seems typical, even on big files.
BTW: I fucked up the vacuum UI this week!
I updated the vacuum job all fine and everything, but I forgot to switch the dialog over to use the new 'do we have enough space to do a vacuum on this file?' check. It is still stuck on the old test, which checks your temp dir! So, you might get the dialog moaning about that this week and stopping you from doing the vacuum, even though it wouldn't need it. I fixed it on master already for source users, and it will be fixed for everyone in v600.
>>16593 >>16596
Sorry for the trouble! I will add an option to add it back in for next week!
>>16594
Works ok here. Bad Request is a 400 error and happens when the client fucks up in how it formed its request. I can't remember for certain, but I think I have seen it on some CloudFlare captchashit responses too, although they tend to give 503 I think. Is there any chance you have an esoteric VPN setup or anything that might be altering your request? Does the hydrus 'file log' say anything more about the Bad Request response in its 'notes' column? What happens if you turn on help->debug->report modes->network report mode (warning spammy) and try a download? Where does it seem to fail, on the post URL or the direct image URL?
>>16597
Thanks, I will check it out.
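On the 'external script that passes results to the API' suggestion above: the usual shape is to fetch the URL yourself (doing whatever POST dance the site requires) and then hand it to the client. A sketch against the documented /add_urls/add_url Client API endpoint; the access key is a placeholder you would copy from services->review services:

```python
import json
import urllib.request

API_BASE = 'http://127.0.0.1:45869'   # the default Client API port
ACCESS_KEY = 'YOUR_ACCESS_KEY_HERE'   # placeholder; generate one in the client

def build_add_url_request(url):
    """Build (but do not send) the POST that asks hydrus to import a URL."""
    return urllib.request.Request(
        API_BASE + '/add_urls/add_url',
        data=json.dumps({'url': url}).encode('utf-8'),
        headers={
            'Hydrus-Client-API-Access-Key': ACCESS_KEY,
            'Content-Type': 'application/json',
        },
        method='POST',
    )

def add_url(url):
    with urllib.request.urlopen(build_add_url_request(url)) as resp:
        return json.loads(resp.read())
```

Your script does the site's POST-only API calls itself, extracts direct file URLs from the responses, and feeds each one to add_url(); hydrus then downloads and imports them as if they had come from a downloader.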
So uh has any progress been made on that pie in the sky P2P image sharing PTR-on-steroids thing? Putting the "Network" in Hydrus Network? I'm thinking it might become more important what with more and more people (burgers, at least) being ID cucked out of porn sites to "protect the children".
system:dimensions has height before width. Should probably be the opposite. If I select system:height and system:width in the search, editing them shows height above width, too. It is also inconvenient if I have one dimension in the search, and want to search by the other one or both. A string-based search like "640x480" for both, "640x" for width and "x480" for height would make it easy.
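For what it's worth, the suggested shorthand is easy to parse; a sketch of splitting '640x480' / '640x' / 'x480' into optional width and height values:

```python
def parse_dimensions(query):
    """'640x480' -> (640, 480); '640x' -> (640, None); 'x480' -> (None, 480)."""
    w, sep, h = query.lower().partition('x')
    if not sep:
        raise ValueError('expected WIDTHxHEIGHT, WIDTHx, or xHEIGHT')
    return (int(w) if w else None, int(h) if h else None)
```

A None on either side would simply mean 'no predicate for that dimension', so one text box could cover width-only, height-only, and both-at-once searches.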
>>16602
>Works ok here. Bad Request is a 400 error and happens when the client fucks up in how it formed its request.
I fixed the issue; it is not a problem with hydrus. I had changed the headers for e621 when the downloader bug was happening, and forgot to set them back to default. Thank you for your help.
Can Hydrus make an image collage?
(98.53 KB 1280x708 GhostintheShell2.jpg)

>>16602
>Not sure if this helps your investigation, but FYI: hydrus does not destroy its mpv windows atm
I was digging even deeper till I reached the swap stuff, and suddenly I realized this matter is well above my pay-grade and years of raw C coding are necessary to get the necessary skills. Even hopping from forum to forum in search of a solution requires time, which I don't have. So, I'm dropping this matter.
>If you are having problems with VLC, perhaps it is unrelated,
I agree. I switched to SMPlayer as default and the crashes are gone.
>Do you happen to remember how you got set to Qt Media Player? Is there any chance you set it by accident
The last time I tinkered with the video viewer settings was more than a year ago, when the Qt Media Player did not exist in Hydrus yet. Then, if Hydrus cannot change the viewer by itself, and I have no recollection of changing it myself... then there is a Ghost in the Machine. KEK
Or perhaps, and I'm speculating here, because Hydrus could not find MPV (remember, it was a fresh Linux install and libmpv was missing), it set the Qt Player as a fallback option instead of the native hydrus viewer.
(381.33 KB 1634x1689 anonfilly - Qt hat.png)

I had a good week. I fixed a bunch of bugs, including a boot issue with the v599 macOS release, polished the new vacuum tech, and improved some quality of life. The release should be as normal tomorrow.
rule34.us subscriptions sometimes find urls like https://video-cdn1.rule34.us/images/58/d8/58d817e0581aa1a3927c0eed0a2c715d.mp4 that do not work. Removing the "-cdn1", however, fixes the link, i.e. https://video.rule34.us/images/58/d8/58d817e0581aa1a3927c0eed0a2c715d.mp4 I was just going through my ignored links in the subscription file logs and noticed this.
>>16618
Right, it looks like hydrus checks for both link types.
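If anyone wants to retry a batch of those ignored URLs outside the program, the rewrite is just dropping a trailing '-cdnN' from the first host label. A sketch:

```python
import re
from urllib.parse import urlsplit, urlunsplit

def strip_cdn_label(url):
    """'https://video-cdn1.rule34.us/...' -> 'https://video.rule34.us/...'."""
    parts = urlsplit(url)
    first, _, rest = parts.netloc.partition('.')
    if not rest:
        return url  # single-label host, nothing sensible to rewrite
    first = re.sub(r'-cdn\d+$', '', first)  # video-cdn1 -> video
    return urlunsplit(parts._replace(netloc=first + '.' + rest))
```

URLs without the cdn label pass through unchanged, so it is safe to run over a mixed list.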
https://www.youtube.com/watch?v=9cEjcT-NBQc
windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v600a/Hydrus.Network.600a.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v600a/Hydrus.Network.600a.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v600a/Hydrus.Network.600a.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v600a/Hydrus.Network.600a.-.Linux.-.Executable.tar.zst

🎉 Merry 600! 🎉

I had a good week. There's a mix of all sorts of different stuff.
I made a hotfix for a typo bug when right-clicking a downloader page list. If you got the v600 early after release on Wednesday, the links above are now to a v600a that fixes this!
Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

macOS fix

A stupid typo snuck into the release last week, and it caused the macOS App to sometimes/always not boot. Source users on Python 3.10 ran into the same issue. It is fixed now, and I have adjusted my test regime to check for 3.10 issues in future, so this should not happen again.

highlights

If you noticed the status bar was not showing so much info on single files last week and you would like it back, please hit up the new 'show more text about one file on the status bar' checkbox under options->thumbnails.
I polished last week's new vacuum tech. There are additional safety checks, some new automatic recovery, and I forgot, last week, to move the vacuum dialog to use the new 'do we have enough free space to vacuum?' check, so that is fixed. It should stop bothering you about free space in your temp dir!
Collections now sort better by file dimensions. They now use their highest-num-pixel file as a proxy, so if you sort by width, the collection will sort by that file's width. It isn't perfect, but at least they do something now, and for collections with only one file, everything is now as expected.
We have a few ways to go forward here, whether that is taking an average of the contents' heights, pre-sorting the collection and then selecting the top file, or, for something like num_pixels, where it might be appropriate, summing the contents (like we do for filesize already). I expect I'll add some options so people can choose what they want. Let me know what you think!

new SQLite and mpv on Windows

I am rolling out new dlls for SQLite (database) and mpv (video/audio playback) on Windows. We tested these a few weeks ago, and while both went well for most people, the mpv dll sometimes caused a grid of black bars over some webms on weirder OS versions like Windows Server, under-updated Windows 10, and Windows 10 on a VM. Normal Windows 10/11 users experienced no problems, so the current hypothesis is that this is a question of whether you have x media update pack. I am still launching the new mpv dll today, since it does have good improvements, but I am prepared to roll it back if many normal Windows users run into trouble. Let me know how you get on!
If you need to use an older or otherwise unusual version of Windows, then I am sorry to say you are about to step into the territory Windows 7 recently left. Please consider moving to running from source, where you can pick whichever mpv dll works for you and keep it there: https://hydrusnetwork.github.io/hydrus/running_from_source.html
If you run from source on Windows already, you might like to hit that page again and check the links for the new dlls yourself, too. I've noticed the new mpv dll loads and unloads much faster.

next week

I will continue the small jobs and cleanup. I'm happy with my productivity right now, and I don't want to rush anything big out at the end of the year.
Edited last time by hydrus_dev on 11/28/2024 (Thu) 02:24:16.
>>16604 My overall arc in developing hydrus has been towards 'actually sharing files is easy, but managing files is hard', but I share some of your worries. Hydrus is siloed to computers you own, no cloudshit or need etc.. for the PTR, in part for this reason. If things do get tricky again, let's say the boorus get shut down, or everything is locked up in Real ID after captcha is broken by AIs, then I expect hydrus's IPFS plug-in will get a fresh look. >>16605 Thanks, width/height should be fixed today in v600, but I had to do it a weird way so let me know if anything is messed up. I agree about the need for a 'system:resolution' or just better workflow that allows you to put in height and width in the same single click. I'll have a think. >>16609 No, if you mean showing many images at once in the same window, I don't think so. You have probably noticed I'm not very good at 'pretty' anything, so I can't promise much in the future either. For some things like this, I usually use a third-party program, like ACDSee or ImageGlass etc.., and I just export the files from hydrus before I set up my clever glossy photo print job or whatever. If you have a program that can, say, read random images from a folder and present them in a slideshow or screensaver or whatever, a hydrus 'export folder' (off the 'file' menu), may work out well for you. Otherwise, what sort of thing do you have in mind? >>16615 Thanks, interesting. The actual code here is pretty simple, so I guess something weird happened. Let me know how you continue to get on here--I'd also love, if I can find the time, to figure out a new OpenGL based render mode for mpv in the next year or two, which will open up all sorts of support for difficult OS environments. >>16618 Thanks, I will check this out!
>>16621 Not him but I think he's talking about something like this? https://gelbooru.com/index.php?page=post&s=view&id=5093788&tags=touhou+collage+zun Like a bunch of images making another image. There's a specific program to make these but I can't remember the name of it.
Just jumped from 596 to 600. I had to go to the new exporting page and uncheck the "move flag" option for drag and drop file posting to work with 8chan.
Did vacuum for the first time in 5 years. Freed 12GB of space. Thanks!
I just started using Hydrus and have a question about tagging. Is it possible for Hydrus to parse the filenames of files I'm importing? I can see that there's a regex parser in the "filename tagging (advanced)" window when trying to import, but what exactly is it parsing? I want it to parse the filename, which is in the format "PATH/TO/FILES/EXTRACTOR/UPLOADER/[YYYY-MM-DD] FILENAME [VIDEO_ID].EXT". I can manage the rest of the path/filename with simple tags like "source:EXTRACTOR,creator:UPLOADER", but I can't seem to find a way to regex-parse the [VIDEO_ID]. Is this not possible, or am I just on the wrong window? Would I have to parse my filenames for the VIDEO_ID outside Hydrus and import that as a tag in a sidecar .txt file? This is all on GNU/Linux, so if Hydrus can't do it, I can just write a simple bash script to generate all these tags as a .txt file, import the tags to Hydrus as sidecar files, and later just tag characters and such. Just wanted to know.
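If it helps to prototype outside hydrus first, here is a hedged sketch of pulling the pieces out of that naming scheme in plain Python. The pattern, the field names, and the example path are all my assumptions about this convention, not hydrus's own parser:

```python
import re
from pathlib import Path

# Assumed stem shape: "[YYYY-MM-DD] TITLE [VIDEO_ID]"; the id is whatever
# sits in the last bracket group before the extension.
PATTERN = re.compile(r'^\[(\d{4}-\d{2}-\d{2})\]\s*(.*)\s*\[([^\[\]]+)\]$')

def parse(path_str):
    """Split 'EXTRACTOR/UPLOADER/[date] title [id].ext' into tag-like fields."""
    p = Path(path_str)
    extractor, uploader = p.parts[-3], p.parts[-2]
    m = PATTERN.match(p.stem)
    if m is None:
        return None
    date, title, video_id = m.groups()
    return {
        'source': extractor,
        'creator': uploader,
        'date': date,
        'title': title.strip(),
        'video_id': video_id,
    }
```

A script like this could then emit sidecar .txt files for hydrus to ingest alongside the imports.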
>>16626 I'm not entirely sure what you're asking for, but have you tried using any of the checkboxes on the righthand side of your image and seeing if they do what you want? I always use the add filename option myself.
>>16627 Yes, I've tried using the "add filename?" checkbox, but it inputs the whole "[YYYYMMDD] TITLE [VIDEO_ID].EXT" as a tag. I just want the "[VIDEO_ID]" in the tag and was wondering if Hydrus has a built-in way of parsing _individual_ parts of filenames to use as tags.
>>16627 I'll mess around a little more with the "advanced" window's regex parser. I just clicked the ".*" button thinking it would add a literal ".*" into the regex bar, only for a small menu to pop up with regex parser information. I'll mess around with that for a bit and see if I can work something out.
Is it safe to run Hydrus on a QLC SSD? Cells on those can only survive like a hundred overwrites, so anything frequently modifying the same file (like a database) is potentially dangerous. On the other hand, static media is exactly what those drives are made for. Alternatively, can I move just the "client_files" directory but not the entire "db" directory to another local drive?
Maybe the derpibooru parser really shouldn't add the source url as a url, because derivatives often link to the different source, including on the same site.
>>16630 You can have the database and media files in separate drives. I have my database in my SSD and my media files in an HDD. >Is it safe to run Hydrus on a QLC SSD? Probably not the database, no. Do you have another regular SSD that you could put the database in? I would think it's counterintuitive to put the media files in an SSD and leave the database in a slow HDD.
>>16632 Yes I mean different SSDs, not database on HDD. Is there a setting I'm missing or did you use filesystem links/junctions/mounts to do it?
>>16633 There is a setting somewhere on where to put the database and media files. I'm looking for it, but it should be there. I did use symlinks though, so my media folder and exporting folder are "technically" under the db folder, just pointing to a mounted drive through the symlink.
>>16633 >>16634 Found it. database -> move media files
>>16635 Thanks, I was looking in the options, did not consider that there is a special ui for it.
>>16637 In the advanced window there's a regex parser that gets me close to what I want. Sounds like it's fucked if you want to extract specific information from a filename.
>>16637 In the advanced window there's a regex parser that gets me close to what I want. Guess I'll just mess around with that regex for a bit to see what I can do.
I'm so close but why is it lowercasing everything!?
>>16639 I think all tags in Hydrus are normalized to be lowercase, not sure if you can force it to be uppercase or not
>>16640 Yup, I'm retarded. Forgot about that. Since it's a tag, it's going to lowercase it. FUCK! Back to the drawing board.
>>16630 >Is it safe to run Hydrus on a QLC SSD? Cells on those can only survive like a hundred overwrites, so anything frequently modifying the same file (like a database) is potentially dangerous.

QLC cells maybe can't survive that many overwrites, but nowadays they don't necessarily need to. SSDs don't work like HDDs. HDDs overwrite the same sectors physically on disk again and again. SSDs are much, much more complicated in terms of how data is written to the cells. SSDs use features like garbage collection, over-provisioning, TRIM, wear leveling etc. with complicated algorithms to lower the write amplification. Wear leveling is used to write to the cells as evenly as possible, so as not to wear out cells that are rewritten more often than others. You can read about it here: https://en.wikipedia.org/wiki/Write_amplification#Wear_leveling Plus the table is an interesting read: https://en.wikipedia.org/wiki/Write_amplification#Factors_affecting_the_value

A Samsung 870 QVO 1TB with QLC is rated for 360 TBW (terabytes written), which means that as long as you write less than that amount, it should work without problems. I haven't read any studies/tests about QLC specifically, but there is years-old information out there showing that SSDs last much longer than what the manufacturers specified. Might have been TLC/MLC. So I think you should be fine even with QLC. You can check your total bytes written with tools that read out the SMART information, like CrystalDiskInfo. But most importantly, do backups anyway!
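To put those endurance numbers in perspective, a back-of-envelope sketch. The 360 TBW figure is from the post above; the ~10 GB/day write load is hydev's rough heavy-PTR estimate from later in the thread, so both inputs are assumptions, not measurements:

```python
# Rough endurance check: rated TBW divided by an assumed daily write load.
TBW_BYTES = 360e12       # Samsung 870 QVO 1 TB rating, from above
DAILY_WRITES = 10e9      # assumed heavy hydrus + PTR load, ~10 GB/day

years_to_exhaust = TBW_BYTES / DAILY_WRITES / 365
print(f'{years_to_exhaust:.0f} years')  # roughly a century at that rate
```

Even an order of magnitude more writing would still leave close to a decade of rated endurance, which is why the database's write pattern, not the media storage, is the thing to watch.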
(107.84 KB 1424x750 Screenshot_20241128_193820.png)

>>16630 >Alternatively, can I move just the "client_files" directory but not the entire "db" directory to another local drive? To my knowledge, the example in "hydrus_client.sh" only covers changing the DB directory path and, if you want, the temp directory. You might try variations, but mileage may vary. Check the screenshot out.
>>16641 >Back to the drawing board. There's no undrawing the lowercase tagging system. Hydev has repeatedly made that clear. Scenarios where the filenames need uppercase letters for some necessary purpose have been so rare for me that I simply gave up on the idea. Why do you specifically need things in uppercase?
>>16644 Was trying to get the video id in the tags, but instead I'll just put them in the urls the proper way I guess. Didn't know that was an option until now that I'm looking at more advanced options within Hydrus. I'll just parse the filenames first and put it in a sidecar txt or json file and have Hydrus import the sidecar with the txt/json file and put it in the url section for the file.
(232.13 KB 800x800 honk.jpg)

>>16642 >But most importantly, do Backups anyway! Wisest words ever written.
>>16639 That's devanon's choice and he made it pretty clear it will remain that way.
(106.14 KB 698x658 1431287647020.jpg)

Ever since Sankaku further worsened their site, I haven't had Hydrus importing my favorites from Sankaku. This month I set up a new downloader, looking at some of the other posts and stuff for help, and it works--but now I have a big gap in my imports. And what's worse is that Hydrus Companion doesn't seem to want to do the green border highlight around images that are already in my Hydrus, because Sankaku sucks ass. So my question is: IS there a way to get the green border so I can see exactly where the gap is? So I'm not just downloading a bunch of files I already have.

If there isn't, I understand, but then my question becomes: does anybody have an alternative to Sankaku? I've been using Sankaku for a while since it seems to have the biggest collection of images+videos from both Western and Japanese artists AND has a popular page so I can see new popular uploads I might like. I already use Gelbooru for Japanese stuff.

Lastly, I notice that ever since Sankaku fucked their website up, Hydrus no longer downloads all tags; it'll get tags for some images/videos but not others. Is there a reason or a fix for that? I would really appreciate the help.
>>16648 It's not like I wanted it changed. I didn't know that you could load urls using the sidecars. I thought it was just for tags so I figured I'd at least put the video id in a tag if I ever needed to look for the source. Having read more about the sidecars and importing urls, that is a much easier alternative than using the tags. Once I write my script to automatically get the video id from my filename and make a url txt file, I can import my video library using the sidecars for the urls and not have to worry about losing the video source id that used to be in the filename. After that I can just alter my yt-dlp script to output a txt file with the source url and not have to deal with it manually.
Welp, it's done and working. The final files have the url embedded in the url section without being in the tags on import. Put the script code at the bottom of the image into my bash script that handles autotagging so I can run it whenever I need it using a flag. Right now the link generation is hardcoded for youtube, but later I'll change it to generate depending on the extractor that it finds in the filepath.
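For reference, a hypothetical sketch of that kind of sidecar generator. The bracketed-id filename pattern and the hard-coded YouTube URL shape are assumptions taken from the posts above; hydrus can then be pointed at the resulting .txt sidecars as URL sources on import:

```python
import re
from pathlib import Path

# Assumed: the id is whatever sits in the last [...] group of the stem.
ID_RE = re.compile(r'\[([^\[\]]+)\]$')

def write_url_sidecars(folder):
    """For each media file, write a '<name>.<ext>.txt' sidecar with a source URL."""
    for f in sorted(Path(folder).iterdir()):
        if f.suffix == '.txt' or not f.is_file():
            continue
        m = ID_RE.search(f.stem)
        if m:
            # Hard-coded for youtube, as in the post; swap per extractor later.
            url = f'https://www.youtube.com/watch?v={m.group(1)}'
            f.with_name(f.name + '.txt').write_text(url + '\n')
```

The same loop could branch on the EXTRACTOR path component to pick other URL templates.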
(3.59 KB 200x200 27268838.png)

The package at Flathub seems to not have been updated since version 595.
>>16620 Happy 600!!! You're such a beast for developing hydrus.
(340.60 KB 1500x857 spectrum.png)

>>16654 It is called weaponized autism.
is it possible to have certain tags applied based on what site a file comes from? like, i'm downloading from [site] and want all files from that site tagged site:[site]. thought i'd seen an option for this before but can't recall where, if true
>>16656 Not sure if this helps, but try: network -> downloaders -> manage default import options... -> choose one from the list and hit 'edit'; a new window appears, then 'set custom tag import options just for this importer'. See image. There you can put in tags that get applied to every file that passes through that import context. Not sure about other sites that aren't in the list--at least not automatically, I guess. But manually, if they have a URL, you can search for system:urls and just give all the files with a certain URL that tag.
I've been running from source on kubuntu fine, but I've updated to 24.04 and now I get the following:

  File "/home/admin/hydrus/hydrus/hydrus_client_boot.py", line 24, in <module>
    from hydrus.client.gui import QtInit
  File "/home/admin/hydrus/hydrus/client/gui/QtInit.py", line 139, in <module>
    from qtpy import QtCore as QC
  File "/home/admin/hydrus/venv/lib/python3.12/site-packages/qtpy/QtCore.py", line 135, in <module>
    from PySide6.QtGui import Qt as guiQt
ImportError: /usr/lib/x86_64-linux-gnu/libQt6DBus.so.6: undefined symbol: _ZN9QtPrivate23CompatPropertySafePointC1EP14QBindingStatusP20QUntypedPropertyData, version Qt_6

I've tried rebuilding the venv with various options and specifying the newer library versions manually to no effect. Any help?
>>16658 I should add, I also installed the suggested libicu-dev and libxcb-cursor-dev, no change.
>>16620 >If you noticed the status bar was not showing so much info on single files last week and you would like it back, please hit up the new 'show more text about one file on the status bar' checkbox under options->thumbnails.
thanks for the fix, but one more thing the bottom bar used to show that's still absent is the archive time for archived files, right after the import time. I'd appreciate it if that was added back in as well (if you have the option enabled like I do, of course). I still have plenty of space down there, and it'd make a few things quicker again for me. Thanks!
>>16664 >>16620 devanon, was there a performance reason to make the default 'off' for this info in the status bar? Wondering why you made it default to that.
>>16657 thanks, that was exactly what I was looking for
>>16623 Thanks, I get the same; and, how interesting, it also breaks discord now. I'll adjust the tooltip, I guess this will stay a bit experimental and just a weird option to try for program x when nothing else works. >>16622 Ah, yeah, love that stuff. Best done in an external program that specialises in it. >>16624 Hell yeah! >>16626 >>16627 >>16628 >>16629 Please forgive some of this 'filename parsing'. It is ancient code back from the days when I had even less of an idea what I was doing, and it is all pending a rewrite to my newer string processing tech that has many more tools and test panels and stuff to preview what you are doing. Until then, however, if you want your video_id, try something like pic related in the 'quick namespaces' section of the 'advanced' tab. It basically works like a regex group, so the first match in your full file path gets set with the given namespace. Since you are "(blahblah) [video_id]...", try something like "\d+(?=....$)", or maybe instead of four dots you want "\d+(?=\..{3,4}$)" to capture files with three or four letter exts. Give it a try and see what it says. >>16630 I don't know anything about QLC drives, so I cannot talk too cleverly. Hydrus is heavy on a disk, but it only goes crazy if you sync with the PTR. I'm not up to date on the newest numbers for SSD drives, but I recommend hydrus as completely fine on a 'normal' SSD that has a handful of PB of expected lifetime reads since a heavy hydrus at PTR max might be hitting maybe 10GB a day, and that's only 3TB a year. But if QLC are short on lifetime writes, yes, I would stay away until you work it out. I recommend checking out >>16642 's numbers for specifics. If you don't sync with the PTR, I wouldn't imagine your database would do more than 100MB a day. So perhaps the starting point here is to not get the PTR and play around a bit. You can move the client_files and db directory away from each other. 
Help page on this here: https://hydrusnetwork.github.io/hydrus/database_migration.html You know what you want better than I do, but if you are worried about writes, the client_files directory is actually pretty much read only, so not a big worry for SSDs outside of the fact that your files tend to be big and you don't need them fast. Most users move the files to a slow cheap HDD while keeping the latency-sensitive database files on the SSD. Let me know how you get on!
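Hydev's lookahead idea above can be sanity-checked in plain Python before plugging it into the quick-namespace box. The sample filename and the added `\]` are my assumptions for the bracketed naming scheme discussed earlier in the thread, and this variant assumes a purely numeric id, as hydev's `\d+` sketch does:

```python
import re

name = '[2024-01-02] some title [123456789].mp4'

# Digits that sit just before ']' plus a 3-4 character extension at the end.
m = re.search(r'\d+(?=\]\.\w{3,4}$)', name)
print(m.group(0))  # -> 123456789
```

Note the date digits earlier in the name are skipped because they fail the lookahead, which is the whole point of anchoring on the extension.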
>>16631 Yeah, I'm mixed on situations like this. Ultimately I have a bunch of URL logic that figures out and ignores bullshit sources after a certain amount of data is gathered, so I generally prefer to have the 'mostly good data' than nothing at all. Maybe what we need here is better options so users can control things easier. My 'import options' system is pending a bit of a total overhaul, and I think pulling the 'associate source url' stuff, which I think is in the 'file import options' section now, could do with pulling out to a 'pre-import checks' section or something, and then integrating into a favourites system for easy selection of previous setups. >>16639 >>16641 >>16644 >>16645 >>16648 >>16650 Ah, sorry for the continued trouble here. Another thing I want to do here is add more metadata parse destinations, so you'd be able to pull longer data into a note text (which has upper case) and URLs and whatnot. As it happens, I was also talking with some people today about a future upper-case-supporting 'single text line' metadata type so we can handle stuff like 'title' text better. I've never liked storing filenames and things as tags, because tags are not for describing, but searching. Having a pretty single line that you could pipe filenames and whatnot to would solve several problems. Once you've had a full play with this, I'd be interested in what you'd like it to do in future. I hate a lot of it and want to integrate all sorts of other tech from other locations, like sidecar stuff, so it is less of a pain in the neck. >>16649 I can't offer too much help, but I gave up on the site some years ago. They don't want people to download from them automatically. I don't think I support them by default at all any more on new clients. 
As for alternatives, I understand nothing is anything close to a 1:1 (which I guess is why sank have so many bandwidth problems), but the other boorus are good for what they do, and rule34.xxx covers a lot of western stuff, and if you are looking for spicy content, I believe r34hentai and paheal cover some of it, but you may need to rig a login system together to get hydrus to see it. >>16653 Yeah I think someone was telling me about another problem with it, maybe you via other means, but I forget exactly what the boot error was. Some bad library or something. It is easy to run from source these days, so I recommend everyone who runs on a package like this consider just moving to that, and then you can just git pull to update in like two seconds. All you have to do is run a line in your terminal to download the repo, and then run one .sh and you are all set: (couple more steps for Windows, but nothing super complicated) https://hydrusnetwork.github.io/hydrus/running_from_source.html >>16654 >>16655 Thanks, keep on pushing.
>>16658 I do not know, and I have never seen that specific problem before. It looks like your Qt .so file is mapped wrong in some deep way. I wonder if this is an environment issue, as I am not sure if your venv should be looking in /usr/lib for Qt .so stuff; maybe it is trying to pull your system Qt because it is earlier in your PATH than the venv, and thus it is getting confused when the Python Qt6 is calling one method and the (different version) system Qt6 doesn't have that. Check your PATH and see if something looks busted or duplicated as of recently. I'm afraid I am not expert enough to talk too confidently though. I am barely above a novice at Linux. Do you have a system Qt6 in your system package manager? Any chance it updated recently, and is there a version number attached? Alternately, do you have a system environment variable like this?: QT_API=pyqt6 If so, it may be that your OS is directing you to go to PyQt6 instead of what your venv will have which is PySide6. If so, force this in your hydrus boot script: QT_API=pyside6 And it'll force 'qtpy', which helps hydrus navigate the different available versions of Qt, to go, presumably, for what is in your venv. Another thing to check is to ctrl+f through your venv terminal log, if you can figure a way to do that, for anything 'pyside6'. Maybe there's a weird install error. If you are feeling brave, you could give the 'manual' source help a scan to learn how to activate your venv: https://hydrusnetwork.github.io/hydrus/running_from_source.html#what_you_need And then try installing PyQt6 with something like "python -m pip install qtpy PyQt6-Charts PyQt6" inside your venv, and then forcing the above QT_API=pyqt6 to select that instead. Let me know how you get on! I am sure there is a solution here, and I'm interested to know what you find. >>16664 Thanks I will figure out why the archived time isn't showing. 
I thought it was one of the options I have for 'hide archive time if it is similar to other times already seen', but I don't think it is. I bet it got hidden in all cases by accident in the rewrite. >>16665 Nothing about performance, but I was playing around with it when I was doing the rewrite and temporarily disabled the longer 'pretty' status texts, and both the code and presentation got suddenly simpler together, and I liked it. I realised I'd prefer new users get that simple/clean status bar to start with. Bombarding people with modified times and stuff from the start can sometimes overwhelm, and now, by default, the logic and display text for one file is the same as for n files. I don't like hiding away the texts amongst a million other checkboxes in the options though, so maybe I'll switch it to opt-out in the end. We'll see. Before anything else, I need to fix this archive time display.
>>16669 Building the venv manually didn't help, and I checked over the environment variables and there was nothing related there. Though I've had other software problems show up since updating and I'm about ready to just backup things, nuke it, and reinstall.
I thought the e621 issues were downloader-related, and the last time it was updated in the user-run repo was, like, 2 years ago. So I thought I had no choice but to become a normal human being now, but apparently updating to v600a fixed it? Shows what I know.
>>16669 >Bombarding people with modified times and stuff from the start can sometimes overwhelm, and now, by default, the logic and display text for one file is the same as for n files. NTA, I could benefit from any of that info, so, whatever, but image resolution feels closer to the default info than to the optional info.
Hey hydev, would it be possible to set custom times for moving files? The difference between 1 hour and indefinitely is pretty large.
Sadpanda fucked how they serve thumbnails, and I can no longer get all the files' hashes from a gallery while skipping fetching the post page for the hash. But I can see that the post url includes the first 10 characters of the sha1 hash. Could something like "stop after x files are already in the db" be added, with the same logic subscriptions use to stop after finding urls already in? Alternatively, an "at least 1 file's hash regex matches a string" option in the veto for parsers.
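On the sha1 idea, a hedged sketch of the check such a veto would need, assuming the post URL really does carry the first 10 hex characters of the file's SHA-1 (the function and argument names are made up for illustration, not a hydrus API):

```python
import hashlib

def url_prefix_matches_file(url_hash_prefix: str, file_bytes: bytes) -> bool:
    """True if the hex prefix lifted from the post URL matches this file's SHA-1."""
    return hashlib.sha1(file_bytes).hexdigest().startswith(url_hash_prefix)
```

A 10-hex-character prefix is 40 bits, so accidental collisions across a personal collection are vanishingly unlikely, which is why prefix matching is good enough for a "do I already have this?" check.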
>>16649 Last time I checked, gallery-dl was good for downloading from shitty sites that Hydrus can't (easily) scrape. I'd say it's the holy grail of non-tag-searchable multimedia wrangling. It's extremely configurable so you could probably set it up to download the files and generate a sidecar, then tell Hydrus to automatically import them. You'd just need to set aside a day to dig deep into the examples on GitHub and trial-and-error your way to the perfect gallery-dl config.
I'm >>16658 and I fixed my install: found an ancient .bashrc include with an unnecessary LD_LIBRARY_PATH set. Probably had it there to fix something ~10 years ago, and it somehow hadn't broken things before now. I guess this is what eventually happens when you Ship-of-Theseus your home directory across OSes for over 15 years.
(26.58 KB 743x214 heightwidth.png)

>>16621 >>>16605 >Thanks, width/height should be fixed today in v600, but I had to do it a weird way so let me know if anything is messed up.
>>16618 >>16619 >>16621 Hey, sorry, I just looked at this now and I didn't realise when I read your post that this is for a parser I did not write and a site I don't support by default, I presume this guy https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Rule34.us . I first read it as rule34.xxx. In order to stay sane, I can't offer too much support fixing external downloaders, so I'll drop a note in the discord that this downloader has your problem, but I won't fix it myself.
got a question if this is possible. I am more or less a colossal idiot and have a problem of my own making. My current session is, I think, over 700 pages, and the total weight in RAM is something like 17-20GB. Now, that said, on startup everything tries to open at once. Realistically, I think the only things that need to be actively opened are active watchers and gallery downloads and a few top-level pages that work with subscriptions pushing files to them. So here is what I'm wondering: would it be possible for the program to remember all the stupid amount of pages I have open, but only load the files when they get interacted with? I don't know if it would make the program more lightweight, or if it would help with stability given the stupid shit I keep getting myself into with too much stuff open, but it would probably help with startup times at the very least.
>>16680 You're right. I only just noticed this now, too! Sorry for the bother and thanks for your help!
>>16682 >700 pages >something like 17-20gb Anon is clearly a madhouse patient with an internet connection. That said, I support his request.
a kingdom for a tag view that only displays PTR additions/requests, so I don't have to spend a needless amount of time making sure I'm not shitting up the PTR
>>16668 The nice thing about Flatpaks is that they give a single environment that doesn't change between different Linux distros. So Fedora might have a version of Python too new for the venv and Debian one too old (note I'm pulling these out of my ass), not even to speak about the other random dependencies like mpv and xkb or whatever. The Flatpak would at least in theory be the same on both and Just Werk with a one-click install. I switched to Flatpak because I was tired of fiddling with running from source, I tried it again just a bit ago and venv_setup complained about my python version and then said QtPy wasn't installed when I tried to run it anyway (it was).
>>16682 I would also like this feature, if possible. >t.189 pages 7.5 million session weight
>>16682 >>16693 Why not just use favorites searches function?
>>16694 Not the 700 pages anon, but most of my stuff is actually background tasks that I continually procrastinate on cleaning up. Lots of thread watchers & gallery downloads.
I had an ok week. I cleaned some code, fixed some bugs, and figured out a neat way to find still-importing pages before you close them. The release should be as normal tomorrow.
I don't know how much of an edge-case this is, but I'm consuming a (bad) API that both - doesn't publish a "next page" element, and - returns an error (400) when you request an offset greater than the post count, which it does publish. Say you're consuming `/api/:id?offset=100`: it returns 50 results (so 100 -> 149) and has a `count` on each result page, which is the total number of posts available. If, for example, `count` is 134, then requesting `/api/:id?offset=150` throws a 400, so Hydrus considers the domain as erroring, stopping downloaders and subscriptions alike. Is there a way to compare what's in the URL to what's in `count` mathematically ("if `offset=\d+` in URL + 50 is lower than `count`, return that as next page")? Or maybe a way to say "if there are fewer than 50 results on a gallery page, don't bother loading a next one"? Or even a way to tell Hydrus "if this domain throws this specific error, ignore it"? As there is no "next page" item, I'm just increasing `offset` from the URL class. Thanks in advance!
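Absent a native option, the arithmetic described above is simple enough. A sketch of the rule such a downloader would need, with the page size of 50 and the `offset`/`count` names assumed from the example:

```python
def next_offset(offset: int, page_size: int, count: int):
    """Return the next offset to request, or None once it would overrun count."""
    nxt = offset + page_size
    return nxt if nxt < count else None

# With count=134: offset 100 covers the final partial page, so stop after it.
print(next_offset(50, 50, 134))   # -> 100
print(next_offset(100, 50, 134))  # -> None
```

This is exactly the comparison the post asks the gallery-URL generator to perform; the hard part is that hydrus's URL classes currently have no slot for it, not the math itself.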
https://www.youtube.com/watch?v=dQKFeLA6Rrs

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v601/Hydrus.Network.601.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v601/Hydrus.Network.601.-.Windows.-.Installer.exe
macOS
app: No macOS App this week, sorry!
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v601/Hydrus.Network.601.-.Linux.-.Executable.tar.zst

I had an ok week mostly cleaning and fixing things.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

Last week, I hid the 'needs work' suffix on the duplicates filter page when its work was above 99% complete. If you liked having it even at 99.9%, there's a new checkbox under options->duplicates that lets you turn it back on. The label now also says the current search percentage.

I fixed multi-line note .txt sidecar parsing. A recent round of 'let's make this parsing input cleaner' was accidentally collapsing the newlines, so it was all coming out as one connected line. This isn't the first time multiline parsing has broken here, so I've made a couple of unit tests to make sure this doesn't happen again. Some of the sidecar UI will also highlight multiline content better. It isn't perfect by any means, but a couple steps forward.

When you close a bunch of pages and the client says 'hey, these 3 pages are still importing', I've added a button that says 'show me the pages'. It creates a little window with a list of buttons, with the page names, and clicking them will take you straight to the page. I tried a couple new things here, so let me know what you think.

Qt5

I am officially dropping support for Qt5 this week. This affects users on older OSes who were running from source and who cannot run Qt6, typically Windows 7 era.
The program still appears to boot in a new version of Qt5, but I expect I will simply break it one day by using something that is Qt6-only, and if the fix is not trivial, I will not work on it. I no longer maintain my Qt5 test environments. I understand that Windows 8.1 might boot Qt6 with some work, but it is not a sure thing, so if you are on 8.1 and have been running from source on Qt5, you may run into a roadblock soon, if you haven't already. Let me know how you get on, and I will update the 'running from source' help to talk about the options for people in your situation.

The 'setup_venv' script, which source users rely on to create their code environment, now has a simpler Qt decision in the 'advanced' route, and also has an option to try PyQt6 rather than PySide6.

next week

More of this mixed small work, and if I can organise myself I will move duplicates auto-resolution forward. The macOS App failed to build just now because the runner was deprecated--I missed the notice! I will test out some fixes this week and post them on the discord for macOS users to try out! Sorry for the trouble!
Edited last time by hydrus_dev on 12/07/2024 (Sat) 22:57:54.
>>16710 >I am officially dropping support for Qt5 this week. Damn, it's so over I wish Qt6 didn't look so ass compared to Qt5 at 125% windows scaling.
(86.33 KB 1280x720 7485.jpg)

>>16712 >Damn, it's so over Not if you run Hydrus with Virtualbox.
Hydev, i got some questions:

1) When going to 'options -> importing', a DnD from Windows Explorer into Hydrus is considered a loud import, correct (except when you 'use custom file import options just for this importer' and change it to the 'quiet default' there)? Same when going to 'file -> import files...', which gives you the same window as DnD. Can the description therefore be updated to make that more clear? The description under 'options -> importing' says that the 'quiet' import context is also for 'import folders, subs, Client API', which confused me, because this 'import folders' means the 'manage import folders...' feature, which automatically adds files as new ones appear, as opposed to, say, a folder that you DnD from Windows Explorer. I think beginners might mix this up, just as I did. So instead of just 'downloader pages' for 'loud' import contexts, would it be good to also add something to the description like 'one-time file/folder imports' (from file -> import files...) and 'DnD from Windows Explorer' or something similar?

2) When searching for archived files, the archived time doesn't display on all files, neither in the main gui status bar nor in 'manage -> times'. Is this because I checked the 'archive all imports' checkbox for those files (can't remember for all) and they were archived directly without going into the inbox first, which is needed for an archived time? I think that's it. And it has implications for archive-time sorting, I guess; I just tested it. A new file that I added and archived directly without going to the inbox first has no archive time and sorts to the beginning of the files without an archive time, but not to the beginning of all the files, which it should. But it can't, because there is no archive time for this file. Is it really that much of an edge case to archive directly, and no one noticed? Is that behavior on purpose, or am I maybe doing something wrong?
3) Using 'system:similar files' gives two tabs, one with 'files' which i understand needs the sha256 hash. Maybe instead of saying 'Paste the files' hash(es) here', you could say 'Paste the files' sha256 hash(es) here'. Because first i tried it with other hashes and it didn't work. The 'data' tab is confusing me though a bit. It seems it needs pixel hashes. The description says 'just copy its file path or image data to your clipboard and paste'. Image data from clipboard worked by opening the file in my image viewer irfanview and 'copy' the image, or right-click and copy an image in Chrome. But how does one copy a file path and enter it so it works? Or is it an outdated description? Also i don't quite understand why there are two tabs in the first place. Aren't they doing essentially the same? Can you explain what the difference is between the two and in which cases they should be used? For me as a noob it just seems like it is looking for a a file with the pixel hash (data tab) or a sha256 hash (files tab) and then lists the potential duplicates. Why not use just one tab with one box where you can put any of the hashes in, even blurhashes/md5 for example? Wouldn't hydrus just look for the file with that blurhash/md5 and then list all the potential duplicates it already found for that file or is it more complicated than that? Thanks!
>>16670 Not sure if it helps, but I was setting up more test environments last week and setting up the venv in python 3.13 was a bit of a pain--I had to wangle a custom version of pyside6 and numpy because the stable versions that hydrus is fixed to are not available on so new a python. If your 24.04 has moved you up to 3.13, I wonder if you have similar package version issues? Maybe PySide6 failed to import so your qtpy fell back to trying your system Qt. Let me know how you get on! afaik python 3.10, 3.11, and 3.12 are all good now with the simple venv setup option. I don't know about 3.9, but I suspect things are shaky on some systems. I hope to move us up to Qt 6.7 by default in January, and then 6.8, the earliest Qt that Py 3.13 will run on, will be available in the setup_venv test script as the new 'advanced' version. I still have the numpy problem to figure out, but I'll see what I can do. Now I have these new test environments, I hope to keep a better handle on these version issues. >>16671 I provide e621 as one of the default downloaders in the client, so I take responsibility for fixing it in my weekly updates whenever something is broken. I'm not sure what any old copy on the downloader repo is; maybe one of my old ones, or something some guy made. That said, I think some users had some CloudFlare or other CDN issues with it somewhat recently, I presume some dynamically assigned temporary range-bans on IP ranges that are sucking up too much bandwidth (probably guys doing whole-site takes of boorus to train AI models). Same with rule34.xxx. It sometimes spits out 403 on search pages, and it is one of those newer 'click here to prove you are human' interstitial pages that hydrus cannot pass. Changing your IP seems to fix it (and obviously not being the guy who is sucking up hundreds of GB of bandwidth). >>16672 I was just talking with someone else about adding some finer options here. I'll play around with it, hopefully this week. 
Simple resolution might be nicer as default. >>16675 Sure, I'll figure something out. I'll say though that I hate blocking the program for any long amount of time, and I am, although the project is dragging, in the middle of a file storage rewrite, so this stuff happens all the time in tiny bits in the background; my better preference here is to make the option irrelevant. Also, I used that mode yesterday, and when your client is large the timing is all fucked up, so it blasts past the ten-minute timer to the next safe checkpoint to quit and it can be like two or three times as long, arrrgghhhhhh. >>16676 I don't fully understand the problem here, but I think I get that it is a complicated and specific case. I'm trying to hardcode weird hacks for complicated problems less these days, since they tend to bite me later once I've forgotten about them, and unfortunately say 'complicated problems need human eyes to fix'. That's a lame answer to your question, sorry! Being able to check hashes in the document is an interesting idea--I presume this is of hashes that have already been downloaded? My parsing system still isn't clever enough for temp variable storage, and veto logic based on that might be even trickier to pull off, so I can't promise much here. Maybe the 'stop once you are caught up' mode for the normal gallery downloader is feasible as a simple checkbox. I'll have a think about it.
>>16676 >>16677 Yeah actually I second this. Hydrus's downloader works well for simple gallery/booru sites, but it falls over at the more custom places. gallery-dl is excellent, but I don't have personal experience with getting it to do much clever stuff. If you figure out any good workflows, maybe with sidecars, please let me know how you get on and perhaps we can gather some templates or scripts together, and I can figure out a big red 'import gallery-dl sidecars' template button in the hydrus import UI or something, to make it easier for others. >>16678 Thanks, that's interesting. Great! >>16679 Thanks, should be sorted in v601. I've been saying I did this in a weird way and I realised on Friday a much simpler way to do it, so I'll revisit this. >>16682 >>16693 >>16695 I apologise, but the unfortunate answer here is to clean up your session! I know the problem very well myself, and I know that if I figure out a bunch of tricks to defer page load, it will not help you process files faster--the session will only grow even larger. Helping users get through their firehose is a difficult and frequently philosophical problem that I don't have a good grapple on. Most users, including myself, get way more files per week than we can handle, but for handling that in the UI session I am strictly going to KISS. I do a little deferred load right now (that's when you see a page is set as 'initialising'), but doing it to excess, or deferring a downloader page's load, can cause trouble. Suddenly I have to think about what if the Client API talks to a non-loaded page, or how to handle a session save on a page that is semi-loaded in some complicated way, or content updates, or service updates. I will not wander far here, because I am not clever enough to not make a mistake. THUS, I must command you: figure out strategies for putting those pages that you have not touched in months or years to bed. 
Something I have done in my own situation is setting up 'processing' tags, like a tag called 'read later' on 'my tags'. When I encounter a watcher with a hundred long thread screencaps or something, I hit my shortcut that applies that tag (or like/dislike ratings, which also work well for this) to the selection, and then I know I can summon them again with my 'read later' favourite search. I can then delete the DEAD watcher without worrying. The joke of course is that I so rarely access my 'read later' tag that I may (still) never see those files again, but the difference is that those files are no longer clogging up my session, so it is a net gain. I have about twenty or thirty favourite searches now and use some every day and others once every few months. Many of them are sorted randomly and have system:limit=256, which is nice and quick, and since I rarely archive/delete more than 256 items in one go, it is no different than if I had loaded up twenty thousand files. My main session ranges within 250-750k, mostly from an active watcher page, and makes for a slick client despite having four million files. I've said elsewhere that as I have developed hydrus, I have come to understand how much I do not understand what a million files actually means. Best to hide them away from our human eyes and eat the infinite-queue-wait cake in pieces we can comprehend. Also, I need to make session saving work better. The current custom save/load workflow sucks.
>>16709 Damn, how annoying. You'd think it'd give 404 rather than 400 in that case. As a side thing, it has long been the plan to expand the URL Class system to have custom logic handling for different status code responses to handle these sorts of situations, but it will be too far in the future to talk about properly here. I wonder if it would be possible to do this veto. Hydrus isn't clever enough to do any comparator stuff within the parsing system, but I wonder if you could somehow trigger a veto if that number is too small. This is a first thought, and it is very stupid, but could you:
- parse the gallery page
- get the count (134)
- given the gallery page URL, using a CONTEXT VARIABLE formula, extract the offset n (100)
- add 50 to the n, giving 150, using 'integer addition' in a String Converter
- concatenate those two with a ZIPPER formula into like 134|150
- use some incredibly bullshit regex to somehow test for x < y and if so replace the text with 'veto'
- veto if the output of that formula is 'veto'
The way to effect the regex is probably completely ridiculous though, or so convoluted by breaking it into like 1,1|3,5|4,0 and then applying some whack nested comparison list that you are dabbling in voodoo. The answer is probably that I should figure out a 'COMPARATOR' formula or something that works maybe like a ZIPPER and allows you to do number tests on the results of two different formulae and return custom strings, say defaulting to "True"/"False". I will think about this; I cleaned up a lot of formula code recently when adding NESTED formulae, so this may not be so difficult and we'll finally have some logic in the system. Let me know if you do figure out something clever here. >>16712 I actually altered a bunch of my text here last minute since I thought I had already killed Qt5 with my Enum rewrites. 
I didn't like being so imprecise in the release post draft with "I think I broke it already", so I spun up a quick Qt5 venv on Wednesday and was flabbergasted when it booted no problem. I won't officially support Qt5, but it seems like my immediate plans do not break it. Good luck, and let me know how things go!
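Going back to the gallery-count veto above: stripped of the parsing-system machinery, the number test that a hypothetical COMPARATOR formula would perform is trivial in plain code. A rough Python sketch for illustration only--the function name and the 'veto' string convention are made up here, not hydrus behaviour:

```python
def veto_if_exhausted(total_count: int, next_offset: int) -> str:
    """Return 'veto' when the next gallery page would start at or past the
    reported total number of results; otherwise pass the offset through."""
    return "veto" if total_count <= next_offset else str(next_offset)

# 134 results reported, next page would ask for offset 150: stop here
assert veto_if_exhausted(134, 150) == "veto"
# 134 results reported, next page starts at offset 100: keep going
assert veto_if_exhausted(134, 100) == "100"
```

Doing the same comparison with a regex over a zipped string like '134|150' is possible in principle (pad both numbers to equal width, then compare digit by digit), but that is exactly the voodoo described above, which is the argument for a real comparator step.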
>>16727 1) Thanks, I will clear this up. Your description is completely correct, I think. Basically, a 'loud' guy is anything that ends up importing through a page in the main gui, and a 'quiet' one is a subscription or import folder that will start and stop in the background when you aren't looking, or indeed an API call that happens without the user seeing. The options difference is mainly imo for setting 'presentation' options. You might like your subs to only present new files (to their popup button or destination page) but for your galleries to present things you already have and in archive etc... In the not-too-distant future, I expect to break up 'file import options' into smaller parts for easier customisation of imports. It is way too clunky atm. 2) This is crazy! Thank you for this report! How is it not recording an archive time? I will fix this, and see if I can figure out a retroactive fix too. We know the archive time for these files. 3) Thanks, I will fix that label to say which hashes. For a 'file path', that's just the path as a text string (like "C:\Users\YOU\Desktop\image.png" copied from notepad raw). It looks like that button is not parsing file paths copied from like copying the file in Explorer, which is a slightly different datatype on the clipboard, so I'll look at it. The two tabs are a bit of an ugly mish-mash because, as always, it was technically simpler for me to set it up that way. The main difference, and I'll see if I can fix the labels to clarify this too, is that the 'data' tab works like SauceNAO for any file, including stuff hydrus has never heard of (e.g. some image you just copied from your browser, or maybe even hashes your friend sent you from their client), but the 'files' tab refers to specific files actually in your database now. In the database, the 'files' tab basically looks up the pixel hashes and phashes live, so it is a wrapper/proxy for the 'data' search. 
The 'data' thing is something I hacked together to do that search immediately, with static values, and then it became a real thing and since they are secretly different system predicate types that store different data and search differently I squashed all the UI together but couldn't merge them completely. This may be in the weeds a bit, but a phash or a pixel hash can also refer to multiple files, whereas an sha256 hash refers only to one specific file. I don't have blurhash/md5 in there, but that's not a bad idea. I'd just have to write the hash lookups for it all. Thank you for your feedback! I will try and clarify things. Let me know if anything else stands out in future.
If a file is in two file services, and I want to remove it from one of the services ("processing"), then if the file is archived, I have to unarchive it. I wonder if anybody likes the way it is. What happens would make more sense if a file was archived per service. I remember when it was impossible to just move an archived file between services, and you fixed it.
>>16710 Management of the window on GNOME Shell is pretty broken now. Can't move the window, can resize, a part of it sticks out to the other screen.
>>16745 Wayland.
>>16745 >>16746 Not all windows, but "edit subscription query", "edit import options". "manage subscriptions" can be moved up and down, not to a side where there is no screen.
>>16744 >What happens would make more sense if a file was archived per service. Hm, i don't see why it would make more sense. If you want a file to be archived in one, normally you want it to be archived somewhere else too. You can't remove it from your 'processing' service because it seems you have the delete lock activated for archived files under 'options -> files and trash', somewhere in the middle. Maybe deactivate it if you feel brave enough. But then before you process and delete a file permanently, i'd suggest delete into trash first as a safety net. @Hydev Just a minor thing and not that important: If i right-click a file that is in two services, then move the mouse onto 'delete', a submenu opens where i can choose between the two services, but there is no delete from 'all local file services'. If i choose any of them, a delete dialog appears where the correct one is already picked for me, but i can also choose to delete from all local file services as a third option, which is good. Wouldn't it be good to have maybe a line-separated entry in the submenu that says 'delete on all file services' under the separate file service entries? Something like:
right-click -> delete ->
file service 1
file service 2
___
all local file services (<- add this, also for several selected files if they are in more than one)
It's just a way to communicate to the user that this option exists. I actually forgot that the 'delete from all local file services' option appears in the delete dialog and thought that i would have to delete a file from each file service separately, because it only showed me the two on right-click -> delete. Only bother if it's a quickie for you :)
Can i somehow check if imported file(s) were deleted from my disk, and if yes, then remove them from hydrus?
little UX suggestion. I think the "default export folder" setting should be under "exporting" instead of "files and trash". I'd suggest either moving it there, or having some text saying to go to "files and trash" if you're looking for the default folder. last week I tried changing it and when I didn't see it under "exporting" I assumed that I was just misremembering that there was a setting for that at all, but then just today I found it by accident.
I had a great week. I worked on a variety of quality of life improvements, including an overhaul of the system:rating predicate UI, and I wrote a maintenance job that will fill in various missing file archive times. The macOS App also makes a return. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=vIlXkNswM1Y

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v602/Hydrus.Network.602.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v602/Hydrus.Network.602.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v602/Hydrus.Network.602.-.macOS.-.App.dmg (might take a while to appear)
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v602/Hydrus.Network.602.-.Linux.-.Executable.tar.zst

I had a great week. We've got a bunch of quality of life work and a fix for missing file archived times.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

On update, your database will look for archived files that have no recorded archived time. You very probably will have some, especially if you are a longer-time user, and if you have a big db, it might take a couple minutes. There are two types of missing archived time--anything archived before 2022-02, and a recently discovered bug related to files that archive as they import--and a new maintenance job will walk you through fixing either with good synthetic values. I added some checkboxes to options->media viewer to further edit what the file info summary line on the top hover window (and optionally the main gui statusbar) says. You can now turn 'archived' on/off, and decide if it shows a time, and turn local file services on/off, again with time or not. The 'edit ratings predicates' UI gets a complete overhaul. Previously, you edited all the ratings services at once and it was simply not great to work with. I've changed it to work like everything else, where you edit each rating one at a time, each with their own 'ok' button. The various choices here are simpler and clearer now, too. A whole load of system predicates get label cleanup this week. 
Much stuff like 'system:width: has width' is now 'system:has width', and the system predicate parser is updated to handle more of these cases, both the old and new formats. Also, 'system:file properties' gets a nice two-column layout. The macOS App is back! Sorry for missing it last week--github retired the version of the thing we use to build it, and I missed the notifications. We've updated from 'macos-12' to 'macos-13', and everything seems to be good. Let me know if you have any trouble with it!

birthday and year summary

The first non-experimental beta of hydrus was released on December 14th, 2011. We are now going on thirteen years. I had a great 2024. I managed to dig myself out of some IRL holes and kicked out a lot of code this year. I did not finish as many big features as I had planned, but this really turned out to be a good 'optimisation' year. The client is faster, more stable, and simply more pleasurable to set up and use, especially at large scale. The codebase is also generally a little less insane. Users also contributed many times this year, to fix downloaders, help with bugs, or roll out clever interesting new features and nicer platform support for Linux and macOS. The year started with an overhaul to millisecond-precision timestamps at the database level. We also got time editing for multiple files at once and time editing in the Client API, allowing for better mass-management of timestamps. I pushed on system predicate parsing, and now the simple system predicates' labels, which are also more unified in form, all parse if you paste them back. Thumbnail rearranging was added, and with it the 'incremental tagging' system, the long-planned system to auto-tag files with numbers based on their file order. We got the new 'Number Test' system, which includes precise user control over the +/- absolute or percentage value for a variety of system predicates. 
URLs received a complete display/storage overhaul, which was a hell of a thing to get finished but ultimately fixed a heap of difficult bugs. The new 'ephemeral' URL parameters also helped to support and optimise some difficult downloaders. Darkmode support improved, and we figured out how to set the program's custom colours via QSS stylesheet. Numerous ad-hoc labels and system terms were unified to the same formats and layouts across the program. Many widgets size themselves better. The thumbnail/media viewer menus were cleaned up and harmonised. Multi-column lists were overhauled to a much more efficient system that sorts faster and only needs to render what is in view, allowing for quick lists even with tens of thousands of rows. Tag filters became more efficient, allowing thousands of tags at once, and the new 'purge tags' tech for tag repo janitors enabled widescale PTR cleanup and lays the groundwork for more en masse tag removal clientside. The sibling and parent dialogs finally got their asynchronous overhaul, so they now boot quickly and can also untangle loops automatically. The parsing system gained some new clever formulae and string processing tools. Search pages received a bunch of OR quality of life. For filetypes, we added rtf, docx, pptx, xlsx, doc, ppt, xls, richer cbz data, animated webp, and, thanks to the long efforts from one user, finally animated ugoira. Database vacuums became easier. This was also, unfortunately, the year for Win 7's final sunset, and we are likely to see Qt5 go next year. I also removed the ancient 'local booru', now that user-made Client API projects can do its job much better. As always, there are still too many things I want to do. I am thankful for good health and enough money, and I expect and hope to keep working at this through 2025. I really appreciate the feedback, help, and support over the years. Thank you! 
If you would like to further support my work and are in a position to do so, my simple no-reward Patreon is here: https://www.patreon.com/hydrus_dev

next week

I only have one more release in the year, so I'll just do some very simple cleanup. Nothing big, so there's less danger of me breaking something over the Christmas break. >>16750 Thank you, done!
(15.74 KB 360x360 iOoQJDU.gif)

>>16755 Thank you!
hi, is anyone else unable to download anything from kemono? post urls are 'ignored' with the note 'The parser found nothing in the document, nor did it seem to be an importable file!'
>>16757 Can confirm that the kemono.su parser is broken in v602.
>>16748 >You can't remove it from your 'processing' service because it seems you have the delete lock activated for archived files under 'options -> files and trash', somewhere in the middle. Maybe deactivate it if you feel brave enough. But then before you process and delete a file permanently, i'd suggest delete into trash first as a safety net. If a file is archived and is in its final service, I want to remove it from the processing service easily, but not if it is only in one service. I'd use delete lock for when deleting is actually dangerous and not when the file is in "duplicate" services because it was imported multiple times. There is another place where delete lock annoys me, but I can't just disable it, because I would likely delete something wrong: the duplicate filter.
>>16755 congratulations! it's impressive to see how much Hydrus has improved in the years I've been using it!
(45.47 KB 996x752 hydrus_client_f1qhdjfFMv.png)

>>16755 Thanks for all the work you do! Hydrus is an amazing piece of work. I've been using it for 9 years now.
(782.18 KB 1280x720 shocked.png)

>>16761 >347,204 files >100% archived
(26.74 KB 713x499 hydrus_client_YOGKjBTBPD.png)

>>16762 Indeed.
>>16755 Cheers, Hydrus dev. I've been using Hydrus since version 196. Keep up the great work, you glorious bastard.
This would probably be something for after your break, but I'd really appreciate it if you could add an option for the deletion lock for archived files to make an exception for the duplicate filter, so deleting works normally there. I want to have the lock enabled, but I can't, because then all the files that are marked as worse duplicates just stay and never get deleted, and deleting bad duplicates is why I'm using the duplicate filter in the first place. There's not much danger of accidentally deleting files you really like in the duplicate filter imo, because you're explicitly looking at each file every time you make a choice, so I think it would be safe anyway.
I updated recently and I notice that when importing from pixiv, tags are getting translated duplicates from the pixiv encyclopedia, and while that IS certainly a neat feature for some/most, I'm an overly pedantic fuck who likes to sibling tag my own (some tags are puns or wordplay that I like to preserve, or the translation is ESL-tier and I can see an obviously better wording, or I just don't like the pixiv encyclopedia entry and would rather fight this battle on my PC than on pixiv). I've done a cursory rummage through the options but I don't see a toggle. So, how do I disable this behaviour? >>16755 Keep up the great work man. I don't often post here but I still use the program basically daily and started around the early 200s. I cannot fathom using anything else for large image collections. This software, it was made for me! It's my software! drr...drr...drr...
>>16766 afaik downloaders don't yet have a way to disable individual components without deleting them. this is a feature I've seen requested a few times (and I'd like it too) so I'm sure that Dev will add that functionality eventually. for now, all you can do is delete that content parser, but then it won't exist anymore if you want it again.
Hey I fucked up the 'sort files by' menu sort order (ironic) this past week. Fixed on master already if you run from source, will be rolled into v603. >>16744 >>16748 >>16759 Thank you for this report. I consider this a bug. The delete lock should only apply if you send a file to the trash (i.e. delete from all local domains). I expect I wrote this code before multiple local domains were really baked in, which is why I had to hack the 'move' fix before. I'll fix this up; I don't use the archive lock personally so please let me know what else goes wrong here as you encounter it. I hate the delete dialog in general when you have a file in multiple services, too. I've written a bunch of logic to try to nicely navigate and do memory on the various options, but this shit gets complicated behind the scenes and it isn't good enough. >>16749 I don't think so, but I'm not sure if I totally understand what you want. If you want to do: - import some files to hydrus - delete them manually from the original folders for whatever reason - go back to the hydrus import folder and 'sync' the origin deleted status to the hydrus thumbs I do not think so outside of some crazy workflow where you, like, re-imported the folder and did some system:hash set theory to load up a page of everything that wasn't reimported and then delete that. If you know how to do scripting, the Client API might be able to do what you want, but it would probably be its own pain. I think it depends on your actual workflow here. Unfortunately, on my hydrus side of things, once an import is done, I forget about the original filename and location completely. >>16756 >>16760 >>16761 (well done!) >>16764 >>16766 Thanks, keep on pushing. >>16765 Thanks. I've resisted on this sort of thing before because it ends up being a nest of logical exceptions, but I think it makes more sense for the duplicate filter. I'll see what I can do. 
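On the 'sync origin-folder deletes into hydrus' idea above: if you do keep your own record of what was imported from where (e.g. the Client API's /add_files/add_file response includes each file's sha256, so an import script could build that map as it goes), the cleanup pass is small. A hedged Python sketch, not a supported feature--the endpoint and header names are from the Client API docs, but the port, access key, and the known_hashes map are placeholders you would supply yourself:

```python
# Sketch only: assumes a Client API service on the default port with an
# access key that is allowed to delete files, and a known_hashes map
# (original filepath -> sha256) that you recorded at import time.
import hashlib
import json
import urllib.request
from pathlib import Path

API = "http://127.0.0.1:45869"
KEY = "put your client api access key here"

def sha256_of(path: Path) -> str:
    # hash a file on disk the same way hydrus identifies it
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def delete_missing(known_hashes: dict) -> list:
    """Collect the hashes whose source files no longer exist on disk and
    ask hydrus to delete those files. Returns the hashes acted on."""
    gone = [h for p, h in known_hashes.items() if not Path(p).exists()]
    if gone:
        req = urllib.request.Request(
            API + "/add_files/delete_files",
            data=json.dumps({"hashes": gone}).encode(),
            headers={"Hydrus-Client-API-Access-Key": KEY,
                     "Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    return gone
```

iirc that endpoint sends to trash rather than physically deleting, which doubles as the safety net mentioned elsewhere in the thread--but verify against your own setup before trusting it with anything.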
>>16766 >>16767 Yeah, we don't have good ways to categorise and filter parsable content in this way. This is an ugly manual solution, but try:
- network->downloader components->manage parsers
- select 'pixiv file page api parser', click 'duplicate'
- edit it, rename it to like 'pixiv file page api parser (my custom)', and in the 'content parsers' tab, delete the 'tags (translation)' entry (I guess 'translated title' too?)
- hit 'fetch test data from url' and do a 'test parse' to make sure it is getting what you want. sub in an id into the URL from any work you want to specifically test
- hit apply to save your custom parser
- hit up network->downloader components->manage url class links, find 'pixiv file page api', double-click it, and select your new parser
You'll no longer get the translated tags. HOWEVER, if pixiv breaks and I roll an update into a new version of hydrus, your custom parser will not be updated, so you'll have to re-do this fix. Let me know how you get on!
>>16768 >This is an ugly manual solution, but try: Not particularly ugly (if esoteric) but it does exactly what I need it to. Thankya much!
(944.36 KB 848x1200 stechkin.png)

(75.57 KB 850x478 ksenia.jpg)

So here's a weird one. With the launch of GFL2 I'm starting to get a lot of new images. Thing is, I have images of the "same" character from GFL1 but with vastly different designs. To make matters worse, nobody is tagging properly; everybody is just using the same character tag from the old game alongside a tag denoting the new game, even though aesthetically the characters are (usually) very different, different enough to warrant a new character:character. So onto the question. Is there a way to have a conditional replacement? A sibling simply turns one tag into another; a parent adds a tag alongside another(s). What I'd like is for [character:stechkin (girls' frontline)], when alongside [series:girls' frontline 2], to be replaced with [character:ksenia (girls' frontline 2)], but NOT get rid of [series:girls' frontline 2], and not do anything at all to all my old images tagged with [character:stechkin (girls' frontline)] WITHOUT [series:girls' frontline 2]. It would save me so much work across all the different characters to be able to do this (UMP9/Lenna, M1895/Nagant, MP7/Cheeta, etc. etc.). Attached are two examples of the example qtp2t. You can see design-wise it's basically an entirely different character.
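As far as I know there is no native conditional sibling (siblings/parents are unconditional), but this is scriptable over the Client API as a one-shot retag until something better exists. A hedged Python sketch--endpoint names and the action codes are from the Client API docs, but the port, access key, tag service key, and function names here are placeholders, and your key would need search and edit-tags permissions:

```python
# Sketch: find files carrying BOTH tags, then add the new character tag
# and delete the old one, leaving the series tag untouched.
import json
import urllib.parse
import urllib.request

API = "http://127.0.0.1:45869"
KEY = "put your client api access key here"
TAG_SERVICE = "put your local tag service key here"

def search_hashes(tags: list) -> list:
    # every file matching ALL of the given tags, hashes returned directly
    q = urllib.parse.urlencode({"tags": json.dumps(tags),
                                "return_hashes": "true"})
    req = urllib.request.Request(
        API + "/get_files/search_files?" + q,
        headers={"Hydrus-Client-API-Access-Key": KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["hashes"]

def retag_payload(hashes: list, old_tag: str, new_tag: str) -> dict:
    # content action "0" = add, "1" = delete, per /add_tags/add_tags
    return {
        "hashes": hashes,
        "service_keys_to_actions_to_tags": {
            TAG_SERVICE: {"0": [new_tag], "1": [old_tag]},
        },
    }

# usage sketch (network calls left commented out):
# hashes = search_hashes(["character:stechkin (girls' frontline)",
#                         "series:girls' frontline 2"])
# body = json.dumps(retag_payload(hashes,
#     "character:stechkin (girls' frontline)",
#     "character:ksenia (girls' frontline 2)")).encode()
# urllib.request.urlopen(urllib.request.Request(
#     API + "/add_tags/add_tags", data=body,
#     headers={"Hydrus-Client-API-Access-Key": KEY,
#              "Content-Type": "application/json"}))
```

You'd have to re-run it for new imports (or wire it to a subscription), which is exactly why a native conditional rule would be nicer.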
>>16747 Still on 601, when I try dragging "manage subscriptions", the main window moves instead.
>>16777 It happens with other programs, too. Not Hydrus fault.
>>16778 This. Linux windows management is in such a fucking bad place right now I've been unironically using hydrus on my windows boot. As you say this is nothing to do with hydrus at all and everything to do with the gnome/KDE level guys. Developers are acting like x11 being old enough to have a whole ass family of children with the oldest going off to college soon snuck up on them and that after 1,159 discussion threads, 17 official requests, 8 strongly worded letters, 3 warnings and finally 1 hard deadline to switch over they're being put upon to redo everything at the last minute when in fact they've been grognard'ing about any graphical change in the worst possible definition of the word for 15 years. Any time between the age of your average zoomer they could have started this, they're only doing ANYTHING because they're being FORCED to and we're FORCING them too because we HAVE to. We've had to drag these rotund people through the sand so slowly that now even wayland is looking a bit grey and weathered and might not even be the best solution... but it's the boat we have and it's taken THIS LONG to FORCE people to use even THIS upgrade. While I viscerally hate the tabletification of recent releases of windows and mac and am not saying ALL progress is good progress... I genuinely don't know why the *nix space is like this. Developers will whine incessantly about having to implement anything beyond command "menus" and a neat swirling ascii of the arch logo, begrudgingly drag their oedematic feet on implementing windows3.1/macOS7-8 levels of windowed user environment and like under-ground dwelling albinos to the sun avoid anything improved beyond that because they just play with textual lego in VIM all day and never need to actually do anything with programs beyond make more programs. THIS is why "the year of the linux desktop" is always XX years away. 
It's not games, it's not the software itself, it's this one underpinning that causes all the other issues and the greater community's aversion to fixing it that makes using any -nix system absolute pain sometimes. Windowed user environments won and continue to win for a (myriad of) reason(s), tiktaalik has left the ocean, get over it. I wish the people arguing about this would join the people of roughly their mental health and mindset in using templeOS, it only has 16 colors (all you need really), it uses the holy resolution of 640x480 (can't see why you would want more than that) it's just there for programming more programs (what else is an OS for?), and virtually everything is textual (if you need anything more than this you just aren't very good with computers frankly)! Wow! Is it autism? Yes, yes it is. Even me ranting about it is. I've been around these people so long I've developed second-hand aspergers syndrome. God dammit.
(3.95 MB 3840x2160 1532385356532.jpg)

>>16779 > and am not saying ALL progress is good progress... I genuinely don't know why the *nix space is like this. Welcome to universal chaos galore, which in itself is a good thing if you exam it closely. Many competing projects to choose from and not always compatible among each other. Yeah, in a way it is just plain Balkanization of the open source landscape, BUT, here it is the beauty of it: No amount of kiked money can tell an independent coder what to do, and that in itself is Freedom. >templeOS Nah. It may look promising at first glance, but the final usefulness of a OS is given by the quantity of software available. To my knowledge, no fag has even bothered into compiling a simple text editor for it.
>>16757 >>16758 They changed the JSON so that the entire thing is now encapsulated one level down under "post". Here's a fix; hopefully I didn't miss anything. I didn't bother checking whether the other parsers need fixing.
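For anyone patching a different parser by hand, the shape change can be handled defensively. A minimal sketch in Python — note the inner field names ('id', 'tags') are made up for illustration, not the site's real schema; only the "post" wrapper comes from the post above:

```python
def unwrap_post(doc):
    """Return the post object whether or not it is nested under 'post'.

    'post' matches the API change described above; accept both shapes
    so the same code keeps working if the site reverts.
    """
    if isinstance(doc, dict) and "post" in doc:
        return doc["post"]
    return doc

# hypothetical example payloads, before and after the site's change
old_style = {"id": 123, "tags": "1girl solo"}
new_style = {"post": {"id": 123, "tags": "1girl solo"}}

# both shapes resolve to the same inner object
assert unwrap_post(old_style) == unwrap_post(new_style)
```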
(12.90 KB 150x162 cherry bug happy.png)

>>16782 Danke. I imported this one and deleted the old one and it works again. t. different anon from either of those you replied to
>>16782 thank you anon!
I had a good week. I just did simple cleaning/fixing work to round out the year. The media viewer is less jank and I simplified the archived-file delete-lock. The release should be as normal tomorrow.
After getting the PTR I noticed some... strange tags? Like the tag Miles "Tails" Prower from Sonic appearing on a fox girl image completely unrelated to the character. Can I do something about that?
>>16787 1. You probably know that you can change the tag service that is displaying the tags from 'PTR' or 'all known tags' (which contains the PTR) to 'my tags' (or whatever your own service is named). You can also create more tag services if you please. See the button in the pic at the bottom right where it says 'PTR' (or maybe 'all known tags' for you); click it to change. 2. In the manage tags dialog of that file (right-click -> manage -> tags), you have a new 'PTR' tab at the very top next to your own tag service, which shows only the PTR tags for this file on the right side. If you 'delete' one that you think is wrong, you are asked to pick one of four reasons why, or you can enter a reason yourself. A janitor will review your petition and decide whether it gets deleted. Not sure how fast that happens; I've never done it. You shouldn't expect to like every single PTR tag anyway. There won't be perfection. Use your own tag service/services if you want that.
https://www.youtube.com/watch?v=yITr127KZtQ

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v603/Hydrus.Network.603.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v603/Hydrus.Network.603.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v603/Hydrus.Network.603.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v603/Hydrus.Network.603.-.Linux.-.Executable.tar.zst

I had a good week mostly cleaning and fixing things to round out the year. There is an important change to the 'archived-file delete-lock'.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

archived-file delete-lock

Under options->files and trash, there is an option to stop files from being deleted if they are archived. I never liked how I implemented this feature, since it tried to block both physical deletes and normal trashing. It interacts with some complicated service logic, and this has only grown worse with multiple local file services. Some users have asked me to write various exceptions, things like 'allow a delete if it comes from the duplicate filter', or just better multi-service handling.

I had a proper look at this whole system this week, and I concluded that I have been trying to support something too complicated and should scale it back. Therefore: the archived-file delete-lock, from now on, will only stop physical deletes (i.e. deleting from the trash). This is a much simpler problem to solve, does not impinge on multiple local file services, and still serves the main objective of being a backstop against accidental deletes of good files. I have tightened up all the related code here, and I am much more confident in the lock overall. Some changes to trashing logic that I talk about in the next section will cause the lock to be tested less frequently, too. 
Also, the normal 'delete files' dialog now automatically filters locked files out of the 'delete physically' options. I hope you will see fewer popups saying 'oh, I just tried to delete some locked files, I don't know what to do, but here they are, now you fix it'.

I regret switching up the workflow for people who use this option, since I know you care about it. I should never have tried to implement so complicated a system in the first place. Please have a play with the new rule, and let me know how it goes. I expect we'll want some nice clean way to purge locked files, for when you want to clear your trash of archived stuff you duplicate-filtered etc., but there's only one simple barrier to get around now, so I feel better about approaching it. Let's see what the biggest annoyances are, and I'll keep working.

highlights

I have fixed a bunch of jank UI in the media viewer. I put time into the top-right ratings/locations hover window and the center-right notes hover window, and they seem to have instant, correct sizing on my dev machine in all cases now. No more jitter where a window might resize to be three pixels taller right after showing, and no more layout/position problems when changing media. I also fixed a bunch of jank with the volume slider flyouts. Let me know if you still have any problems, particularly on Linux or macOS!

The program is now more careful about how it handles trashed files. Previously, a bunch of filtering systems, when asked to generically 'delete a file', would send a normal file to the trash, but if they happened to run into an already-trashed file, they would 'upgrade' the command and send the file to be physically deleted. This no longer happens, whether you are in the archive/delete filter, the duplicates filter, or an export folder/manual export with 'delete files after export' set--it just leaves the file in the trash. There needn't be a rush to clear the trash. 
In a related change, the duplicates filter page will no longer accept a file domain that includes the trash--it'll want to stay inside 'all my files'.

next week

This is my last release of the year! It wouldn't be, but the schedule is odd this year and I won't put a release out on Christmas Day. I'll do a little misc hydrus work here and there, make sure I get a week of vacation in the middle of it, and be back to catch up on messages on Saturday January 4th, with 604 on January 8th. Thank you everyone, and 𝔐𝔒𝔯𝔯𝔢 β„­π”₯𝔯𝔦𝔰𝔱π”ͺπ”žπ”°!
>>16790 >Locked from deletion if archived I've never encountered this. Is this something you have to go out of your way to enable? Meri Kurisumesu!
>>16791 Yeah it is a checkbox under options->files and trash, default off.
If anyone is struggling with not getting audio from MPV within Hydrus on Kubuntu 24.04 when running Hydrus from source, make sure that during setup you aren't using sudo to run the setup files. Follow the github guide exactly. Also, if your file path has a space in it, you'll have to edit the desktop shortcut by putting quotes around the Exec path. This is probably simple stuff for most, but I'm new to Linux.
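For reference, a hedged sketch of the shortcut fix (the path and file names here are made up): per the Desktop Entry spec, an Exec value whose path contains a space needs double quotes:

```ini
# hydrus.desktop -- hypothetical install path with a space in it
[Desktop Entry]
Type=Application
Name=Hydrus Client
Exec="/home/anon/Hydrus Network/hydrus_client.sh"
```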
This is one of the most useful and powerful tools I've used; thank you, kind git contributors. Honestly can't wait for Auto-Resolution to become a thing ^^ so I can automate some parts of the duplicate process. What does everyone else here change in their settings? I've mostly only changed the tag import settings so it doesn't let AI images get downloaded, though it took me a while to realize that I also needed to re-enable all the other tag checkboxes to import tags as normal... Also changed the theme, of course; white being default is criminal. I feel like I might be a bit too lazy to tag images myself from scratch, so I just end up using whatever tags come from the booru I import goodies from... If I want to add or remove tags, do I just migrate them from "downloader tags" to "my tags" and edit them there, or do something different? What is the "sane and normal" way to go about that?
On Hydrus 600. Recently it's started freezing up. It's using 40% of my CPU and about 700MB+ of memory. I don't have much more going on than usual, and I've given it heavier loads than this without issue. >>16794 >I've mostly ever just changed the tag import settings so it doesn't let AI images get downloaded Can I apply this en masse to all my subs? Getting AI art that isn't actually by the artists it's tagged for is a retarded degradation of what artist tags are for. >I feel like I might be a bit too lazy to tag images myself from scratch It's definitely an insurmountable wall of autism that I've been climbing for years now without end. It'll probably take me another year at least to finish.
>>16795 Yes, actually! If you open your subscription, at the bottom is "import options (some set)", then default options -> "set custom file import options..", then tag, then "set file blacklist"... you can add your no-no tags in there. OR, the more sane method perhaps: network -> downloads -> manage default import options -> <double-click whatever site you are trying to filter> -> "set custom file import options..", then tag, then "set file blacklist"... Obviously, you will have to check the actual tags the sites use for AI and whatever else. Hope this helps.
>>16796 Danke. I forgot how this worked since I never do anything but individual artist subs that grab zero tags and apply my ideal artist tag.
(2.02 MB 2628x1915 mane 6 - merry christmas.jpg)

>>16790 >and 𝔐𝔒𝔯𝔯𝔢 β„­π”₯𝔯𝔦𝔰𝔱π”ͺπ”žπ”°! Merry Christmas!
(67.87 KB 244x297 elizant swords.png)

I jumped to 602 and immediately experienced the slowdown. I cut down on tabs so I'm using half as much memory, and that didn't fix it either. I think I've identified the issue: there's consistently a slowdown whenever I play certain apngs. It's just these Bug Fables ones I stitched together with apngasm a while back. Everything else works fine, and these used to work fine too.
>>16651 What is that font?
>>16770 I know exactly what you mean by a conditional replacement, and I am afraid there is not one. The tag siblings and parents systems almost killed me; they are such a logical pain in the ass, and they so quickly get almost too complicated for me to keep up with, that I have generally decided not to try any conditional rules until, perhaps, several overhauls in the future, if and when the code is magically cleaner and simpler.

So, your main option here, I think, is to figure this out using the Client API. If you know any scripting, you'd be doing searches of "gfl -gfl2" or whatever, and then reading the tags and writing new ones according to your rules. You might do something like "gfl stechkin -gfl2 -ksenia", or whatever the logic of the final job would be, so you can more easily pick what needs the tag replaced. If you have never done any scripting before and don't know what JSON is, this might be a bit of a stretch to be the thing you learn with. That said, figuring out a simple python script is pretty easy from nothing if you are comfortable having ChatGPT walk you through it.

If scripting and the API are off the table, then I'd say replicate this situation in the client. Have a favourite search or two that run the "gfl -gfl2 -(character names you'd add in a big OR)" search(es), and then either pick through the results manually or shape the search carefully so you can just go 'ctrl+a, F3, remove tag, enter tag, ok' real quick, and do that every three months. Given you want to do multiple characters, this might be too annoying to keep up with, but perhaps it would work with one or two, or maybe playing around with this will give you a feel for what you might automate with a script later on. Let me know how you get on with this.

>>16777 >>16778 >>16779 >>16780 Sorry for all this. I've heard several reports now. 
It sounds like these window managers are going crazy over fairly normal window stuff, but if I've set a bad flag somewhere, which is entirely possible, I'm open to fixing it once it's identified. Let me know how you get on, and when I update the default Qt to 6.7, which I hope to test in January--which will also update hydrus's 'test' version of Qt to 6.8 for 'running from source' users--let me know if things change or magically get fixed.

>>16787 Some of these weird tags are a legacy of when I was the sole admin of the PTR. I accidentally briefly allowed a 'shadow->shadow the hedgehog' parent or sibling, I forget which, and this was before siblings and parents were virtualised, so it was completely impossible to undo. I think there's a bunch of 'green hat' or something for a similar reason. I'm not sure if tails got in with the same mistake, but I know there are some bullshit sonic tags because of the shadow mistake. If you see them, please do fix them in the 'manage tags' dialog. Just remove them and click an appropriate petition reason, or enter something like 'stupid sonic tag', so the janitor team can fix it.

There's also a host of plain bad mis-parses from sites that, say, suddenly decided to change their html so the 'recommended' tags section had the same header as the post tags section, and for a couple of weeks some of our downloads got bad series tags. Again, if you see them, please do just open up 'manage tags' with F3 and double-click anything that is obviously wrong. Thank you!

>>16793 Thank you--I'm not the world's expert at Linux myself, but I'll see if I can fix that shortcut thing to get quotes or whatever, and I'll update the running from source help to say 'don't do it with sudo'.
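To make the Client API suggestion earlier in this post concrete, here is a minimal, hedged sketch of just the decision logic — the character list, the exact tag names, and the choice to delete the old tag are all assumptions for illustration. The actual fetching and writing of tags would go through the Client API's file-search and tag-editing endpoints, which the API docs cover:

```python
# Pure function for the conditional retag rule, so it can be tested
# before any API wiring. GFL2_CHARACTERS is a hypothetical cast list;
# fill it in with whatever character tags actually matter to you.
GFL2_CHARACTERS = {"ksenia", "stechkin"}

def retag(tags):
    """Given a file's current tags, return (tags_to_add, tags_to_delete)."""
    tags = set(tags)
    if "gfl" in tags and "gfl2" not in tags and tags & GFL2_CHARACTERS:
        # file is tagged 'gfl' but features a GFL2-only character:
        # move it over (deleting the old tag is a judgment call)
        return {"gfl2"}, {"gfl"}
    return set(), set()

# a file tagged for a GFL2-only character gets moved over
add, delete = retag(["gfl", "stechkin", "1girl"])
assert add == {"gfl2"} and delete == {"gfl"}

# a plain GFL file is left alone
assert retag(["gfl", "1girl"]) == (set(), set())
```

The surrounding script would search "gfl -gfl2" as the post describes, call `retag` on each file's tag list, and submit the resulting add/delete actions back; keeping the rule in one small function makes it easy to extend to more characters later.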
>>16794 I'm really glad you like it! Setting up your default tag import options is important. Setting a simple blacklist in there was important to me so I don't get stuff I don't want.

If you have a powerful computer, you might like to bump the various memory values under options->speed and memory. Don't go crazy, but if you can have, say, a 50% larger image cache, then you'll handle 4k+ images a bit more smoothly. I have to be conservative with most of those sorts of settings for the defaults. The 'per-filetype handling' under options->media playback has some clever zoom preferences and drawing quality stuff once you go into each filetype. Also check out the favourite search system, the little star beside the tag autocomplete input box. It can help you keep your session small and fast, since you can close and quickly re-open your processing pages rather than keeping them always available in the background.

For the tag services, I don't know what the 'right' thing to do is, but I think I'd say keep the stuff that gets downloaded in the 'downloader tags' service. Use 'my tags' for things like 'favourite' or other super subjective stuff that you might use for personal processing, or for 'I need cool post images, so load up my "funny reaction image" files' searches. It is usually healthy to keep tags of different broad strokes separate, in separate services. Blending them together can make some workflows simpler, but it can't be undone, so only do it after much thought. I personally don't do much manual tagging, but when I do, it is mostly series/character stuff, since that's what I search for. 
If you haven't encountered my 'two rules to not going crazy' yet, I wrote them up properly a little while ago here: https://hydrusnetwork.github.io/hydrus/getting_started_more_tags.html#tags_are_for_searching_not_describing

I'm not sure exactly how new you are, but if and when you feel 'confident' with the program, I'd love to hear about what has been difficult and easy to learn. Feedback from new users is always helpful so I know what I need to add to the 'getting started' help and so on.

>>16795 >>16800 EDIT: I get a crazy 'too many events queued' error in my dev log, and the UI slowdown, when I load that apng with both the new and the old mpv dll, so either this is a long-time mpv bug with apngs of that sort, or mpv is fritzing out about something and I'm not processing the event properly. Either way, this suggests it is not so much about OS or GPU drivers, but has a broader cause. I will investigate this more in the new year.

Thank you, this is interesting and also a shame. I've had another report from someone seeing videos stutter a bit with the new mpv dll, so perhaps we will be rolling back after all. Can you please run a test for me? This is the process:

- shut the client down
- extract a new mpv dll to the hydrus install dir, replacing the existing one
- boot the client, check some videos that were bad before

Please start with this one, which is what we had until a couple of weeks ago: https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20230820-git-19384e0.7z

I think the dll will come called 'libmpv-2.dll', and you'll want to rename it to 'mpv-2.dll', replacing the existing one. 
This is the new one I currently bundle with the Windows builds, if you want to revert: https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20241020-git-37159a8.7z

And there are many, many more here if you are feeling enthusiastic and want to try another: https://sourceforge.net/projects/mpv-player-windows/files/libmpv/

You want the x86_64 release, but not the 'v3' version. Worst-case scenario, I just roll everyone back to the 2023-08-20 build in a few weeks, but if you do happen to play around with other dates, is there one that works particularly well for you? Maybe 2024-06-ish?

Also, can you tell me a bit more about your OS? I'm assuming you are on Windows here--do you have a slightly older graphics card, or maybe a very new one? Are you on Windows 10?
>>16805 >>16800 Addendum to this: my native renderer is pretty good on apngs and seems fine with this guy, so if these apngs are nuking you in particular, we can route around it just by hitting up options->media playback->add->animation: apng and then set the native viewer instead of mpv.
>>16805 >- shut the client down >- extract a new mpv dll to the hydrus install dir, replacing the existing one >- boot the client, check some videos that were bad before >Please start with this one Immediately fixes all my apngs with issues. >Are you on Windows 10? Wangblows 10, RTX 3060. It doesn't use my graphics card; it makes my CPU usage shoot to 40%. I have an i5-13600KF. Temps remain constant. >>16806 >we can route around it just by hitting up options->media playback->add->animation: apng and then set the native viewer instead of mpv. Did this and made sure to apply it to both the media viewer and previews. This works as well, in addition to getting rid of the checkerboard background for transparent files with duration in the preview and media viewers, which I'm not fond of. Would it be a bad idea to do this for animated gifs as well, to get rid of the checkerboarding?
how do I see the HTTP headers of a response? I basically want to see if my existing rate limits match well with X-Ratelimit-Used
>>16794 >This is one of the most useful and powerful tools I've used Correct! >I just end up using whatever booru I import goodies from As you generally should. I use that and the pixiv import options. I add a few of my own tags as I see fit, then ship it. And by "ship it" we get into workflow: what I do is have a browse sesh on pixiv/---booru, copy the file addresses to save them, then go to my pixiv URL import tab, process things how I want them, then right-click and archive them, which removes them from the import tab and is what the program considers "done" (you can always go back and change things, but import/archive is a handy way of tracking what you consider to-do versus done). When that is done, I go get more images! And/or shitpost on a futaba or discord, where I can tag-search my images and drag them right into a post or chat.

Some things you might want to consider as a newbie that I think help:
* Become familiar with parent and sibling tags, especially if you use pixiv, where you can use a sibling as a prescribed translation of an untranslated tag.
* Read hydrus-kun's guide for best tag practices. There are things in there like avoiding putting the series in a character's name unless you absolutely must (like one of 20 characters named "zoe" or "aeka" with no last name provided). The reason is that you want to have space in a tag to include costumes or alternate versions of a character (a la [character:belfast (shopping with the head maid)] as opposed to her default appearance), so space in the character tag is at a premium. You might not initially understand ALL the whys, but when you get into it you will.
* Figure out early if there are additional tags or types of tags you want to use, to save going back and redoing too much. I like event:event tags. 
Like [character:sekibanki, event:halloween] or [character:suika ibuki, event:the maid's chocolate hell] or [character:cirno, event:cirno day 2022]. I find this more flexible than character:character(costume)/character(event), because sometimes they aren't in costume, and character namespace is at a premium. Take sekibanki: a lot of halloween art just has her as her dullahan self but is still clearly halloween themed, with pumpkins or trick-or-treating or whatever; her character tag shouldn't be different, but there IS a contextual difference. Or an event in a game that is a named thing that happened and has unique art elements or story-beat references, or a community event like cirno day, where I want to be able to see all cirno day 2015 content with a search. You WILL figure almost all of these out after using the program and have to go back to implement them, but if you CAN think of one ahead of time, you'll do yourself a huge favour. WHEN, not IF, you don't and have to go back, just don't beat yourself up; it happens to us all.

* If it matters to you, figure out how you want to handle "rating", either as a tag or a service. By default you get booru/pixiv ratings imported and can add rating:rating to your search, but I find that three-tier safe/sensitive/explicit system too simplistic, so I use my own five-tier system, and my preferred implementation is a rating:level tag that I can search, set at the top of the tag order so it's seen at a glance. This isn't strictly better than a service, which would be services>manage services>add>local numerical rating service, where you can set up a "whatever out of whatever" system and then search images "at or below 3" or whatever level of lewd you want. I don't use that just because accidentally clicking the rating field can change the rating, and that annoys me the 3 times a year it happens and I catch it immediately, lol.

>>16803 >Sorry for all this. I've heard several reports now. 
This is (almost assuredly) NOT your fault; this has been a 15-20 year slow-moving train wreck that is only just now colliding. EVERYTHING sucks right now. Basically every linux channel is mostly wayland growing-pains drama at the moment.

>they are such a logical pain in the ass <...>
No worries, it felt like something just on the fringe of the current sibling/parent setup, as if it might be doable. If I were willing to "lose" a tag in the process, I could implement it now just using sibling/parent tagging.

>So, your main option here, I think, is to figure this out using the Client API <...>
Sounds great. I generally don't write programs from absolute scratch, but I can totally scriptmonkey. It seems pretty open-ended for whatever tag automation nonsense I'll ever need to get up to, and I could probably do a whole lot with it. I'm assuming this is handled under services, or?
Hey, how do I change the order of tags in the selection menu? I tried changing the order in the settings but when I click an image it’s still alphabetical or alphabetical by category. I really would just like booru style when browsing my own images, so author, series, character, blue tags, meta tags. Help?
AllTheFallen recently went down and came back up due to DDoS. Now all my subs are broken. Did they change something with the site itself, or are downloaders just blocked by their new DDoS protection?
(6.61 KB 512x118 Shimmie Parsers 2024-12-20.png)

(8.86 KB 512x148 vidyapics.png)

(8.82 KB 512x142 vidyapicsandshimmie.png)

Made a vidya.pics downloader & tag searcher since I couldn't find it in the repo. It relies on the shimmie parsers which I tweaked to remove underscores & convert artist: namespaces to creator:. I didn't touch the simple tags parser so that's not included. One file is just the shimmie update, and then the next doesn't have shimmie, and the last has all of it in one package. May be crap but it seems to work just fine.
>>16813 The ddos protection is pretty strong, hydownloader doesn't even work
>>16811 >I tried changing the order in the settings Which settings? I think you mean the options -> sort/collect -> 'namespace file sorting' box. This is not what you are looking for, because it is for FILE sorting, not TAG sorting: the files in the thumbnail view get sorted by clicking the sort button above the search box, then 'namespace', which is where your newly created schemes appear.

>so author, series, character, blue tags, meta tags
There isn't a way to sort that way right now. If you 'sort by tag' and 'group namespace', it gives you:

author:
character:
meta:
series:
angel
bird
cucumber

So first come the namespaced tags (colored), then the unnamespaced tags (blue). You probably also already found out what the other tag sorting options do, like 'sort by subtag' and 'no grouping', and they don't give you what you want either. Hydev talked about this somewhere in this thread or the last, and said he wanted to enable it someday. The options under 'tag representation' -> 'namespace colors' (the box at the bottom) already look like a good start for custom sorting, maybe with an 'allow custom sorting' checkbox above/below that box that lets you move those entries up and down with arrows, just like you can in the 'namespace file sorting' box.
Anyone here know how to set it so my pixel art isn't blurry? I am certain the image itself is fine. No idea what I should change in settings -> media playback -> image
>>16805 Honestly, I'm not sure how NEW I am. I first tried the software maybe 2 years back, when I built an AMD Ryzen 9 5900X and Nvidia RTX 3070 Ti PC. Not really sure what I considered difficult back then; I was pretty content figuring things out by myself and using a search engine for more information when needed. I think the biggest unknown was which download page to use. I was pretty confused at the time! While I did read somewhere that I should stay away from subscriptions when starting from scratch on an author or some other tag I might want, I still use them for just that.

I think the currently most confusing part of Hydrus for me might be the bandwidth rules. I checked out the about page on a booru I import from and learned that there is a hard limit of 1000 posts per request. No idea how to put that into practice so my subscription doesn't stop entirely because it hit a bandwidth rule. Thank you for all your hard work ^^

