/t/ - Technology

Discussion of Technology

(4.11 KB 300x100 simplebanner.png)

Hydrus Network General #9 Anonymous Board volunteer 01/03/2024 (Wed) 19:10:11 No. 14270
This is a thread for releases, bug reports, and other discussion for the hydrus network software. The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Users can choose to download and share tags through a Public Tag Repository that now has more than 2 billion tag mappings, and advanced users may set up their own repositories just for themselves and friends. Everything is free and privacy is the first concern. Releases are available for Windows, Linux, and macOS, and it is now easy to run the program straight from source. I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST. Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ . Hydrus is a powerful and complicated program, and it is not for everyone. If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/ Previous thread >>>/hydrus/20352
Edited last time by hydrus_dev on 01/20/2024 (Sat) 18:36:21.
Hydrus is the best, thanks hydev!
Sankaku is the worst, fuck that bulgarian cunt!
I have been manually tagging my files for over a year.
https://www.youtube.com/watch?v=xNKXyr6YN8Y

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v557/Hydrus.Network.557.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v557/Hydrus.Network.557.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v557/Hydrus.Network.557.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v557/Hydrus.Network.557.-.Linux.-.Executable.tar.zst

I had a good week back after the holiday. There are some bug fixes and improvements to system:hash parsing.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

Tag filters (which operate the various tag whitelists/blacklists across the program) now edit much, much faster when they are full up with stuff. You can paste 5,000 items into an empty tag filter in less than a second now, and removing one or a handful of items from a filter with thousands of items already in it is now instant. Previously, these things could take many seconds or even minutes due to inefficient overhead.

CBZs that have four or fewer pages should now be recognised correctly.

When you copy the 'system:hash' and 'system:similar to' predicates to clipboard, you now get a much longer string that includes all the hashes in the predicate. These copied strings are all parsable, meaning you can paste them into the same client elsewhere or a different client entirely and it should all just work in a complete loop. The human labels are updated to give you more information, too. There may be other predicates that are currently parsable if you type carefully but do not themselves copy a parsable string. I am not sure which they are, so if you run into one, let me know and I'll see what I can do.
If you run from source on Windows--or you'd like to--but you haven't installed the extremely convenient but bulky-to-install 'Git for Windows': I had to go through the installer again to set up my new dev machine, and I wrote out a guide for the full 12-page wizard here: https://hydrusnetwork.github.io/hydrus/running_from_source.html#core

It is actually easy to do, and almost everything can be left as default and things will be fine. In any case, Git is great and lets you update in about three seconds, so if you run from source on Windows, give it a go!

next week

My old dev machine died just before Christmas, but I bought a new one and am back to normal work. For next week, I'd like to get some sort of system:num_urls figured out.
>>14276 Thanks man! Your program is awesome!
Also, a shoutout to the people making the AI taggers! You're making it even better!
>>14276 Glad you're back anon. /)
>>14270 Not a new issue, but README.md and some image files in docs/images/ and static/ are marked executable (755 rwxr-xr-x permissions), while .sh files are not marked executable. > For next week, I'd like to get some sort of system:num_urls figured out. \o/
Is there any way to mass change my subscriptions from sankaku to gelbooru? I have 987.
(12.79 KB 1453x87 Capture.PNG)

>>14282 You can export and import subscriptions in the manage subscriptions dialog. Select a subscription and click "export" then "export as json". Open it in your text editor of choice. At the top you'll see the tag search the subscription uses. The tag search is identified by hash and by name in the data structure. See pic related. You'll have to do a find & replace to change the highlighted section. Export a subscription with a gelbooru tag search to see what it should look like. Find & replace the sankaku one with the gelbooru one. Import it again to verify it worked and is now searching on gelbooru instead. Once you know it works, select a bunch of your subs and modify them all at once. Also: Don't import/export all 987 subscriptions at once, that would probably crash hydrus or something. Some artist names may be different on gelbooru versus on sankaku, so you may have to change the query.
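The manual find & replace described above can be scripted once you know what the two serialized tag-search snippets look like. A minimal sketch, assuming you have already exported a sankaku subscription and a reference gelbooru subscription as JSON; the filenames and snippet strings below are placeholders you would copy out of your own exports:

```python
# Hypothetical sketch of the find & replace step described above.
# The export path and the search-object snippets are placeholders --
# copy the real snippets out of your own exported subscriptions.
from pathlib import Path

def swap_search(export_path, old_snippet, new_snippet, out_path):
    """Textually replace the serialized tag-search section in an
    exported subscription JSON, like a find & replace in an editor."""
    text = Path(export_path).read_text(encoding="utf-8")
    if old_snippet not in text:
        raise ValueError("old search snippet not found in export")
    Path(out_path).write_text(text.replace(old_snippet, new_snippet),
                              encoding="utf-8")
```

As the anon says: verify the result on one subscription by re-importing it before touching the rest, and don't batch all 987 at once.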
I accidentally double clicked the resize bar when resizing the preview and the whole preview vanished. How can I make the preview reappear?
Are there any issues stopping someone from making a generic *.booru.org downloader? There are many unique and specific boorus on there all using the same software, so it feels like a massive easy win. (I don't know the details of how the downloaders work, if it's a straight-forward task then I'm happy to try prototyping it, or even just a template for installing a [x].booru.org downloader)
>>14278 Seriously, I am often skeptical of ML hype but image recognition and tagging is one of the real legitimate uses of it. And with how atrocious most boorus are at tagging (with no intention of improving or even forcing people to add the most basic tags) this kind of thing will be a lifesaver.
>>14290 I'm using an older AI tagger that got taken down, but if you look back at the old thread near the end, Gardivor? took it up and has it running. I'll probably switch to him later. If you set the threshold at .10, it will label loli, which is probably the most untagged type of pic there is. I hate that stuff, so I just delete it. It's really good at recognizing it. The only problem so far is that you get a lot of false loli positives on very realistic pics or pics of real people. It sees something in them that makes it think loli. Maybe it's a hint that the AI thinks we're all children still. :p
>>14289 If the booru is indeed the same software, all you have to do is make a new URL class for that booru (and a gallery downloader) and link it to the already existing parser. Simply duplicate an existing URL class and change the domain. If you want a universal URL class, then I think that's not possible.
>>14291 Heh, it's trying to tell us something! I don't know much about loli crap either but I have gotten the impression that they made relatively more photo-realistic art than other communities because obviously real photos are illegal (let alone horrible for other reasons). If that is the case, an AI training model could end up linking the two too much and leaning into false positives.
>>14286 I will need to tweak some tags and delete thousands of dupes, but this is a godsend. Is dupeguru the best way to find dupes these days?
>>14294 I've thought about something like this myself. Using something like NoClone to just go through the image files, then repairing the Hydrus database by vacuuming. Just make sure you back up your database and all your image files before trying it. It would be interesting just to see how good Hydrus has been at avoiding dupes.
Does anyone know how to back up all the settings? Is there one config file or many, and where to find it/them? I mean primarily the settings one can find under 'file -> options', the so-called 'manage options' window. Just in case I want to have those settings on other clients that don't have the same files and databases. I don't think the hydrus website explains this.
>>14294 dupeguru is good. but hydrus finds dupes already so i don't understand?
I posted this in the previous thread right at the end but I'm reposting it since I don't want it to get lost since I think it's an actually useful feature. >>14258 Would it be possible to expose information about subscriptions through the API? My thought was for 1-click exporting/importing subscriptions to hydownloader but I bet there could be other uses for it. I briefly looked at the API docs, I can't grok them very well, but I don't think this feature currently exists.
>>14295 >just to see how good Hydrus has been at avoiding dupes. I'll let you know in some hours. Downloaded 42 GB off gelbooru so far.
>>14297 I just use a backup program named Macrium Reflect to back up all the Hydrus Database, thumbnails, and files. It works well.
>>14301 As I mentioned, I only want to back up the files that are responsible for the settings in file -> options (the 'manage options' window), and maybe other settings that can be set up elsewhere, to use in a client with other files and databases. As a general backup program, I already use FreeFileSync.
>>14293 You are stupid.
>>14304 No, he's not. It's good we have AI tagging programs to tag what uploaders have uploaded untagged. Images and videos without tags get past our blacklists. A few hours of the AI looking at your latest mass download shows you what you really downloaded. And then it's just a quick mass delete on all tags you didn't want.
>>14297 >>14302 The settings are part of the database (the 4 .db files), so you can't really back up only the settings; you'll have to back up the whole db. Though if you know SQL, you could edit the db to only have the settings and then merge that into other clients, but I don't know which tables have the settings, or whether they are spread over multiple tables and such.
>>14287 Heh, that's a good one. I don't know, but if you open a new tab, it shows up again.
>>14308 Thanks for the help! Well, seems I have to record a clip or make screenshots then to get the settings over. Not a big deal.

>>14287 Lol, first I thought "c'mon, it can't be that hard", then I tried it myself and got fucked too x) But restarting Hydrus did solve it actually. Restarting resets the size of the preview panel to its default for all tabs/pages, so if the preview panel on one tab/page is big and the panel on another is completely gone, restarting Hydrus makes them the same size/visible again.

>>14309 That helps too, but if you have a tab/page with complicated search terms and don't want to set up the search from scratch, you can just right-click the tab/page -> duplicate page. It gets the default preview panel size too. Hydev could maybe put in a setting (checkbox), if there isn't one already, which would save the state/size of the preview panel per tab, for those who want to keep it different per tab. Personally I don't think it's necessary though.

Also, double-clicking the resize bar makes the panel vanish with no way to get it back, while if you resize by left-click drag and make it so small that it vanishes, it is still possible to grab the resize bar and drag it back into view. So maybe the double-click thing is a bug?
>>14307 I imagine he's referring to the claim that lolicons are making more realistic art than others.
>>14300 This shit took forever. I feel like hydrus didn't detect a single dupe after switching from sankaku to gelbooru and just redownloaded fucking everything; got 76k dupes.
>>14313 Hydrus doesn't do anything with dupes unless they are identical files, that's on the user to decide.
>>14314 Yeah, but I wonder if it's doing it on exact identical files as well. Some files in my collection sure look like identical dupes. I would have to run NoClone or something like it on the files to be sure though.
>>14315 Don't confuse visual duplicates with mathematical duplicates. Hydrus only looks at the file's hash to determine duplicates during importing. If there is a single byte different between 2 files, even if they are otherwise exact pixel-for-pixel matches, they are NOT identical files. There are loads of reasons that can happen, most usually to do with metadata or with people "optimizing" images. You will just have to run through the duplicate filtering process. 76k dupes will probably take less time than you think if you're smart with your filtering criteria.
>>14315 Even if it's a pixel for pixel dupe, if the hash is different, Hydrus will still wait for you to do manual duplicate processing. This is because some files will have metadata. It may be possible to set it up so it autodeletes pixel dupes and always chooses the one with or without metadata, but it still needs to be run manually after you download the files.
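The hash-versus-pixels distinction the two posts above describe can be shown in a couple of lines. The byte strings here are stand-ins, not real image data; the point is that any extra byte (e.g. an appended metadata blob) changes the file hash, so hydrus sees two distinct files even when the pixels would be identical:

```python
# Why "pixel dupes" are not "identical files": hydrus dedupes at
# import time by file hash, and one differing byte changes the hash.
import hashlib

original = b"\x89PNG...image data..."      # stand-in for real file bytes
with_metadata = original + b"<exif blob>"  # same pixels, extra bytes

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(with_metadata).hexdigest()
# h1 and h2 differ, so the two files are imported as separate entries
```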
(26.91 KB 271x294 stairs.jpg)

>i cant wait to be a useless piece of shit all day and tag all these images >FUCK IM EDGING FOR HOURS ....................................
>>14281 Thank you, I will see what I can do!

>>14282 Just to add my thought here, I would either: A) create a new subscription, and then use 'copy queries' and 'paste queries' to copy the query texts from the old to the new, or B) just change the 'sankaku' to 'gelbooru' at the top of the subscription edit dialog, just under 'site and queries', and also hit 'check now' on everything. A is more work, but it would be generally safer/cleaner and will lead to nicer timings. B should 'just work', I think, but it might take a couple weeks or months to figure out how often it should be checking everything. If you have lots of separate sankaku subscriptions, not one sub with many queries inside it, I would make use of the 'merge' button in the larger 'manage subscriptions' dialog.

>>14287 >>14309 >>14310 Try pages->sidebar and preview panels->show/hide a couple times. I think that'll reset it for all pages.

>Hydev could maybe put in a setting (checkbox), if there isn't one already, which would save the state/size of the preview panel per tab, for those who want to keep it different per tab. Personally I don't think it's necessary though.

Yeah, I'd like per-page options. I'm still toying with how to do it without making a clusterfuck of certain other data stuff. This is related to the options object I'm talking about later, too, actually.

>>14289 >>14292 BTW it is a near-term goal, fingers crossed, to finally figure out multi-domain URL Classes, which will make this job one step easier. We'll probably be able to spam a thousand booru.org domains into one.

>>14297 >>14302 Sorry, there's no excellent solution to this right now. The settings are stored in a couple of different locations in the database, and parts of them are unique to each db, so are not migratable. I hope to have better options export/import in future! When I figure this out, I'll also sort out 'reset to default' tech too, which is still sorely lacking.
If you are comfortable with SQLite, then head on in to client.db and rip out the dump_type=22 item in json_dumps. That's the main options object, but as I said, injecting it into a new database without any modifications might cause an error, so don't do it on anything that isn't brand new or backed up.

>>14275 >>14318 based

>>14272 >>14277 >>14280 Thanks lads, keep on pushing.
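For anyone wanting to try the SQLite route the dev describes, here is a minimal read-only sketch. Only the table name `json_dumps` and the `dump_type=22` filter come from the post above; the other column name (`dump`) is an assumption, so inspect the schema first, and work on a copy of client.db, never the live database:

```python
# Sketch of pulling the main options object (dump_type=22) out of a
# COPY of client.db, per the dev's description. The "dump" column
# name is an assumption -- check the real schema with ".schema
# json_dumps" before relying on this.
import sqlite3

def dump_options(db_path):
    """Return the serialized options object, or None if absent."""
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(
            "SELECT dump FROM json_dumps WHERE dump_type = 22"
        ).fetchone()
        return row[0] if row else None
    finally:
        con.close()
```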
>>14317 >It may be possible to set it up so it autodeletes pixel dupes and always chooses the one with or without metadata, but it still needs to be run manually after you download the files. Yeah, this is what I want. Delete all other dupes, and then merge all the tags into the one image left.
(226.33 KB 2048x1676 pepe rubbing hands 2.jpg)

>>14319 >We'll probably be able to spam a thousand booru.org domains into one.
>>>13914 My nigger-rigged solution for tag definitions is a simple text file with one tag and its definition per line. Hopefully something like a booru wiki gets added some day, I quite like how existing booru sites can show example images and link to other similar tags.
>>14320 ...you do know that hydrus has the entire duplicates processing system, right?
>>14327 That's not automated. You have to check each and every comparison, even if they're pixel dupes.
Very glad for the new AI taggers. So much loli not being tagged now. I downloaded about 10,000 new images from Pixiv (a major culprit of no or few labels), and the AI tagger tagged over 900 of them as loli. Probably a few false positives, but 99% correct on them. So many uploaders not tagging anymore. And if you really hate some fetish or style, you don't want to be ambushed by it. Especially over 900 times! Thank you again, people who made the AI taggers! It is appreciated!!
Can we get a "delete both" action in the duplicate processor? I hate it when I get about 100 files from an artist at once and I try to do duplicate processing to weed out what I already have or what I just got a better quality version of, but it's making me make decisions on a bunch of dupes of entirely new images within the download that I don't want.
One reason I don't like relying on booru tags, AI tagging (which is trained on booru tags), or the PTR (which scrapes tags from boorus, I think), is that some big sites like danbooru and gelbooru strictly use the xgirls tag for futanari.
>>14333 Yeah, AI tagging sounds nice but until I can train something on my own collection I'm not too interested. There are several tags I use that boorus don't and vice versa, beyond what tag siblings can fix.
(4.01 KB 274x106 Screenshot.PNG)

>>14332 Press the delete key on your keyboard.
>>14334 I find that the ML (machine learning is a much better term than AI, though AI is obviously the more popular expression) tagging at least gives me generic tags for things that had no tags, or very limited tags. I'm sure that eventually there'll be more / better models to choose from. I'd love an easy way to train your own models, but you would need a pretty large dataset to do that.
>>14335 Damn, I'm retarded. Thank you. >>14336 How big must the dataset be? I have nearly 20,000 image files I've manually tagged so far.
I had an ok week. I fixed some bugs, improved some quality of life, and figured out a basic first version of 'system:number of urls'. The release should be as normal tomorrow.
What are good ways to find tags without parents (including parent tags without parents) with their quantities?
>>14154 Here is your 4chan comment parser and my parsers for subject, post number and flag. This is for 4chan thread parser -> subsidiary page parsers -> posts -> content parsers. [26, 3, [[2, [30, 7, ["comment", 18, [31, 3, [[[0, [51, 1, [0, "com", null, null, "com"]]]], 0, [84, 1, [26, 3, [[2, [55, 1, [[[9, ["<br>", "\\n"]], [9, ["</?[^>]+>", ""]], [9, ["&lt;", "<"]], [9, ["&gt;", ">"]], [9, ["&quot;", "\""]], [9, ["&amp;", "&"]], [9, ["&#039;", "'"]]], ""]]]]]]]], "comment"]]], [2, [30, 7, ["flag_name to \"meta:posted with flag\" tag", 0, [31, 3, [[[0, [51, 1, [0, "flag_name", null, null, "flag_name"]]]], 0, [84, 1, [26, 3, [[2, [55, 1, [[[2, "meta:posted with flag:"]], "Discord"]]]]]]]], null]]], [2, [30, 7, ["post number to \"meta:post:\" tag", 0, [31, 3, [[[0, [51, 1, [0, "no", null, null, "no"]]]], 0, [84, 1, [26, 3, [[2, [55, 1, [[[2, "meta:post:"]], "40643692"]]]]]]]], null]]], [2, [30, 7, ["post number to \"post #\" note", 18, [31, 3, [[[0, [51, 1, [0, "no", null, null, "no"]]]], 0, [84, 1, [26, 3, []]]]], "post #"]]], [2, [30, 7, ["subject", 18, [31, 3, [[[0, [51, 1, [0, "sub", null, null, "sub"]]]], 0, [84, 1, [26, 3, [[2, [55, 1, [[[9, ["<br>", "\\n"]], [9, ["</?[^>]+>", ""]], [9, ["&lt;", "<"]], [9, ["&gt;", ">"]], [9, ["&quot;", "\""]], [9, ["&amp;", "&"]], [9, ["&#039;", "'"]]], ""]]]]]]]], "subject"]]]]]
>>14339 and tags with 1, 2 or 3 parents.
https://www.youtube.com/watch?v=crJst7Yfzj4

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v558/Hydrus.Network.558.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v558/Hydrus.Network.558.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v558/Hydrus.Network.558.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v558/Hydrus.Network.558.-.Linux.-.Executable.tar.zst

I had an ok week. I figured out 'system:number of urls', and you can now import rtf files!

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

So, you can now search for 'num URLs'. You will find it under a new stub, 'system:urls', at the bottom of the normal system predicate list (where 'system:known urls' has also moved). This first version is simple--it counts all URLs, regardless of how important--but I can see a future version having the ability to scan by 'post URLs' or specific URL classes. Give it a go and let me know if you run into any trouble!

Also, thanks to a user, you can now import rtf files! There's no word count yet, but I think it should be doable in future.

If you select multiple tags, you can now hide or favourite them all at once.

There's a bit of workflow and presentation improvement here, too. The old 'tag suggestions' box called 'favourites' is now called 'most used'. It was stupid to have two tag systems called 'favourites', so this is now fixed. You still edit them under options->tag suggestions; they just have a more precise and less confusing name now.

If you are an advanced macOS user (i.e. you know how to build something with xcode), you might like to check out Hydra Vista, a new user-made macOS app that presents your client (via the Client API, e.g. perhaps on another computer) in a booru-like wrapper.
https://github.com/konkrotte/hydravista

next week

I would like to add timestamp editing to the Client API.
Hi everyone. I'm hoping someone can help me with making a downloader. It's an odd use case for Hydrus, but I want to see if I can use it for managing my shopping lists at Temu.com. What I'd like to do is this: for a product page, I want Hydrus to download all media (images/video) on the page, associate the product page URL with the images, and parse the page for keywords that I can use for adding tags. For example, many sellers sell the exact same items and use the same images, so I can use Hydrus's image hash checking and duplicate filtering to group the product page URLs for the same image, which I can then use to make price comparisons. The issue I'm having is that the website is heavily Javascript dependent, which I'm inexperienced in. The initial page request doesn't load/show any information and uses the scripts to make subsequent data fetches. I'm wondering if there is enough information in the HTML response to do what I want, or if there's too much Javascript for Hydrus to be realistically usable for the job. Any advice or help you can give would be awesome. Can anyone help?
New user trying to work out how to get around Hydrus, sorry for the dumb question. I'm still trying to work out the ergonomics of processing my initial import of ~20k files (and ongoing). A significant chunk of my files are booru hash names and Pixiv IDs. Is there no way to have these processed (pull tags from a booru/Pixiv) when importing them from the filesystem, only when downloading them through Hydrus? Is the expectation for users like me to rely on the PTR to save on manual tagging (or script it using the API)? I have found the file lookup script tab, but that's too involved to run on every file.
>>14344 I'm not an expert, but I mainly downloaded from two boorus which have a way for you to search images by their hash. So I made a basic script to go through each of the files in my formerly-horrendous Downloads folder, search for them on each site (it's just an API link), and if found, then use the Hydrus Client API to make Hydrus automatically download that file with its tags. That second part is just:

(shell script)
curl --header "Content-Type: application/json" \
  --request POST \
  --data "{\"url\":\"$1\"}" \
  -H "Hydrus-Client-API-Session-Key: ###########################################################" \
  -H "Hydrus-Client-API-Access-Key: ##########################################################" \
  http://127.0.0.1:45869/add_urls/add_url

So there are certainly options for automation, and with 20k files you'll definitely want to lean on that (importing and manual basic tagging on 1k was enough work for me), but best to see if anyone here has made something more robust.
>>14345 That seems feasible enough with some regex to find files that look like IDs/hashes. Was a little afraid it would be more detailed based on some of the projects I found searching for Hydrus. I'll start throwing hashes at multiple boorus, pull tags into individual services, and use the main UI to consolidate them. Cheers.
When Hydrus subscriptions sync, and you're keeping the tags from the booru you got the files from, does Hydrus also check old urls for new tags and add them to your already downloaded files? How is this sync different from the file sync? It has to be, right?
>>14344 You do have tags downloading in your settings, don't you?
>>14348 This will scrape tags from the boorus. You can also do like I and a few others do, and use an AI tagger to send tags to "my tags". I've found the AI to be pretty precise, at least for the general tags. It even picks up things like "hair bow". :p
>>14348 Correct me if I'm wrong, but that doesn't do anything if you import files from disk ("initial import").
>>14345 >search for them on each site (it's just an API link), and if found, there's no need for your script to check if the booru has your file, hydrus can do that. just have a script that can go through a folder of files and for each one go from "081b69c578de52d33a3954d86d3b442a.jpg" to "https://gelbooru.com/index.php?page=post&s=list&tags=md5:081b69c578de52d33a3954d86d3b442a" and put the results in a text file. then simply copy paste all those urls into a url downloader page. >>14346 boorus typically use md5 which is 32 characters. something like "[a-f\d]{32}" is probably good enough.
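The "folder of md5-named files to a text file of URLs" step described above is a few lines in any language. A sketch in Python, using the `[a-f\d]{32}` idea from the post and the gelbooru md5-search URL shape already shown; whether your booru uses the same URL format is something to check first:

```python
# Sketch: scan a folder for files whose names look like md5 hex
# digests and emit gelbooru md5-search URLs, ready to paste into a
# hydrus "url downloader" page. The URL template matches the example
# in the post above; other boorus may differ.
import re
from pathlib import Path

MD5_NAME = re.compile(r"^[a-f0-9]{32}$")

def booru_urls(folder):
    urls = []
    for p in sorted(Path(folder).iterdir()):
        if p.is_file() and MD5_NAME.match(p.stem.lower()):
            urls.append(
                "https://gelbooru.com/index.php"
                f"?page=post&s=list&tags=md5:{p.stem.lower()}"
            )
    return urls
```

Writing the result to a text file (`"\n".join(booru_urls(folder))`) and pasting it into the downloader page completes the workflow the anon describes.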
(52.18 KB 839x607 13-04:55:44.png)

How do I get Hydrus from source to use my qt6 theme on Linux? I have qt6ct set for dark mode but Hydrus stays in bright mode. I have been using one of the qss' but I prefer when everything follows the same theme.
>>14342 >So, you can now search for 'num URLs'. You will find it under a new stub, 'system:urls', at the bottom of the normal system predicate list (where 'system:known urls' has also moved). I don't see it, and don't recall ever seeing system:known urls. You don't have it hidden behind some setting that you always have on but is off for normal users, do you?
>>14324 >>14331 Thank you! >>14334 >>14336 >>14337 My private dream here is that as it becomes easier to train your own models for various things, we'll be able to pipe our non-tag data, like archive/delete decisions, into new models, too. I'd love to have a thing that knows what I like and dislike, even if it is only somewhat accurate, so I can spend my human time on only the most uncertain cases. There's also some fun 'probably in ten years' ideas like, if you train a model to eat an image and output 'you would 92% like this image', you can then invert it and say 'assuming 100% like, what pixels fit best?' and have it either produce novel images or modify existing images to be more to your liking. We've seen '(re)draw this image in the style of artist x' work super well, so while we aren't there yet in infrastructure terms, this whole idea generally seems feasible to me. I'll say, though, that while I understand the basic concepts here and I've dabbled in making pictures of princesses in front of castles like everyone else, I don't have the raw technical experience on the guts of making models, and as much as I have plans and notes I just don't have the time to do it, so I'll probably focus on making the Client API as open and convenient as possible, and we'll pray that an up-and-comer who is neck deep in this tech can provide the other side. >>14343 I do not work on downloaders directly much any more, but a general good trick when you have a javascript site is to load the site with your browser's developer mode on. Watch the network tab, and you'll see all the requests the js is making. Fingers crossed, there will be one request that is 'json', and it'll be something clean like site.com/v0/api/products/gubbins/search?page=0&num=50. That's the site's internal API. 
Download that file yourself and hopefully your browser will render the json beautifully and you'll be able to rip everything you need on the hydrus side by simply pointing at that URL instead and using a JSON parser. The 'API redirect' stuff in the URL Class UI helps out here, where you can say 'every time hydrus sees this html page, actually download and parse this json instead'. The imageboard parsers work this way, so check them out for an example. Unfortunately, a lot of sites don't use such nice JSON any more. The cloud-isation of sites, now a thousand different containers in some virtualised fog, means things are more machine-friendly and less simple for humans to go through. Often instead of 'search.php' or something, you'll get dynamically named '0d45gf.js' shit that hydrus just isn't clever enough to walk through.
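Once the network tab has revealed a clean JSON endpoint, the hydrus-side parsing the dev describes amounts to walking the response structure. A minimal stand-alone sketch; the key names (`items`, `image_url`) are hypothetical placeholders for whatever the real site's response uses:

```python
# Sketch of the parsing side once you've found a site's internal JSON
# API in the browser's network tab. "items" and "image_url" are
# hypothetical key names -- substitute the real ones from the
# response you captured.
import json

def extract_media_urls(raw_json, list_key="items", url_key="image_url"):
    """Pull media URLs out of a captured internal-API response body."""
    data = json.loads(raw_json)
    return [item[url_key]
            for item in data.get(list_key, [])
            if url_key in item]
```

In hydrus itself this logic lives in a JSON parser plus an 'API redirect' on the URL Class, as the post explains; the sketch just shows how little there is to do when the endpoint is clean.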
>>14347 No, it does not re-check old URLs for new tags. It only gets new stuff, and furthermore, in general, any normal URL that hydrus has seen before is never visited again (that's the quick 'already in db' results you see in a typical repeat gallery download). There are a bunch of ways to force it to recheck, although it is ideally only ever on a one-time basis. Unfortunately, this is the technical issue at the heart of why I made the PTR; there just isn't a way to do this (on our scale) without wasting a ton of bandwidth and annoying server owners. I am planning a maintenance system/workflow that will allow us to do retroactive checks, where you'll hit up known urls for new metadata, but I will be careful how I do it. If you think about a medium collection of 300,000 files, and then checking them only once a year, that's still ~1,000 page hits (and multiples more if the files have several known urls) on top of your existing download work, every day, from every hydrus user, and only ever growing, in order to capture what is usually a handful of tags here and there. The PTR has a bunch of drawbacks, but it solves the syncing problem very efficiently. Some other guy runs a new subscription on the content you already have, and his uploads to the PTR are then synced to your machine, and you don't have to do anything. >>14352 I'm sorry to say I don't think it is (easily) possible. Your system-level Qt6 themes are actually some clever C++ objects, non-portable, while hydrus's Qt6 is inside a private python environment and, afaik, can't always access them. I know some Linux users do see their system theme on boot, but if they alter anything under options->style, they lose the ability to switch back until a restart, so I think this is all based on some environment variable tech where the local python Qt is instructed to consult system-level .so files or theme directories, but I am just not expert enough to talk about it. 
>>14353 Sorry, try scrolling down, if it cuts off around 'tag as number' or similar. I think, since we added a couple, the default height setting for the autocomplete dropdown is like two short of the current system list length--I'll check this and make sure to expand it if it is.
>>14270 Can you make it possible to copy a list of tags from the 'Manage tag parents' dialog? Maybe highlight several and right click -> copy child tags or copy parent tags. As of now it seems to only copy the first child tag of the selection.
>>14355 >If you think about a medium collection of 300,000 files, and then checking them only once a year, that's still ~1,000 page hits (and multiples more if the files have several known urls) on top of your existing download work, every day, from every hydrus user, and only ever growing, in order to capture what is usually a handful of tags here and there. Can't you limit your scope so you're not looking for files missing "only a handful of tags", but instead files that are obviously under-tagged which are likely to get more tags later? Make sure it only checks files with "number of tags < x" once a year? I think your solution with the PTR is better though. >if it cuts off around 'tag as number' or similar Yep. If I change the monitor my Hydrus window is on to the smaller one, I can see the last two lines. May be an issue specific to me because having multiple different size monitors doesn't mesh well with that big qt update. Now that I know they're there though, I can just navigate to them with the arrow keys.
Trying to pull from a tumblr user using the gallery downloader, every download attempt results in an error status. "Looks like HTML -- maybe the client needs to be taught how to parse this?" here is one of the source urls: https://www.tumblr.com/blog/view/dovewingkinnie/738626839585701888 I'm trying to scrape posts from dovewingkinnie
>>14362 Yep. That looks like HTML. Reveal the code within your browser, i.e. on Firefox, it's "inspect" when you right click on the page. You'll need to write a downloader / parser for it.
>>14362 >>14363 You can find how to do so in the "help and getting started guide" under the help tab in Hydrus. I remember it's buried in there somewhere. Good luck. It's not easy.
>>14364 EDIT: I was just looking at Cuddlebear92's page. Under downloaders, there is a "Tumbler - liked - page" downloader. It might be what you're looking for. It's in .png format, so you can just import it into Hydrus.
>>14362 Don't use the hydrus tumblr downloader. It doesn't get every image for some reason. Example: https://putrydraws.tumblr.com/post/158563259427 It will only get the first 10 images even though there are 13. Use gallery-dl instead. https://github.com/mikf/gallery-dl But you may need to set up oauth for tumblr which requires an account.
Hydev, one of the sites I've been scraping periodically has been having SSL cert issues. Is there a way to bypass the error and continue the scrape, or do I just have to wait it out? >>14362 Use TumblThree, has a lot more options than both Hydrus and gallery-dl for downloading and supports logged-in only blogs.
>view file history >shows graph from db creation to today >add "system: import time: since 1 year ago" to search >still shows graph from db creation to today, but only shows stats for files that have been imported since 1 year ago Is there a way to limit the graph's X-axis itself? Once you have several years with one it gets a bit cramped. It would be nice if you could specify a start and end date to zoom in on that defaulted to 'creation' and 'now'. Also, minor bug with that graph: unchecking 'show deleted' and then modifying the search will revert to showing deleted (box still unchecked), but checking the box to show deleted will hide them again.
Something happened to gelbooru now? I got a new subscription to some artist that has 146 files, configured it to get 200 on first check, and only got 62. Just to be sure I did the same with gallery-dl and got all 146 files.
>>14370 I did a reset just to be sure and still is only downloading 62 files
>>14371 >>14370 I see the issue now and why the lack of files, it is not downloading loli stuff Apparently now you need to actually login or something?
>>14372 Login didn't fix it Hydrus can't see stuff you need to login to see now on gelbooru, for some god damn reason
>>14370 >>14371 >>14372 >>14373 You don't need to login to see loli. You just have to go to Account (even if you don't have one) -> Options -> Check "Display all side content". It's been this way for a very long time, so maybe something in the html changed recently that prevents hydrus from automatically turning on the fringe content toggle?
>>14374 Yes, because even with a login from some acc that has the "show all content" checked, it still won't download dem lolis. I guess I'm not the only one that gave up on shitkaku and started raping gelbooru servers, and they are upset.
>>14375 The problem is entirely on your end. I just set up a new downloader to test it and it still gets loli files. Perhaps you set the downloader to grab files from the wrong booru and are instead getting files from some site that bans loli? Is your gelbooru tag downloader up to date with the one available on the github list?
>>14376 >Is your gelbooru tag downloader up to date with the one available on the github list? I'm using hydrus 557, am I supposed to update downloaders too? Where is it and how do I update it? Because now that I'm checking I have been missing stuff for around 2 weeks, when I updated, mebe something broke then
>>14377 Like, the gelbooru downloader from cuddlebear github is 4 years old and this is a 2 weeks issue
Did a test with a clean instance of 557 Still getting only 62 And of course its gelbooru
>>14379 Have you made a test downloader that just grabs the first few loli files on the site yet to see if it really is loli subscriptions that're the issue for you?
>>14380 I just added some mostly loli artist and got 17/413 files
>>14344 >>14345 >>14351 There's a way to do it without scripts: https://wiki.hydrus.network/books/hydrus-manual/page/file-look-up You might want to decrease the wait time between gallery searches to something like 1 second down from 15. The manual method using links from >>14351 may be faster though, because there's no wait time. >>14370 Just add this cookie for gelbooru in network > data > review session cookies.
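If you'd rather script the cookie than click through the dialog, the Client API has a cookie-setting call. A hedged sketch — the `[name, value, domain, path, expires]` shape and the `fringeBenefits=yup` cookie are what I've seen cited for this, but double-check your client's API docs before relying on it:

```python
import json

# Hydrus Client API cookie format (verify against your API version's docs):
# [name, value, domain, path, expires]; expires=None means a session cookie.
payload = {
    "cookies": [
        ["fringeBenefits", "yup", ".gelbooru.com", "/", None],
    ]
}

# POST this as the JSON body to http://127.0.0.1:45869/manage_cookies/set_cookies
# with your access key in the Hydrus-Client-API-Access-Key header.
print(json.dumps(payload))
```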
The derpibooru parser doesn't fetch tags with data-tag-category="content-fanmade" (e.g., fanfic:anthropology), "spoiler" and "error". I am very disappointed, because I couldn't find what I downloaded using the tag. The following are content parsers for those tags. I don't know what "spoiler" and "error" are for, so they are namespaced. [26, 3, [[2, [30, 7, ["tags content-fanmade (unnamespaced)", 0, [27, 7, [[26, 3, [[2, [62, 3, [0, "span", {"data-tag-category": "content-fanmade"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "data-tag-name", [84, 1, [26, 3, []]]]], null]]], [2, [30, 7, ["tags error", 0, [27, 7, [[26, 3, [[2, [62, 3, [0, "span", {"data-tag-category": "error"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "data-tag-name", [84, 1, [26, 3, []]]]], "error"]]], [2, [30, 7, ["tags spoiler", 0, [27, 7, [[26, 3, [[2, [62, 3, [0, "span", {"data-tag-category": "spoiler"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "data-tag-name", [84, 1, [26, 3, []]]]], "spoiler"]]]]]
I had a great week. I completely overhauled how file timestamps are stored across the program, adding millisecond resolution so 'sort by import time' is more reliable, and I then added timestamp editing to the Client API. The release should be as normal tomorrow.
>>14391 RADICAL!
>>14392 Yes, my dick is radical.
So do I understand correctly that right now sankaku, hentai foundry and pixiv (explicit) are all borked and there's no way to fix any of them? sankaku with its yet another url type change that makes hydrus see links as galleries, the age old issue of PHPSESSID with HF (even when cookies are copied) and whatever's wrong with pixiv login.
>>14394 >sankaku with its yet another url type change that makes hydrus see links as galleries Just add a new url class. That's what I did. >pixiv (explicit) Just tested it right now. works for me.
https://www.youtube.com/watch?v=zBuJD-ugT_g

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v559/Hydrus.Network.559.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v559/Hydrus.Network.559.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v559/Hydrus.Network.559.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v559/Hydrus.Network.559.-.Linux.-.Executable.tar.zst

I had a great week working mostly on one thing: converting the timestamps in the program from seconds to milliseconds.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

milliseconds

tl;dr: time numbers better, you don't have to do anything

Since the program started, it has tracked the various file 'times', like import and archive time, using a system that stores time with second precision. It knows that file x was imported at 2023-12-02 14:04:51, but nothing finer. This is common in many computing applications, but sometimes you want more. Particularly, in hydrus, whenever you have a fast importer that brings in two files within the same second, the program does not know, later, which was actually imported first (e.g. when you sort by 'imported time'). This can be annoying when you import a whole chapter of a comic and don't have good page tags set up yet--random pairs of pages will be flipped whenever you next try to sort those files by 'import time'.

So, this week I converted the database to use a time system that has millisecond resolution. Any files imported from now on, or deleted, archived, modified, or 'last viewed', will now get a millisecond timestamp. You don't have to do anything, and nothing outside of the 'edit file timestamps' dialog will appear any different, but it may take a minute to update. This work was quite simple, but there was a ton of it, years and years of system build-up to go through.
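A quick toy illustration of the sort-instability problem described above (made-up records, not hydrus internals): at second precision the two timestamps tie, so the sort order between them is arbitrary; at millisecond precision it isn't.

```python
# Two comic pages imported within the same second.
files = [
    {"name": "page_02.jpg", "import_ms": 1701525891500},
    {"name": "page_01.jpg", "import_ms": 1701525891320},
]

# Second precision: both keys collapse to 1701525891, a tie - order undefined.
by_seconds = sorted(files, key=lambda f: f["import_ms"] // 1000)

# Millisecond precision: unambiguous.
by_ms = sorted(files, key=lambda f: f["import_ms"])

print([f["name"] for f in by_ms])  # ['page_01.jpg', 'page_02.jpg']
```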
Rather than make an already messy system more complicated with a rough injection, I decided to invest the time and clean everything as I moved to the new format. I've tested it back and forth, but given the number of changes, I wouldn't be surprised if there is a typo in there somewhere--if you see a file saying it was last viewed '54 years ago' or similar, let me know!

Also, the Client API can now edit timestamps, including with the new millisecond precision. If you are an API dev, check out the new call (and 'edit file times' permission) here: https://hydrusnetwork.github.io/hydrus/developer_api.html#edit_times_set_time

misc

Just a side note, I have removed the sankaku downloader defaults from the program, so new users won't see them. None of them were working well/at all, especially in recent weeks. If you want to grab from complicated sites, and there isn't a clean solution on the shared downloader repo here, https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts , I recommend going with a more robust solution like gallery-dl/hydownloader or just finding the content elsewhere.

If you ever run something like 'system:known url: regex = blah.com/(regex stuff)' and are annoyed at how slow it is, try adding a 'system:known url: domain = blah.com' pred in addition. This combination will now be explicitly much faster than just the bare regex (previously, it could be faster, by accident).

next week

Now that we have ms timestamps and nicer code, I'd like to try tackling an 'edit timestamps' dialog that works for multiple files, including a 'cascade' command to force some clever sorts.
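The 'known url' speedup mentioned above is easy to picture: the exact-domain predicate narrows the candidate set with a cheap lookup before the expensive regex runs over each url. A toy sketch of the idea, not hydrus's actual query planner:

```python
import re

known_urls = [
    "https://blah.com/post/123",
    "https://blah.com/gallery/9",
    "https://other.net/post/456",
]

pattern = re.compile(r"blah\.com/post/(\d+)")

# Fast pass: cheap domain check prunes most urls first...
candidates = [u for u in known_urls if "blah.com" in u]
# ...slow pass: the regex only runs over the survivors.
matches = [u for u in candidates if pattern.search(u)]
print(matches)  # ['https://blah.com/post/123']
```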
I was doing some work and wanted to sort files so that similar ones were next to each other. Would adding phash sorting help with this or is phash too simple for that? We already have hash sorting, so I think adding a phash option would be nice. Maybe even blurhash, since we have that now. Not sure how that one works though.
>>14397 >I was doing some work and wanted to sort files so that similar ones were next to each other. I wish. I think we're still waiting for "sort/group by alternates/duplicates".
>>14398 I think sorting by phash or blurhash would help with this a bit, unless it doesn't lol.
>>14270 Started using Hydrus 2 months ago when someone posted screenshots of it on /v/, I still have a lot of stuff to tag from my old "art inspiration" folder but it's so useful when I can type something like "pose:sitting" and get a nice assortment of references to use; or adding a couple of tags and find the exact photo I was thinking of. Related tags are a huge time saver as well, never could work that out as smoothly on other image organizers like XnView. Updating to the new v559 went smoothly, props to the dev of this great tool. There's something I'd like to ask, if I ever have to modify an entry (say cropping or rotating an image), is it automatically updated in the database or should I add the new modified image as a separate entry?
>>14400 The modified image will be considered a different image; you should re-import it and make sure to tag it. You will also probably need to manually set it as related to the non-modified version if you plan to keep the non-modified one. If you overwrite the original in the place where Hydrus stores pictures (something like hydrus/files/f00/pic.png), Hydrus will see that as the file having become damaged.
>>14401 That clears it up, thanks!
>>14402 Note that if you do modify a file and then save it over the old one, when it finally notices it's "damaged", it will remove the file from your database and put it in a special bin for broken files. Hydrus recognizes files by their hashes and automatically changes the filename to the file hash when you first import it. If the filename and the hash later don't line up, Hydrus suspects the file is damaged. Not sure what happens if you save a new file manually into the database while also manually adding its hash to the filename, but there's zero reason to do so, and it probably also similarly just ejects illegal alien files.
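Since hydrus names files after their sha256 hash, the "damaged" check above is conceptually just this (a sketch of the idea, not the real maintenance code):

```python
import hashlib

def file_looks_intact(file_bytes: bytes, filename_stem: str) -> bool:
    """Compare a file's sha256 against its hydrus-style hash filename."""
    return hashlib.sha256(file_bytes).hexdigest() == filename_stem

original = b"some image bytes"
stem = hashlib.sha256(original).hexdigest()  # the name hydrus would store it under

print(file_looks_intact(original, stem))         # True
print(file_looks_intact(b"edited bytes", stem))  # False - looks 'damaged' to hydrus
```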
>>14396 >Just a side note, I have removed the sankaku downloader defaults from the program, so new users won't see them. None of them were working well/at all, especially in recent weeks. Really? It still generally works fine for me, but I don't use it with an account. It will usually download images that don't require an account just fine. Occasionally it will download https://chan.sankakucomplex.com/redirect.png instead, but if I just wait a little bit or view the image in my browser first, that fixes it. I thought the people having issues were trying to use it to view account-only images.
Is there a solution for twitter yet?
>>14405 nitter.privacydev.net can view hidden stuff, but it's pretty slow. There is a nitter parser, but you'll have to make new url classes and gallery downloaders for this specific instance. Also I found out that the media tab doesn't show images that were posted as a reply, so it misses some stuff if you are trying to scrape whole accounts. I did some edits to the parser so it gets these images though, but it may still miss replies to other people. I could post it later when I can.
>>14406 Oh yeah, I think it also misses videos that are not "gifs", which I also fixed.
(5.68 KB 512x127 nitter.png)

>>14405 >>14406 >>14407 Here it is.
>>14394 >So do I understand correctly that right now sankaku, hentai foundry and pixiv (explicit) are all borked and there's no way to fix any of them? Pixiv recently started filtering adult content results based on the country listed in your account profile. As far as I can tell, all this does is filter out loli/shota if your country is not set to Japan. Just change your country in your profile to Japan and that may fix your issue with explicit Pixiv.
>>14395 You're the man. I never dabbled into settings that far and figured since the issue's been there for a while it was known to a dev and is unfixable. Turns out the culprit is an added /en/ in path. >>14409 I just checked and you're right, mine was set for Finland for some reason, I don't even remember changing it. Gonna test it after I'm done catching up with my sankaku subs, but the thing is on site itself I still could see explicit content, but the gallery jobs only picked up sfw art.
I have two issues. First is with identical/nigh-identical duplicates. I always want one of them gone, but for tracking purposes I don't want to delete them with "this one is better", and the "these are same quality" option only sets the relation without deleting either. I checked everywhere but couldn't find where to alter that behavior; is it possible? Second issue is with pixel dupes in particular. In these cases I always want the smallest one, and it should be a no-brainer automated job where I can hold the "this is better" hotkey and pick the first one. But the first one in the comparison isn't always the smallest one due to tag weights and whatnot. So, obviously, I tried setting the size weight to maximum. It didn't work. So I set all the other weights to 0. It still ignored the weights. For the sake of experiment I tried with regular non-pixel-dupe comparisons, and lo and behold it works as intended. But for some reason when it's a pixel dupe pair the weight is ignored. Maybe it's only when the option "must be pixel dupes" is chosen.
>>14410 When pixiv started this they said they would automatically set your country based on your location. See: https://www.pixiv.help/hc/en-us/articles/26578893289881-I-can-t-browse-certain-works-from-the-country-region-where-I-live Weird that you can view but hydrus can't though. Only other thing I can suggest is to maybe re-import your cookies for pixiv. Possible they added a region flag in there now.
>>14411 I know you can set "this is better" without deleting if you do it through gallery view by rightclicking a selection and going to manage > file relationships > set relationship and picking the "this is better" option, but that assumes you know which file is which and it can be more tedious. Another option would be using the duplicate filter and then simply undeleting the worse ones after.
>>14405 >Is there a solution for twitter yet? Sadly no, no one has managed to shut down Twitter yet.
>>14411 >but for tracking purposes I don't want to delete them What tracking do you mean? If you're worried about redownloading inferior files you already marked, Hydrus keeps track of the hashes of deleted files. I think there's even a way to see permanently deleted files, but the thumbs and images/videos/etc will be replaced with a Hydrus icon.
>>14416 >I think there's even a way to see permanently deleted files, but the thumbs and images/videos/etc will be replaced with a Hydrus icon. Yeah, but the record and the tags are still there. You would have to re-download the rest to actually get it back.
(77.22 KB 1488x294 1.JPG)

(104.81 KB 971x627 2.JPG)

> Version 554 >the client looks for the cover image in your CBZ and uses that for the thumbnail! it also uses this file's resolution as the CBZ resolution I have several images that I add to a zip archive, which I rename as .cbz. Then I import it into Hydrus, but the preview does not appear. Am I doing something wrong? Thank you!
>>14417 Yes. But if he only wants it for tracking purposes and doesn't want the actual files, then that should be fine. Which is why I'm wondering what exactly his tracking purposes are.
Doing some AI tagging and the tagger failed leading me to look at my Hydrus log file. Error says malformed jpg file. v559, 2024/01/19 09:46:00: Problem generating thumbnail for "H:\Hydrus_Files\f79\79a34f164708ed9ad7db4dbc68d2427aa36a7a687c693fb2179f39ec1412a682.jpg". v559, 2024/01/19 09:46:00: ==== Exception ==== DamagedOrUnusualFileException: Could not load the image at "H:\Hydrus_Files\f79\79a34f164708ed9ad7db4dbc68d2427aa36a7a687c693fb2179f39ec1412a682.jpg"--it was likely malformed! ==== Traceback ==== Traceback (most recent call last): File "hydrus\core\files\images\HydrusImageOpening.py", line 10, in RawOpenPILImage File "PIL\Image.py", line 3309, in open PIL.UnidentifiedImageError: cannot identify image file 'H:\\Hydrus_Files\\f79\\79a34f164708ed9ad7db4dbc68d2427aa36a7a687c693fb2179f39ec1412a682.jpg' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "hydrus\core\files\HydrusFileHandling.py", line 251, in GenerateThumbnailNumPy File "hydrus\core\files\images\HydrusImageHandling.py", line 337, in GenerateThumbnailNumPyFromStaticImagePath File "hydrus\core\files\images\HydrusImageHandling.py", line 166, in GenerateNumPyImage File "hydrus\core\files\images\HydrusImageOpening.py", line 14, in RawOpenPILImage hydrus.core.HydrusExceptions.DamagedOrUnusualFileException: Could not load the image at "H:\Hydrus_Files\f79\79a34f164708ed9ad7db4dbc68d2427aa36a7a687c693fb2179f39ec1412a682.jpg"--it was likely malformed! 
==== Stack ==== File "threading.py", line 973, in _bootstrap File "threading.py", line 1016, in _bootstrap_inner File "hydrus\core\HydrusThreading.py", line 451, in run File "hydrus\client\ClientFiles.py", line 3153, in MainLoopBackgroundWork File "hydrus\client\ClientFiles.py", line 2796, in _RunJob File "hydrus\client\ClientFiles.py", line 2573, in _RegenFileThumbnailForce File "hydrus\client\ClientFiles.py", line 1746, in RegenerateThumbnail File "hydrus\client\ClientFiles.py", line 618, in _GenerateThumbnailBytes File "hydrus\core\files\HydrusFileHandling.py", line 50, in GenerateThumbnailBytes File "hydrus\core\files\HydrusFileHandling.py", line 255, in GenerateThumbnailNumPy File "hydrus\core\files\HydrusFileHandling.py", line 66, in PrintMoreThumbErrorInfo File "hydrus\core\HydrusData.py", line 932, in PrintException File "hydrus\core\HydrusData.py", line 963, in PrintExceptionTuple ===== End ===== Looking for the file, it does not appear to exist at H:\Hydrus_Files\f79 path. I have a handful of similar errors for other files as well. Not sure what the best path forward is here.
>>14415 I hate Twitter too but unfortunately it's the only place artists I like post.
>>14356 Thanks, yeah, I'll see what I can do.

>>14357
>Can't you limit your scope so you're not looking for files missing "only a handful of tags", but instead files that are obviously under-tagged which are likely to get more tags later? Make sure it only checks files with "number of tags < x" once a year?
Yeah, I think that's basically going to be it. When I finally make this system, I'll guide users to maximising the efficiency with pre-built queries they can easily click, just like this. You'll be able to queue up whatever you want and set it to work whatever speed you want, but I'll have some fairly rigorous training wheels in place to start with so new/ESL/penguin-of-doom users don't run into too much trouble by accident.

>>14368 I heard about this ssl stuff from another user. In that case, it was one particular CDN of a wider site, I guess that one server had a newer ssl version or a different security policy or something. I don't know what the key problem/answer is here, but I'm pretty sure that it is because ssl is pretty 'core' to the python version you have, and the built releases are still on Python 3.10, which is getting a little old now. I am hoping to have a 'future' test build on 3.11 in the next few weeks. I tried to do it last week but multiple things exploded so I need to alter the build scripts a little bit. Best answer, I think, is just to wait for that, and let me know how it goes. As a side thing, I'm also planning to move from python's 'requests' library to 'httpx', ideally within the next six months, so we can move from HTTP 1.1 to 2.0. I feel like I blinked and five sorts of webshit moved on (in truth it has been ten years, ha ha ha).

>>14369 Yeah, sorry, I'd like to add custom axes and stuff to it, but much of it is still mostly 'ah I wonder if this works' v0.1 code. I'll fix the deleted box, thanks!

>>14386 Thank you, I will add this!
(293.21 KB 600x400 data4.png)

>>14397
>or is phash too simple for that
Unfortunately, the problem is the other way around. phash comparisons are very complicated, and there is no simple way to collapse their differences down to A-Z, or smallest-to-biggest, or whatever-to-whatever. I think, when I eventually get around to the next duplicate files overhaul, that I'll try to add grouping or even a pseudo-similarity-sort that tries to isolate and group 'kings' with the most similar files or something and then sorts those groups by individual hamming distance, but I have to think about it more. We could sort phashes by hex, like the current 'sort by hash', but if any of the differences between files are within the first, say, eight bits, then the sort for that group of similar files is going to be whack. Tricky problem all around. EDIT: I have an idea and will try something. If you are interested, the way phashes are stored and searched in the database is a VPTree. This was one of the coolest things I worked on during all of hydrus. I always like to post this image with the discussion, since it visualises probably one of the 'simplest' ways of 'sorting'/ordering this data. https://en.wikipedia.org/wiki/Vantage-point_tree

>>14400 Thanks, I'm really glad you like it. Feedback from new users is always helpful, so if there was anything in the help that you found confusing, or generally in the workflow of 'onboarding', let me know!

>>14403
>Not sure what happens if you save a new file manually into the database while also manually adding it's hash to the filename, but there's zero reason to do so, and it probably also similarly just eject illegal alien files.
This counts as an 'orphan' file in my system. There's no worries about it, but you can use the database->file maintenance->clear orphan files to scan your file folders for them and collect them in another directory. They typically happen when people restore an older database to a newer file storage, or vice versa, when recovering a backup. Or when my final file-delete code fucks up.

>>14404 Please feel free to keep using it as long as it works for you. Unfortunately you are in the minority, and sank often change their site to discourage downloading, often with large IP filters or browser-detection code and similar. I can't officially keep up any more.
Edited last time by hydrus_dev on 01/20/2024 (Sat) 21:51:31.
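For the curious, the 'distance' between two phashes mentioned above is the hamming distance — the count of differing bits — which is exactly why they are easy to compare pairwise but have no natural linear sort order:

```python
def hamming(phash_a: int, phash_b: int) -> int:
    """Number of differing bits between two perceptual hashes."""
    return bin(phash_a ^ phash_b).count("1")

# Toy 16-bit hashes for readability; hydrus's phashes are 64-bit.
a = 0b1011001110001111
b = 0b1011001010001101

print(hamming(a, b))            # 2  - very similar images
print(hamming(a, ~a & 0xFFFF))  # 16 - maximally different (over 16 bits)
```

A VP-tree works by partitioning points on exactly this metric, which lets the database find all hashes within some distance without comparing against everything.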
>>14411 1) Not sure if this helps your situation, but: This is a dirty secret of the duplicates system, but 'these are same quality' sets the same database relationship as 'this one is better', simply setting A>B, I think. On a technical level, my groups always need a 'king', so 'these are same quality' will bundle all the members of B's group under A. The main difference between these two settings is, as you have seen, the availability of delete, and the different metadata merge options. I think I've misunderstood the tracking you want, so if this is for something more complicated like assigning a particular delete 'reason' or something, please let me know.

2) Absolutely, we need some semi-automation and full-automation and better weight/scoring rules in the dupe filter. I hate the current hardcoded mess and I regret that I keep putting this big automation overhaul off, since it'll save everyone a lot of boring/simple work. Please keep reminding me!

>>14420 tl;dr: right-click the thumb->manage->force filetype->set as cbz

A core tenet of hydrus (currently) is that hydrus should be able to determine the type of a file regardless of its file extension. There are a bunch of technical reasons for this atm. So, if you rename a file, hydrus doesn't care. So, to determine the difference between a zip that happens to include some jpegs and a cbz, I wrote a custom scanner that has a bunch of rules. It looks for things like 'do the images seem to have common prefixes, and page numbers?' and 'does it have a random brush file?' and stuff, to discount various tool zips you might see on Deviant Art. The current code is here, if you are interested: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/core/files/HydrusArchiveHandling.py#L131 I've updated it several times in the past couple of months, and I'm broadly happy that we are capturing almost all legit typical cbz files you'd see on any comic/manga site.
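A toy version of that kind of heuristic, to show the flavour of the rules — this is a deliberately crude sketch with made-up thresholds, not the real hydrus scanner (which is linked above and far more thorough):

```python
import os
import re

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def looks_like_cbz(names: list[str]) -> bool:
    """Crude guess: archive is mostly images, and the images look page-numbered."""
    images = [n for n in names if os.path.splitext(n)[1].lower() in IMAGE_EXTS]
    if len(images) < 2 or len(images) < 0.9 * len(names):
        return False  # mostly non-image content, e.g. a brush/tool zip
    # do most image names contain a page-number-like run of digits?
    numbered = [n for n in images if re.search(r"\d{2,}", n)]
    return len(numbered) >= 0.9 * len(images)

print(looks_like_cbz(["page_01.jpg", "page_02.jpg", "page_03.jpg"]))  # True
print(looks_like_cbz(["brush.abr", "readme.txt", "art.png"]))         # False
```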
For weird cases, use the new 'force filetype' command (which I actually wrote specifically for cbzs!). Let me know if you run into any trouble.

>>14423 Thank you, this is useful. It looks like that error is wrong, if it should be saying '404 file not found' etc... I expect that API call is working around the file load routine in an odd way, and I'll look at it. EDIT: I actually did just look at it now, and this thing should have given a 404 if the file was missing. Are you absolutely sure the file is missing, this isn't some weird Windows file-sort thing that is putting the file in an odd place because it is confused about '79' vs its neighbours '791' etc...? What if you paste 'system:hash=79a34f164708ed9ad7db4dbc68d2427aa36a7a687c693fb2179f39ec1412a682' into a normal search page?

If the file is truly missing, check out the 'help my media files are broke.txt' file in 'install_dir/db'. If the files do exist but they are busted, there aren't excellent solutions. You can send it to me, and I'll check it out. Some older versions of our image library were happy loading truncated stuff, but the newer version isn't (and I turned on some safety code). Just finding the file with 'system:hash=79a34f164708ed9ad7db4dbc68d2427aa36a7a687c693fb2179f39ec1412a682' and then deleting the record is probably the best answer. Or, if it seems to be truly completely whack-broken (e.g. if you inspect it with a hex editor, its data is now all zeroes; something that never would have imported ok to hydrus in the past, so the content must have changed since then), it may be it has been damaged by HDD failure, and then we are talking a different kettle of fish. In this case, you will want to right-click it and hit manage->maintenance->'if file is missing/incorrect and has URL then try to redownload'. That may fix you up.
If you are confident you have had hdd damage, and this is not just some weird booru file that happened to slip in one day, then you'll want to review the 'help my db is broke.txt' document as well, which has background reading on how to check everything else is ok. Let me know how you get on, regardless.
>>14428 Isn't that one of Hunter's paintings?
ah shit bros, here we go again I stupidly did a "git pull" while hydrus was still running and now hydrus won't start. Here's the crash when running in terminal (I don't even get a window to open when trying to run normally) Traceback (most recent call last): File "PATH/hydrus/hydrus_client_boot.py", line 19, in <module> from hydrus.core import HydrusBoot File "PATH/hydrus/core/HydrusBoot.py", line 3, in <module> from hydrus.core import HydrusConstants as HC File "PATH/hydrus/core/HydrusConstants.py", line 6, in <module> import yaml ModuleNotFoundError: No module named 'yaml' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "PATH/hydrus/hydrus_client_boot.py", line 178, in <module> from qtpy import QtWidgets ModuleNotFoundError: No module named 'qtpy' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "PATH/hydrus_client.py", line 7, in <module> from hydrus import hydrus_client_boot File "PATH/hydrus/hydrus_client_boot.py", line 188, in <module> HydrusData.DebugPrint( 'Could not start up Qt to show the error visually!' ) ^^^^^^^^^^ NameError: name 'HydrusData' is not defined ~partialPATH
>>14431 Looks like the venv got messed up, try reinstalling it
I'm still learning how to use hydrus and i'm not a coder/programmer by any means. Is there a way to have hydrus sort through the images i have? I noticed a lot of duplicates because some of these images all have different file sizes but can i use hydrus to find and replace the inferior versions automatically?
>>14433 Hydrus doesn't know what's "inferior". Just because an image is larger in filesize or resolution doesn't mean it's better. It can have worse artifacting, or not actually be a duplicate, but an alternate from a set of images. It may be possible to automate in the case of perfect pixel dupes though, to always grab the smaller one if you don't care about metadata. Anything with the exact same hash is automatically combined, so you don't have to worry about that.
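To illustrate that last point: identical bytes hash identically, which is why hydrus can combine exact duplicates automatically without any judgment calls. A rough sketch of the principle (hydrus uses SHA-256 for file identity; its real dedupe machinery is much more involved):

```python
# Sketch: group files by SHA-256 of their raw bytes; any group with more than
# one member is a set of exact byte-for-byte duplicates.
import hashlib

def sha256_hex(data):
    return hashlib.sha256(data).hexdigest()

def group_exact_dupes(files):
    # files: dict of filename -> raw bytes; returns lists of names sharing a hash
    groups = {}
    for name, data in files.items():
        groups.setdefault(sha256_hex(data), []).append(name)
    return [names for names in groups.values() if len(names) > 1]

print(group_exact_dupes({"a.jpg": b"same bytes", "b.jpg": b"same bytes", "c.jpg": b"other"}))
```

Near-duplicates (resizes, re-encodes) have different hashes, which is what the duplicates filter and its similarity search are for.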
>>14429 I think I may have confused myself and the issue, apologies. I may have gone through and deleted a few images in Hydrus that threw an error, before I went into the log and posted that excerpt.

Here's another instance of the issue. This file does exist when I search for it, but does not open properly. Something's broken with it.

v559, 2024/01/19 09:50:24: ==== Exception ====
DamagedOrUnusualFileException: Could not load the image at "H:\Hydrus_Files\fcc\ccd90fdb2a7eb885c9824e4e6a6397c05fca5ecef1b44378b8aa098a55fc6b23.jpg"--it was likely malformed!
==== Traceback ====
Traceback (most recent call last):
  File "hydrus\core\files\images\HydrusImageOpening.py", line 10, in RawOpenPILImage
  File "PIL\Image.py", line 3309, in open
PIL.UnidentifiedImageError: cannot identify image file 'H:\\Hydrus_Files\\fcc\\ccd90fdb2a7eb885c9824e4e6a6397c05fca5ecef1b44378b8aa098a55fc6b23.jpg'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "hydrus\client\ClientRendering.py", line 249, in _Initialise
  File "hydrus\client\ClientImageHandling.py", line 33, in GenerateNumPyImage
  File "hydrus\core\files\images\HydrusImageHandling.py", line 166, in GenerateNumPyImage
  File "hydrus\core\files\images\HydrusImageOpening.py", line 14, in RawOpenPILImage
hydrus.core.HydrusExceptions.DamagedOrUnusualFileException: Could not load the image at "H:\Hydrus_Files\fcc\ccd90fdb2a7eb885c9824e4e6a6397c05fca5ecef1b44378b8aa098a55fc6b23.jpg"--it was likely malformed!
==== Stack ====
  File "threading.py", line 973, in _bootstrap
  File "threading.py", line 1016, in _bootstrap_inner
  File "hydrus\core\HydrusThreading.py", line 451, in run
  File "hydrus\client\ClientRendering.py", line 259, in _Initialise
  File "hydrus\core\HydrusData.py", line 932, in PrintException
  File "hydrus\core\HydrusData.py", line 963, in PrintExceptionTuple
===== End =====

For this file, Hydrus displays a screenshot, but when I try to open the file, I get an error in Hydrus. And the file won't open from the file system in IrfanView. However, when I look at the file in mediainfo, it turns out to be a webm. Not sure why this was saved as a jpg but is really a webm. That seems odd.

Complete name : H:\Hydrus_Files\fcc\ccd90fdb2a7eb885c9824e4e6a6397c05fca5ecef1b44378b8aa098a55fc6b23.jpg
Format : WebM
Format version : Version 2
File size : 2.96 MiB
Duration : 15 s 140 ms
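The mislabel is easy to confirm from the first few bytes of the file, which is essentially what mediainfo is doing: the extension lies, the magic number doesn't. A small sketch (feed it e.g. `open(path, 'rb').read(8)`):

```python
# Sketch: sniff the real filetype from leading magic bytes, ignoring the
# extension. WebM/MKV containers start with the EBML magic 1A 45 DF A3,
# JPEGs with FF D8 FF.
def sniff_filetype(header: bytes) -> str:
    if header.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if header.startswith(b"\x1a\x45\xdf\xa3"):
        return "webm/mkv (EBML)"
    if header.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if header.startswith((b"GIF87a", b"GIF89a")):
        return "gif"
    return "unknown"
```

Hydrus's 'force filetype' command exists for exactly this kind of file, where the extension and the actual container disagree.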
>>14434 Ok thanks for the info. I can sort them myself i just didn't want to put the time in if there was an easier method out there.
>>14432 Goddamnit, I can't even resolve some python dependency problem on my own :/

PATH /home/wir/Projects/Creativity/Collection/hydrus

[hydrus ASCII-art banner]

hydrus

If you do not know what this is, check the 'running from source' help. Hit Enter to start.
--------
Users on older OSes or Python >=3.11 need the advanced install.
Your Python version is:
Python 3.12.1
Do you want the (s)imple or (a)dvanced install?
--------
Creating new venv...
Requirement already satisfied: pip in ./venv/lib64/python3.12/site-packages (23.2.1)
Collecting pip
  Obtaining dependency information for pip from https://files.pythonhosted.org/packages/15/aa/3f4c7bcee2057a76562a5b33ecbd199be08cdb4443a02e26bd2c3cf6fc39/pip-23.3.2-py3-none-any.whl.metadata
  Using cached pip-23.3.2-py3-none-any.whl.metadata (3.5 kB)
  Using cached pip-23.3.2-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 23.2.1
    Uninstalling pip-23.2.1:
      Successfully uninstalled pip-23.2.1
Successfully installed pip-23.3.2
Collecting wheel
  Using cached wheel-0.42.0-py3-none-any.whl.metadata (2.2 kB)
  Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
Installing collected packages: wheel
Successfully installed wheel-0.42.0
Collecting cbor2 (from -r requirements.txt (line 1))
  Using cached cbor2-5.6.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.0 kB)
Collecting cryptography (from -r requirements.txt (line 2))
  Using cached cryptography-41.0.7-cp37-abi3-manylinux_2_28_x86_64.whl.metadata (5.2 kB)
Collecting dateparser (from -r requirements.txt (line 3))
  Using cached dateparser-1.2.0-py2.py3-none-any.whl.metadata (28 kB)
Collecting pympler (from -r requirements.txt (line 4))
  Using cached Pympler-1.0.1-py3-none-any.whl (164 kB)
Collecting python-dateutil (from -r requirements.txt (line 5))
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting beautifulsoup4>=4.0.0 (from -r requirements.txt (line 7))
  Using cached beautifulsoup4-4.12.3-py3-none-any.whl.metadata (3.8 kB)
Collecting chardet>=3.0.4 (from -r requirements.txt (line 8))
  Using cached chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)
Collecting cloudscraper>=1.2.33 (from -r requirements.txt (line 9))
  Using cached cloudscraper-1.2.71-py2.py3-none-any.whl (99 kB)
Collecting html5lib>=1.0.1 (from -r requirements.txt (line 10))
  Using cached html5lib-1.1-py2.py3-none-any.whl (112 kB)
Collecting lxml>=4.5.0 (from -r requirements.txt (line 11))
  Using cached lxml-5.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.5 kB)
Collecting lz4>=3.0.0 (from -r requirements.txt (line 12))
  Using cached lz4-4.3.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.7 kB)
Collecting numpy>=1.16.0 (from -r requirements.txt (line 13))
  Using cached numpy-1.26.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting psd-tools>=1.9.28 (from -r requirements.txt (line 14))
  Using cached psd_tools-1.9.30-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.7 kB)
Collecting Pillow>=10.0.1 (from -r requirements.txt (line 15))
  Using cached pillow-10.2.0-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (9.7 kB)
Collecting pillow-heif>=0.12.0 (from -r requirements.txt (line 16))
  Using cached pillow_heif-0.14.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.3 kB)
Collecting psutil>=5.0.0 (from -r requirements.txt (line 17))
  Using cached psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (21 kB)
Collecting pyOpenSSL>=19.1.0 (from -r requirements.txt (line 18))
  Using cached pyOpenSSL-23.3.0-py3-none-any.whl.metadata (12 kB)
Collecting PySocks>=1.7.0 (from -r requirements.txt (line 19))
  Using cached PySocks-1.7.1-py3-none-any.whl (16 kB)
Collecting PyYAML>=5.0.0 (from -r requirements.txt (line 20))
  Using cached PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting Send2Trash>=1.5.0 (from -r requirements.txt (line 21))
  Using cached Send2Trash-1.8.2-py3-none-any.whl (18 kB)
Collecting service-identity>=18.1.0 (from -r requirements.txt (line 22))
  Using cached service_identity-24.1.0-py3-none-any.whl.metadata (4.8 kB)
Collecting Twisted>=20.3.0 (from -r requirements.txt (line 23))
  Using cached twisted-23.10.0-py3-none-any.whl.metadata (9.5 kB)
Collecting opencv-python-headless==4.7.0.72 (from -r requirements.txt (line 25))
  Using cached opencv_python_headless-4.7.0.72-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49.2 MB)
Collecting python-mpv==1.0.3 (from -r requirements.txt (line 26))
  Using cached python_mpv-1.0.3-py3-none-any.whl (44 kB)
Collecting requests==2.31.0 (from -r requirements.txt (line 27))
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting QtPy==2.3.1 (from -r requirements.txt (line 29))
  Using cached QtPy-2.3.1-py3-none-any.whl (84 kB)
ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11; 6.0.0 Requires-Python >=3.6, <3.10; 6.0.0a1.dev1606911628 Requires-Python >=3.6, <3.10; 6.0.1 Requires-Python >=3.6, <3.10; 6.0.2 Requires-Python >=3.6, <3.10; 6.0.3 Requires-Python >=3.6, <3.10; 6.0.4 Requires-Python >=3.6, <3.10; 6.1.0 Requires-Python >=3.6, <3.10; 6.1.1 Requires-Python >=3.6, <3.10; 6.1.2 Requires-Python >=3.6, <3.10; 6.1.3 Requires-Python >=3.6, <3.10; 6.2.0 Requires-Python >=3.6, <3.11; 6.2.1 Requires-Python >=3.6, <3.11; 6.2.2 Requires-Python >=3.6, <3.11; 6.2.2.1 Requires-Python >=3.6, <3.11; 6.2.3 Requires-Python >=3.6, <3.11; 6.2.4 Requires-Python >=3.6, <3.11; 6.3.0 Requires-Python <3.11,>=3.6; 6.3.1 Requires-Python <3.11,>=3.6; 6.3.2 Requires-Python <3.11,>=3.6; 6.4.0 Requires-Python <3.11,>=3.6; 6.4.0.1 Requires-Python <3.12,>=3.7; 6.4.1 Requires-Python <3.12,>=3.7; 6.4.2 Requires-Python <3.12,>=3.7; 6.4.3 Requires-Python <3.12,>=3.7; 6.5.0 Requires-Python <3.12,>=3.7; 6.5.1 Requires-Python <3.12,>=3.7; 6.5.1.1 Requires-Python <3.12,>=3.7; 6.5.2 Requires-Python <3.12,>=3.7; 6.5.3 Requires-Python <3.12,>=3.7
ERROR: Could not find a version that satisfies the requirement PySide6==6.5.2 (from versions: 6.6.0, 6.6.1)
ERROR: No matching distribution found for PySide6==6.5.2
--------
Done!
PATH
>>14438 Python 3.12 is the problem here; the cleverer libraries like Qt (PySide6) have very little support for it yet. You may be in luck though, somewhat--I saw that the very new PySide6 6.6.1 just added Python 3.12 support, although how good that is I am not sure: https://code.qt.io/cgit/pyside/pyside-setup.git/tree/doc/changelogs/changes-6.6.1

So, to get working in your situation, try again, selecting 'a' for advanced and 't' for 'test' everything, except for the Qt question: there, write 'w' for write your own, and enter '6.6.1'. For QtPy, enter '2.4.1'.

WARNING: I tried this the other day and noticed that in 6.6.1, there's a bug in my multi-column list system that makes it unable to size any column widths. They will all be ugly wide. I mean to fix this soon, but I haven't put time into it yet.

Ideally, I think your better solution here is to figure out how to deploy this venv with Python 3.11, if possible. 3.12 is still pretty new, I'm afraid. Maybe you can copy/edit the 'setup_venv' script to say 'py311 -m venv venv' instead of 'python -m venv venv' at the critical line where it creates the venv. I guess it depends on what symlink you have 3.11 set as in your PATH, if any. I am not sure what Linux usually symlinks any older version to, but if it is there, I think it'll be something like that.

Let me know how you get on--I'll see about integrating this as a selectable option in the setup venv thing.
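For anyone puzzled by the wall of 'Requires-Python' lines in that pip error: each wheel declares an interpreter range, and pip simply discards every version whose range excludes your interpreter. A trivial sketch of the check, using the range PySide6 6.5.2 declared in the output above:

```python
# Sketch: a Requires-Python specifier like ">=3.7,<3.12" is just an interval
# test on the interpreter version (lower bound inclusive, upper exclusive).
def satisfies(py, lo, hi):
    return lo <= py < hi

# PySide6 6.5.2 declares >=3.7,<3.12 per the pip output above
print(satisfies((3, 11), (3, 7), (3, 12)))  # 3.11 would have been fine
print(satisfies((3, 12), (3, 7), (3, 12)))  # 3.12 is excluded, hence the ERROR
```

Since only 6.6.0 and 6.6.1 accept 3.12, pip cannot satisfy the pinned `PySide6==6.5.2` at all.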
Any way I can set ctrl+C in the tag/parent/sibling managers, search, autocomplete, and basically anywhere I have a tag selected, to copy the full "namespace:tag"? Right click -> copy -> namespace:tag is getting tiresome. I can't seem to find something like this anywhere in the shortcuts, and it just seems intuitive that if you can right click -> copy something you have selected, then ctrl+C ought to do the same much faster. I know there's multiple copy options, but ctrl+C should at least do the most basic one that's listed first.
>>14433 It's not automatic but you know of the duplicates system, right? https://hydrusnetwork.github.io/hydrus/duplicates.html
>>14440 Yeah, this would be very handy.
It would be nice to be able to search by domain modified time and "has/does not have domain modified time".
Hey lads, hydev here, my internet died and it could be a week before it is fixed. I don't have a good solution for my dev logins to talk to the internet, so no release or posts for a bit, will update later!
>>14444 Do those even exist? I only see one global modified time.
>>14446 They're under right-click thumbnail > manage > times
>>14447 I know, but domain times are for stuff like imported/deleted times, not modified, which there is only one. I think I'm just getting semantic here.
(143.08 KB 1280x720 1389588279876.jpg)

>>14445 Understood.
>>14445 We'll be waiting!
I'm curious how hydev feels about a verify system for the PTR: some way in which, when other users run into a hash that already exists on the PTR, they can send a note to the PTR saying "hey, this file exists in the public domain, I also have it", with the files on the PTR eventually garnering some form of verification. This is a bit of a spitball of an idea that came to mind, but it could help with cutting down on the size of the PTR.
>>14454 I'm a little confused. That doesn't sound like it'll cut down the size at all, because every hash will still have to be in the PTR for users to confirm it. And what would the point of verification even be?
>>14455 The general concept would be that after a certain period of time without confirmation, a hash would get pruned automatically; I'm not sure what that time frame would look like. The point is to remove the bloat of uploaded hashes where nobody else has the file, which I imagine users have done.
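The rule being proposed could be sketched like this. The 90-day window is invented purely for illustration, as is the idea of tracking a "last verified" timestamp per hash; nothing like this exists in the PTR today.

```python
# Sketch of the proposed pruning rule: a mapping survives only if somebody has
# sent a 'verify' for its hash within some window.
WINDOW_SECONDS = 90 * 24 * 3600  # made-up 90-day window

def should_prune(last_verified, now):
    # last_verified: unix time of the newest 'verify' note for this hash, or
    # None if nobody has ever confirmed owning the file
    return last_verified is None or now - last_verified > WINDOW_SECONDS
```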
> https://8chan.moe/hydrus/res/20352.html#21117
Added colors to some lines.

#!/usr/bin/python3
import os
import subprocess
import sys
import tempfile

import pyperclip

# get the exported siblings or parents (a list of lines representing a table)
#exported = sys.stdin.readlines() # From stdin. Be careful not to paste dangerous commands into the terminal!
exported = pyperclip.paste().splitlines() # From clipboard.

# node names are enclosed in double quotes, escape them (not tested)
exported = [s.replace('"', '\\"') for s in exported]

children = exported[::2]
parents = exported[1::2]
table = zip(children, parents) # make an actual table

result = []
result.append("digraph {") # a directed graph

color_isolated = " [color=red]"
color_nonparent = " [color=blue]"
color_nonchild = " [color=orange]"
color_favnonchild = " [color=green]"

# list of correct top-level parent tags
favnonchildren = ["species:mammal"]

for child, parent in table:
    if child not in parents and parent not in children:
        # line between tags with no other parents/children
        color = color_isolated
    elif child not in parents:
        # the child tag has no children
        color = color_nonparent
    elif parent not in children:
        # the parent tag has no parents
        if parent in favnonchildren:
            color = color_favnonchild
        else:
            color = color_nonchild
    else:
        color = ""
    result.append('"{}" -> "{}"{}'.format(child.strip(), parent.strip(), color))
result.append("}")

dotinput = bytes('\n'.join(result), encoding='utf8') # sent to dot's stdin, it returns PNG data
dotoutput = subprocess.run(['dot', '-Tpng'], input=dotinput, capture_output=True)

with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as png:
    png.write(dotoutput.stdout)
    pngname = png.name

subprocess.run(['mimeopen', pngname]) # Open the viewer.
input() # The next line removes the file, so the viewer may lose it. So we wait for user's Enter.
os.remove(pngname)
>>14454 >they can send a note to the PTR saying hey, this file exists in the public domain I also have it I'd really like it if Hydrus doesn't automatically send notes to the ptr saying exactly which files each account has. If you're gonna open that can of worms, why not go all the way and only download the entries for files you have?
is there a login script somewhere for lolibooru.moe?
>>14461 I'd really like it if you had a more charitable understanding of what I posted. It would be opt-in, mouth breather, and the same as any other part of the PTR, it wouldn't keep who uploaded it.
So it seems my twitter posts just suddenly and silently started failing... I found what >>14408 posted and imported it, and it seems like I can download from there if I manually look up and change the url to privacydev, but considering I most likely have quite a large backlog of failed twitter posts that I was not aware of, is there a way I could potentially batch-automate this kinda job? To sorta "re-direct" all these failed twitter links to their privacydev equivalent?
>>14464 Select all the lines in the log with failed twitter URLs. Right click --> Copy Sources. Paste the URLs into notepad. Do a find and replace for twitter.com with nitter.privacydev.net Select all and paste it into the hydrus URL downloader. Enjoy.
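If the backlog is big, the notepad step can be scripted too; a tiny sketch of the same find-and-replace (the twitter URLs here are made up), whose output you'd paste into the hydrus URL downloader:

```python
# Sketch: rewrite copied failed-twitter URLs to the nitter mirror in one go.
failed = [
    "https://twitter.com/someuser/status/1234567890",  # example URLs, made up
    "https://twitter.com/other_user/status/42",
]
redirected = [u.replace("twitter.com", "nitter.privacydev.net", 1) for u in failed]
print("\n".join(redirected))
```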
>>14462 >lolibooru.moe >Make a booru dedicated to loli on a .moe domain >You have to login to even look at anything questionable or explicit What's even the point? ATF's booru requires no account to view anything
Is it "safe" to discuss sankaku scraping here or do their admins lurk and patch discovered exploits? Because there is still a way to get static session-independent post URLs from gallery search page.
ive followed the instructions for the tagger on github but its not mapping the tags to files. im not sure what ive done wrong, anyone know anything about this?
>>14467 >static session-independent post URLs Do you mean file urls? There's never been an issue getting post urls from searches, they're not tied to sessions.
>>14469 the Sankaku beta scraper still works, you just have to get the security header from the web page code and stick it into Hydrus. It's changed every 48 hours. URLs still only last for an hour as well. Then you have to rescrape them.
>>14470 Really though, compared to site like Pixiv, there's not much on Sankaku.
>>14468 I use an old one that was the predecessor to Garbevoir's version. Use Garbevoir's; it should work well. https://github.com/Garbevoir/wd-e621-hydrus-tagger Just follow all the instructions to make a venv, etc. I've tagged about 2.4 million images with the old one. I set the threshold to .10 and it works well. You'll get a few false positive tags, but they're usually minor things you can ignore. The tags will show up in My Tags, so you can edit them there if you wish. I haven't edited any. I'm not about to wade through 2 1/2 million pics! :p
>>14471 lol no. there's way more on sankaku
>>14463 Make your own PTR?
>>14472 Not the poster you were talking to, but am currently using Garbevoir's version for tagging now. I see the option, --threshold, what does this change in the tagging, when set to .10?
I'm constantly getting 403 when trying to download from danbooru, any way to troubleshoot it? I tried changing the UA string to my browser's and I added cookies/login too but it didn't help. I can access the site via browser just fine
>>14474 It's a good thing you don't run the PTR, and hopefully you don't procreate. My proposal would have just as much knowledge of your files as the current PTR has; I'd recommend reading the wiki.
(212.51 KB 1855x835 1.PNG)

>>14470 >Sankaku beta Why use sankaku beta? The regular chan.sankakucomplex.com website works.
>>14476 Threshold sets the amount of "confidence" the AI needs to have in a tag in order for it to actually be sent to Hydrus. Lower means more tags but less accuracy; higher means fewer tags but more accuracy. I find the default value works fine. There will always be a bit of inaccuracy even at higher threshold values, and you may find the added tags from lower values, inaccurate as they are, help in finding the file anyway.
>>14468 If you're using the .bat files, could you paste the contents of the files you're using? Without your API key, of course.
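In other words, the tagger emits per-tag confidence scores and the threshold is just a cutoff on them; a sketch with invented tags and scores:

```python
# Sketch: only tags scoring at or above the threshold get sent to hydrus.
predictions = {"1girl": 0.97, "solo": 0.91, "outdoors": 0.35, "watermark": 0.12}

def tags_to_send(preds, threshold):
    return sorted(tag for tag, score in preds.items() if score >= threshold)

print(tags_to_send(predictions, 0.35))  # stricter: fewer, safer tags
print(tags_to_send(predictions, 0.10))  # looser: more tags, more false positives
```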
>>14463 >and the same as any other part of the PTR, it wouldn't keep who uploaded it And you can verify this, how?
>>14481 The PTR already keeps hashes of every file owned, independent of user information. All this would do is add a small count of how many people have sent a 'verify' check; no personal information would need to be stored.
>>14468 >>14472 i fixed my issue, i didnt have the client api service enabled in hydrus. however i cant seem to get CUDA working on it, but im going to troubleshoot that more today
(40.97 KB 420x420 20240129_002607_SCREEN.png)

>>14439 inredastiing, seems like there's a typo in your script somewhere (?), picrel claims (= vs ==). So instead I tried modifying setup_venv.sh, but it says '$py_command -m venv venv' instead of 'python -m venv venv' at the place I think you meant. Should I instead just replace every instance of 'python' in the script with 'py311'? Have to give up for today sadly. YALL MOTHERFUCKERS READY TO GO BACK TO WORK TOMORROW?
(86.23 KB 882x356 20240129_005211_SCREEN.png)

>>14439 >>14484 Well, I thought I had it by installing the package called "python3.11" and then making sure to put "python3.11" at the right places in the script, but alas, picrel struck again and upon trying to run the hydrus client I just get the same old error about a whole bunch of modules not being found.
Almost bare-bones parsers of the description and source for derpibooru. I suggest using one of these as an example image for the parser: the files are small, they have descriptions and sources: https://www.derpibooru.org/images/2492858 (it has a blockquote) https://www.derpibooru.org/images/113706 https://www.derpibooru.org/images/1453599 [26, 3, [[2, [30, 7, ["description", 18, [27, 7, [[26, 3, [[2, [62, 3, [0, "div", {"class": "image-description"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]], [2, [62, 3, [0, "div", {"class": "block__content"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 2, "href", [84, 1, [26, 3, [[2, [55, 1, [[[9, ["<div class=\"block__content\">", ""]], [9, ["<p></p>", ""]], [9, ["<div class=\"paragraph\">(.*)</div>", "\\1"]], [9, ["</div>$", ""]]], "<div class=\"block__content\"><p><em>No description provided.</em></p></div>"]]]]]]]], "booru description"]]], [2, [30, 7, ["source", 18, [27, 7, [[26, 3, [[2, [62, 3, [0, "div", {"class": "image_source__link"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]], [2, [62, 3, [0, "a", {"class": "js-source-link"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "href", [84, 1, [26, 3, []]]]], "specified source"]]]]]
>Are you sure? >Add [tag] to the favorites list? How do I remove this dialogue that appears every time I want to add a tag to favorite? Having to confirm every time is super annoying.
>>14476 The main reason I use a threshold of .10 is that it will detect loli quite well. I can't stand the stuff, so I use it to erase files with that tag after the scrape. It may get a few false positives on realistic photos though. Other than that, .10 does a good job on the major tags. I haven't found any really off tags yet.
>>14489 A lot of loli isn't tagged these days, so if you don't like it, you'll want to go with .10 to detect it. I usually get about 500 loli tags / 10000 files. All of them were missing a loli or lolicon tag.
ok hydev, got an issue that I caused myself a long time ago. I had a small hdd, I needed to make space to buy time for a month until I could afford a bigger drive, and the files I figured I could 'delete without issue' were the tag repository update files, as they had already been processed. At some point it wanted to re-examine the old ones, saw they weren't there, and the tag repository has been more or less never updated since. However, I got larger drives and I have been trying to get it working again for a while. I get this:

-------------------------
v559, win32, frozen
Exception
An unusual error has occured during repository processing: a content update file (12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. I recommend you run _database->maintenance->clear/fix orphan file records_ too. Please then permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.

Traceback (most recent call last):
  File "hydrus\client\ClientFiles.py", line 1599, in GetFilePath
    def init( self, controller ):
  File "hydrus\client\ClientFiles.py", line 1044, in _LookForFilePath
hydrus.core.HydrusExceptions.FileMissingException: File for 12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009 not found!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\client\ClientServices.py", line 2195, in _SyncProcessUpdates
    content_update = HydrusSerialisable.CreateFromNetworkBytes( update_network_bytes )
  File "hydrus\client\ClientFiles.py", line 1603, in GetFilePath
    self._pubbed_message_about_bad_file_record_delete = False
hydrus.core.HydrusExceptions.FileMissingException: No file found at path I:\Hydrus\f12\12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\client\ClientServices.py", line 2201, in _SyncProcessUpdates
    raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
Exception: An unusual error has occured during repository processing: a content update file (12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. I recommend you run _database->maintenance->clear/fix orphan file records_ too. Please then permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.
-----------------

so I do the maintenance, I clear the orphans, but the repository sync never readjusts itself: it never resets, it never tries to redownload the old files. It's essentially a feedback loop, so I'm wondering if there is something wrong on my end. Did I do something wrong in resetting it, or did I do something so stupid that the un-fucking-yourself method has never been tested and it's just not working correctly?
Hey hydev, >>>/hydrus/21085 again, the issue's happening again. Using extremely scientific and rigorous testing (spam-opening the same video over and over & spam-paging through a list of videos), it seems to be every other video I open that doesn't work, even if it's the same video opened twice in a row (e.g. open vid, blank, close, open vid, plays, close, open vid, blank, etc). Still no idea why it only works sometimes. Nothing shows up in the logs, and it isn't dependent on any work being done. It has gone away, at least temporarily, after doing a restart.

I have been running into an issue with X11 having too many clients, which restricts you from opening any more until something gets closed; one of my always-open programs (not Hydrus) misbehaves after a while and it's troublesome. However, the issue persists after fixing the misbehaving program, and nothing else has long-term aftereffects, so IDK. I'll keep an eye on Hydrus as well as I can since I'll be afk for a while.

One last thing about the windows: media_viewer was set to fullscreen & maximized. I turned it off because I didn't want that behavior anyways.
>>14477 They did something to prevent scraping. They loosened their rules a bit and I got it working not too long ago. Did they make the block more strict again?
I think I found a bug? When I open an image it starts unmaximized. If I maximize it and then close the image viewer, the media_viewer setting saves that and opens all other images maximized, despite the fact that I have save position & save size disabled. Using Linux & awesomeWM. If this is the expected behavior and it's not a bug, could there be an option to explicitly not start the viewer maximized?
>>14501 Oh, I forgot to mention that the start maximized button gets re-enabled when I close the image while it's maximized, which is what I think the bug is. That's with the save position & save size options deselected.
>>14469 >>14479 I thought there were currently two problems with sankaku: 1) hindered subscription scraping, by spamming the queue with new "page" urls that are duplicates of already downloaded file urls, and 2) some kind of rate limiting or mass download detection that makes some "page" urls download a placeholder logo instead of the real file unless opened in a browser first. Or am I misunderstanding the issues?
>>14503 >1) hindered subscription scraping by spamming queue with new post urls that are duplicates of already downloaded post urls. I solved this by just updating the gallery parser to provide old-style urls. >2) some kind of rate limiting or mass download detection that makes some post urls download a placeholder logo instead of real file unless opened in browser first. This is correct. In my limited testing it seems to mainly happen with files that have larger filesizes, something like 1 MB or more. To be clear: "gallery url" refers to a search page with multiple files, such as https://chan.sankakucomplex.com/en/posts?tags=rating%3Asafe "post url" refers to the page for a single file, such as https://chan.sankakucomplex.com/en/posts/PVaDw30G8Mb "file url" refers to the direct link to an image, such as https://s.sankakucomplex.com/data/bb/11/bb119d2eb2746a335d14bff900ec3a45.png These are the names that hydrus uses.
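Those three classes can be told apart mechanically from the URL shape; a sketch using my own rough patterns (hydrus's real URL classes are configured in the network engine, not with regexes like these):

```python
# Sketch: classify sankaku URLs into hydrus's three names using the example
# URL shapes quoted above.
import re

def classify(url):
    if re.search(r"/posts\?", url):          # search page with multiple files
        return "gallery url"
    if re.search(r"/posts/[A-Za-z0-9]+$", url):  # page for a single file
        return "post url"
    if "/data/" in url:                      # direct link to the image itself
        return "file url"
    return "unknown"

print(classify("https://chan.sankakucomplex.com/en/posts?tags=rating%3Asafe"))
print(classify("https://chan.sankakucomplex.com/en/posts/PVaDw30G8Mb"))
print(classify("https://s.sankakucomplex.com/data/bb/11/bb119d2eb2746a335d14bff900ec3a45.png"))
```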
>>14501 >save position & save size disabled I think that only applies to custom window sizes and positions. Maximize and fullscreen are separate functions from that. Not sure if there's a setting to disable that.
Is it just me or does pixiv no longer require you to be logged in to get full-size images?
Some of e621's images are login-only. Example: https://e621.net/posts/2869855 But it turns out they can be downloaded without an account, because you can frankenstein together the image url from stuff in the html. Here's a parser that does that. Note that this is just a file page parser, not a gallery search parser - searches will still exclude login-only images. Details: there's a bit of json inside the html that contains the md5 and the extension, among other things. This parser adds a subsidiary page parser that builds the url from that json. Also, I recently found out about this website: https://e621.cc/ If you clear the global search blacklist at the bottom, it's capable of displaying login-only images. I may try to make a parser for this.
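The "frankenstein" step boils down to slotting the md5 and extension from the page json into e621's file-url layout. A sketch of that construction; the path scheme is my assumption from observed URLs (the static host and the `/data/<md5[0:2]>/<md5[2:4]>/` bucketing may differ), and the md5 below is invented:

```python
# Sketch: rebuild an e621 file url from the md5 + ext found in the page's json.
def e621_file_url(md5: str, ext: str) -> str:
    # assumed layout: /data/<first two hex chars>/<next two>/<md5>.<ext>
    return f"https://static1.e621.net/data/{md5[:2]}/{md5[2:4]}/{md5}.{ext}"

print(e621_file_url("0123456789abcdef0123456789abcdef", "jpg"))
```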
Also, is it possible to export a subsidiary page parser alone to clipboard/image like you can with a content parser? I don't see a button for it.
If Hydrus crashes while importing files from a folder, it doesn't lose the files, but it does lose the import progress. This causes it to read all the files it got before the crash as already in the database, which makes it difficult to see what actually was already in the database when it throws you a "234 files already in db".
These two files were ignored on import. They're both very large gifs, both explicit. They both are too large for catbox as they specifically restrict gifs to 20MB, but I can keep them up on litterbox for 3 days. https://litter.catbox.moe/ki3xg4.gif https://litter.catbox.moe/qtiqcq.gif
(1.98 KB 675x36 Capture.PNG)

>>14510 Click on the file log button to see the reason a file was ignored. The import options have a maximum gif filesize setting, which defaults to 35 MB, so that's probably what's happening here. You can change the default import options in file > options > importing. You can change the import options for a particular page by clicking on the import options button.
>>14509 >crashes cause minor issues whoda thunk it
(180.94 KB 750x750 please respond.jpg)

I can bet this question was probably answered already on the FAQ but if it was, I couldn't guess the wording to locate it. Basically, I have a NAS attached to my computer (which is running a Windows OS). Currently in my NAS I have a location with a folder where I store all of my pictures. Now, in my head, I have this image of the perfect way that Hydrus will work for me, in that I can install Hydrus on my computer but the database it references is located on the NAS, which is where the pictures are also located. That way I don't have to worry about my computer drives dying or being destroyed as long as the Hydrus database is alive on the NAS along with the pictures, and I am unlikely to move those pictures anywhere so it should stay in place and not get messed up with a parent folder moving or some other form of restructure occurring. That way if I get a new computer, all I need to do is install Hydrus and reference the existing database on the NAS. Is this possible with Hydrus? Hopefully I was able to convey what I want succinctly.
>>14513 I'm not sure about a NAS because I have no experience with those. But it's certainly true that the pictures don't have to be right next to the install. I have my pictures on a separate drive that I plug into my computer which has hydrus installed. This help page has an overview of the database and splitting it up, as well as the launch arguments to use. https://hydrusnetwork.github.io/hydrus/database_migration.html
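To make that concrete, the relevant launch argument from the migration docs is `-d`/`--db_dir`. The paths below are hypothetical, just showing the shape:

```shell
# Windows, NAS mounted as a mapped drive (hypothetical path):
hydrus_client.exe -d="N:\hydrus_db"

# Linux, NAS mounted as a network share (hypothetical path):
./hydrus_client --db_dir=/mnt/nas/hydrus_db
```

One caveat worth checking in the docs: the SQLite .db files generally want a fast local drive, and SQLite over a network share can misbehave, so many NAS users keep the four .db files local and point only the file storage folders at the NAS via the migration dialog.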
My internet came back today, and despite the problems, I had a great couple weeks of work. The manage times dialog can now operate on many files at once, and it can even set incrementally increasing times to files to force particular file sorts. There are also some miscellaneous bug fixes and quality of life improvements. The release should be as normal tomorrow.
>>14508 I needed this a bunch of times before and I don't think you can. It would be great if you could do that. >>14509 There's an option to display only new images in file import options (where you filter filetypes and filesizes etc.) at the bottom. You can also pick the silent preset.
>>14511 Danke. I didn't notice I could right click -> copy notes. Both Hydrus and Catbox set lower size limits for gifs than other files. Why do some places not like large gifs?
>>14489 Thanks for the update. Do we know what the default threshold is?
Does anyone know why a bandwidth limit that I set for a domain is just being automatically overridden by download pages? I checked and I don't have the "auto override after 5 seconds" enabled, but it's just doing it anyway. I don't really even know how to troubleshoot this.
https://www.youtube.com/watch?v=UBYThAP_C5A

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v560a/Hydrus.Network.560a.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v560a/Hydrus.Network.560a.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v560a/Hydrus.Network.560a.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v560a/Hydrus.Network.560a.-.Linux.-.Executable.tar.zst

I was suddenly without internet for a while, so there was no release last week, but everything is back to normal now, and I had a great couple weeks' work. The 'edit times' dialog can now handle multiple files.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

editing times for multiple files at once

If you launch 'edit times' on a single file, everything is the same. If you open it on a larger selection, it will now show, in a summarised way, the range of each particular time type, and how many files in the selection have or do not have a time set there. The controls all work as before, but in general, when you set a time, all files are now locked to that new time. Give it a play and you'll see how it all works.

Since we don't always want to set exactly the same time to a set of files, but rather ideally a similar, staggered time, there is also a new 'cascade step' system in the multiple file version of the dialog. Every time-edit you open up has the ability to enter a little step, say 100ms, which will cause the dialog to set each successive file in the selection to be that much later (or earlier, negative values are allowed!) than the last.
This way, if you have a bunch of files of something contiguous, like a comic chapter, that all have different, merged, or otherwise awkward import times, you can now manually sort them in the UI using tags or hackery and apply a new import time based on the first file plus a few milliseconds each time. Thereafter, any time you sort those files by import time, they'll also be nicely in page order time, just as if you had imported them all together from one directory.

I am quite happy with the new dialog, and I plan to copy some of the new techniques I figured out to other dialogs for similar better multi-file handling. I'd also really like, as we have discussed before, to tackle a 'cascading' tagger in 'manage tags', so we can quickly set 'page:n' tags to many files at once.

misc

In v559, it turns out I did make a '54 years ago' mistake, on moving files from one local file domain to another. This is fixed, and on database update, any borked timestamps that were saved should be corrected too. Let me know if you encounter any more issues.

Mr Bones and the File History Chart are now under the database menu.

There's a new checkbox under 'files and trash', 'Remove files from view when they are moved to another local file domain'. This actually re-introduces the unintended behaviour we had a couple weeks ago, which was happening under 'remove when trashed', but targeted to the correct logical trigger.

The gallery downloader cog button now has a 'don't add new dupes' option, which discards the query_text/source pairs you add if they are already in the download list. If you need to paste a hundred queries from a big mishmash list, I think this will help.

As a very brief test, I've added 'sort files by pixel hash'. If you have some pixel hashes in your current query, this will make them stand out, but the rest of the sort data is meaningless.
I had wanted to try out some similar hacky new tech with 'sort by perceptual hash', but thumbnails don't have quick access to that data yet, so it'll have to wait.

I wasn't sure when my internet was going to come back up, so I dumped a bunch of time into some longer cleanup jobs. There's a ton of updated code behind the scenes, particularly around the way files get informed about new metadata, but nothing you need to care about. I have some new plans here and can see the path to getting all sorts of new metadata commands integrated into the shortcut and undo systems. I changed a couple thousand lines, so there may be a typo bug somewhere--let me know if you try to set any unusual content changes and it throws an error!

next week

Now we have tech that relies on file order, I think it is time we get the ability to conveniently reorder thumbnails. This will also help when I figure out cascading tagging in 'manage tags'. I'd love to have reordering by simple drag and drop, but that may be too complicated to slip into one week. I'll see what I can do.

I have quite a few messages to catch up on, which I will try to tackle Saturday.
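The cascade-step idea from the release notes above boils down to simple arithmetic. A sketch of what the dialog computes, not hydrus's actual code:

```python
from datetime import datetime, timedelta

def cascade_times(base: datetime, n: int, step_ms: int) -> list:
    """Give each of n files a time step_ms later than the previous one.
    Negative steps walk backwards, as the dialog allows."""
    return [base + timedelta(milliseconds=i * step_ms) for i in range(n)]
```

Sort your comic pages in the UI first, then a base time plus a 100ms step per file keeps any later import-time sort in page order.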
Edited last time by hydrus_dev on 02/03/2024 (Sat) 19:50:19.
Hi. I'm making a parser for a website where multiple post pages will refer to the same image link. (Not the same image from different sources, but the exact same image from the exact same URL.) I want to associate data from ALL the multiple posts (e.g. their urls, tags, etc) onto the same image link/download. However, the behavior I currently have is as such: The first post downloads fine with associated post data added to the downloaded image. But, when downloading/parsing a second post, it 'recognises' the image URL as already downloaded, which is fine, but it doesn't associate the data from the second post with the image. Is there anything I can do?
>>14499 I ran the login script test and the HTML looks like the Cloudflare challenge page. Another downloader which I used to use is running into the same issue, and it identifies the issue as cloudflare too. sucks https://danbooru.donmai.us/forum_topics/25845
>>14522 I remember you liked the idea of indicating which tabs in manage tags have tags.
>>14524 I think you have to copy a cookie from your browser into hydrus. I don't know what browser you're using, but in waterfox (firefox) you can inspect element, go to the storage tab and copy paste the cookie values into hydrus under network > data > review session cookies and set the expiration date to something like 3000 days.
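If you'd rather script it than paste by hand, the Client API has a cookies endpoint. This sketch uses the `/manage_cookies/set_cookies` endpoint and the `[name, value, domain, path, expires]` row shape as I remember them from the Client API docs; the key, domain, and values are placeholders, so check against your client's version:

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:45869"   # default Client API port
ACCESS_KEY = "YOUR_ACCESS_KEY_HERE"  # placeholder

def cookie_payload(name, value, domain, path="/", expires=None):
    """Build the JSON body for /manage_cookies/set_cookies."""
    return {"cookies": [[name, value, domain, path, expires]]}

def set_cookie(name, value, domain, path="/", expires=None):
    """POST one cookie into the hydrus session for that domain."""
    req = urllib.request.Request(
        API_URL + "/manage_cookies/set_cookies",
        data=json.dumps(cookie_payload(name, value, domain, path, expires)).encode(),
        headers={
            "Hydrus-Client-API-Access-Key": ACCESS_KEY,
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```

`expires` is a unix timestamp; a far-future value gets you the "3000 days" effect from the manual method.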
Is there any way to set certain tags as default for new file search pages? I want certain tags (for example -meta:wip) to always be excluded on new searches, unless I remove it manually.
>>14529 The closest I can think of is favoriting a search. It's not that useful if you're only doing it for a single tag though, just possibly very slightly faster.
>>14531 Try adding your session cookies to Hydrus, it's possible that gelbooru is filtering some content to "public" viewers. This has happened to me with downloaders several times, where about 10% of content is missing without logging in/setting an "everything" filter.
>>14532 I deleted my post after digging a bit more. It's delayed for 24 hours due to bandwidth limitations.
On certain files in the duplicate filter, I get these error messages:

v560, win32, frozen
TypeError object of type 'ContentUpdatePackage' has no len()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3261, in ProcessApplicationCommand
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2776, in _MediaAreAlternates
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2896, in _ProcessPair
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3130, in _ShowNextPair
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2981, in _ShowCurrentPair
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3331, in SetMedia
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1287, in SetMedia
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2691, in _GetIndexString
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in _GetNumCommittableDeletes
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in <listcomp>

v560, win32, frozen
TypeError object of type 'ContentUpdatePackage' has no len()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 858, in paintEvent
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 501, in _DrawBackgroundBitmap
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2666, in _DrawBackgroundDetails
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1699, in _DrawBackgroundDetails
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1714, in _DrawIndexAndZoom
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2691, in _GetIndexString
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in _GetNumCommittableDeletes
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in <listcomp>

v560, win32, frozen
TypeError object of type 'ContentUpdatePackage' has no len()
File "hydrus\client\gui\canvas\ClientGUICanvasFrame.py", line 38, in closeEvent
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3356, in TryToDoPreClose
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in _GetNumCommittableDeletes
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in <listcomp>
>>14535 The errors seem to trigger on duplicates with more than 1 potential pair.
(13.62 KB 447x264 image.PNG)

>>14523 From what you're describing it sounds like it should just work but with no example I can't test anything. In the post page url class, checking "post page can produce multiple files" might work?
Getting a serious error in duplicate processing.

v560, win32, frozen
TypeError object of type 'ContentUpdatePackage' has no len()
Traceback (most recent call last):
File "hydrus\core\HydrusPubSub.py", line 140, in Process
  except HydrusExceptions.ShutdownException:
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2232, in CloseFromHover
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2225, in _TryToCloseWindow
File "hydrus\client\gui\canvas\ClientGUICanvasFrame.py", line 38, in closeEvent
  can_close = self._canvas_window.TryToDoPreClose()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3356, in TryToDoPreClose
  def qt_continue( unprocessed_pairs ):
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in _GetNumCommittableDeletes
  return
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2730, in <listcomp>
  return
TypeError: object of type 'ContentUpdatePackage' has no len()

The progress text for duplicate processing turns into a single hyphen, and the comparison info from the previous pair never updates to the new pair. It always happens at the same place, and I've committed my decision on the file just before that place. It will crash after a few moments, or if you try to see the second file in the comparison, and will still crash if you pick an option and move on to the next comparison. I have no trouble loading the file and its potential duplicates in a normal page. I brute forced it by setting the relationships there through right click menus, which cleared the comparison from the queue. However, every subsequent comparison seems to have the same issue. I've manually cleared out three comparisons using the view potential duplicates option, but I still have a few hundred to go. Thinking the issue might be with the files whose old comparison info wouldn't update, I also tried brute forcing that in the thumbnail viewer. This resulted in that first comparison getting cleared from the queue.
Instead of immediately crashing, the next, now the first, comparison functioned just fine, but the duplicate processor still crashes when I try to go to the next comparison after that. I tried resetting potential duplicates and re-running the dupe filter preparation function, but the issue still persists.
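For anyone curious what this error class means: `len()` only works on objects that define `__len__`. A toy illustration (the class below is a stand-in, not hydrus's real ContentUpdatePackage, and the fix shown is hypothetical):

```python
# Toy stand-in: a wrapper object that holds a list but defines no __len__.
class ToyPackage:
    def __init__(self, updates):
        self._updates = updates

def safe_count(pkg) -> int:
    """What a fix has to do: reach the actual list, not the wrapper."""
    return len(pkg._updates)

try:
    len(ToyPackage([1, 2, 3]))
except TypeError as e:
    print(e)  # object of type 'ToyPackage' has no len()
```

So the listcomp in `_GetNumCommittableDeletes` was presumably handed the wrapper object where it expected something sized.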
>>14542 Oh hey, someone else is having the same issue. >>14535 >>14536 Did you find a way to get past it so you can process other duplicates?
>>14535 >>14536 >>14542 >>14543 Hey, sorry for the trouble with the duplicate filter! I screwed up a line in all the rewrites and the error is particularly bad. There's a hotfix here: https://github.com/hydrusnetwork/hydrus/releases/tag/v560a Let me know if you have any more issues!
>>14545 Thanks a ton for the quick fix. Any other program and I'd probably be waiting a month for someone to tell me they're "working on it".
Is there any way to have individual tag services treated similarly to a namespace, visually and via search? I'm pulling in some external tags, and I have my own manual tags, but it doesn't really work when they're both mixed together indistinguishably.
Hey just catching up with my messages now I have internet back properly. I may have to finish tomorrow, thanks for your patience. >>14435 For this file, I think you'll want to right-click it and say manage->maintenance->regenerate file metadata. If it then comes up as a webm and all works, great; if it doesn't, then you can delete it as just some crazy file, or I'd be interested in seeing it so I can see why hydrus is screwing up. If you can upload it here or just post a source URL, that's great, or send me an email or hit me up on discord and we'll figure out a private way to transfer it otherwise. >>14484 >>14485 Thanks, sorry--I messed that up, exactly the = vs == as you say. Should be fixed in the v560 now. I haven't looked into the custom-python-from-script idea yet, but I'm still hoping to fit it in so we can handle this stuff better in future. >>14440 >>14443 Thanks, I'll slip this in, should be easy. I have a longer term plan to move the taglists over to the customisable shortcut system too. >>14444 Great idea, I also want domain-filtered sort at some point. >>14446 >>14451 Yeah, any time you download a file from a website that parses a 'source time', which is usually something like the booru post time, that gets added as a new modified time for 'safebooru' or whatever. These are aggregated and in UI you only ever see/sort/search the most early of them. >>14454 I had some similar ideas here when I once planned out a 'ratings' repository where we'd be able to share and normalise an average rating for files. In general, although I think this stuff is broadly a good idea, I just don't have the coding time to build or maintain anything but the basics. Any time I think about diving into the hydrus network protocol again for an update like new file metadata or something, I shudder at how much of a pain it will be and how many new problems will spin out, so even if an idea is great, I often just can't entertain it. 
Ultimately, in future, I would prefer to move away from large scale sharing tech like the PTR. It has been a fantastically successful experiment, but it isn't sustainable, and I want to grasp this giant corpus we have worked on and hone it down to more useful systems. (e.g. training AI auto-taggers on it). Many tags like nouns are becoming easier and easier to recognise quickly, and the situation will presumably be bonkers in five years, so rather than build a very efficient 'list of samus aran files' repository, I think I'd like to move to a better 'samus aran manager', which involves better processing workflows and tech like deduplication. That said, in terms of sharing data, I want to move to friend-based client-to-client communications, through the Client API. Getting a few thousand Anons to agree on things in a central repo is tricky in a number of ways, but when you share your Goku fan pics with your friend, he knows what he is getting into.
>>14456 Oh yeah to add to this specific idea, I have had this same rough idea for purging old hashes. It is tricky, but I think there's a way of doing it. I wouldn't do it in an ongoing process, but I'd have the admin trigger a 'recycle reset' and users who wanted to help out would let the server know what was still a good file, and then after 180 days or something, any hashes that were not pinged by anyone are now delisted. EDIT: Sorry, I realise now this is what you were actually talking about, I didn't read it properly and thought this was more about multiple clients verifying tags through multiple 'voted' upload. >>14460 Thanks, will try! >>14477 Ah, this is a shame. I noticed it was seemingly working again a week or two ago. Even if things loosen up here and there, I guess the danbooru admins are moving in the direction of tightening up access to automatic downloaders, so things are probably going to be rocky for us going forward. I don't personally keep up to date with any booru politics, so if anything important happens, please keep me updated. Since this is Cloudflare tech, and it seems to be the CF tech that hits some people and not others (it is a few days later, but works fine here), my general experience with it is it is region/ISP based, so moving your VPN to a different location can be helpful. Otherwise, you basically just have to wait it out--the rules appear to be applied dynamically by some automated system, and they are undone similarly. >>14487 Thanks, I'll roll these in to the defaults! >>14488 Thanks, someone else mentioned this too, I will remove it. >>14494 I am confident this is a fixable solution, so no big worries. Although it sounds dumb, I know that big errors like that apply some rare pause states. I was fixing a guy earlier on today with the same issue. Check services->pause menu to see if all your repo work is paused. Otherwise, check the multiple pause/play buttons in the review services PTR page--any of them paused? 
If it isn't as simple as the thing being paused, what is the service's status text in review services? Normally it says 'service is functional' in black text, but if there is an error (e.g. 'all services are paused' or 'Download cancelled: blah blah connection error'), it'll say that in red text. What does it say there? Once you have things set up how you think they should be, hitting 'refresh account' is the best way to 'jog' it back into action.
>>14495 Interesting. I think if you ctrl+f the last thread or two, you'll see me talking about how I secretly have two always-on mpv windows in most clients, and switch back and forth as needed, and so when you see 'one video works, the other doesn't, over and over', like you are seeing, it tends to be because mpv window A is broken but B is fine (and thus a restart fixes it). I wonder if this is what is happening. These issues can often be aggravated by GPU drivers sperging out through a system hibernation or similar, failing to restore one or the other mpv windows after wake.

If it helps you play around with this, you can sometimes force loading a particular mpv window by opening multiple media viewers at once or going back to the main gui after the media viewer is open and clicking on something to load it in the preview viewer. Imagine that hydrus simply has a pool of mpv players, and any time it needs a new one, it grabs from the pool (or if none, creates one), and every time it moves on from a video, it gives it back to the pool (after a very brief delay). Most of the time, this means two mpv players, but you can force three or more if you try.

Although it sucks, if you notice this happening in your X11 after system sleep or just long use or whatever, the best solution may be just to restart your client every day. I am planning better and more safe mpv integration in future, but it'll be a long way off, so I can't promise much from my end to fix the current system. I basically just call a library to implant the mpv window in my UI and pray that it works. Note that the reason I recycle mpv players like this is that if I ever try to destroy an mpv window, most of the time I get a crash within a second, wew.

I'm interested though, when you have an mpv player that is presumably borderless fullscreen or otherwise sperging out, can you alt-tab your way somehow to seeing the mpv window 'beneath' the hydrus media viewer?
Maybe at a different size, or not borderless? I know that when mpv goes wrong on Linux, it is often de-embedded from my actual UI, and is just a floating window somewhere. Anything like that?
>>14522 >>14550 >Thanks, I'll roll these in to the defaults! If you meant to include the parsers, then the HTML paragraph thing does not seem to be working.
>>14551 >Although it sucks, if you notice this happening in your X11 after system sleep or just long use or whatever, the best solution may be just to restart your client every day. NTA, I have two displays and if I turn a display on and run xrandr --output ... --primary --auto --above ... and move the window to the new secondary display, menus do not appear. The display works without that, so maybe I should only run xinput and not xrandr.
Version 558 needs a 'special update' note stating you need to update to 558 before updating to 559 or later. If you don't, earlier database migrations can fail due to the timestamp_ms column not yet existing. This happens at least for the database update to 548 when running 559 or later, but it's possible it happens for more versions between 548 and 558 too. See GitHub issue 1512 for someone else who also encountered this error (and included a full traceback).
Can Hydrus not snag ugoira animations from Pixiv? I dropped a url in and it got ignored with the note, >veto: ugoira veto
>>14501 >>14502 >>14505 Thank you for this report! I played around and see you are just talking about window size (stuff in options->gui), not the actual image inside the window (stuff in options->media). Yes, it looks like I am saving maximised status even when remember size/position are off! I will fix this. >>14507 Thanks, I will play with this and see if I can roll it nicely into the defaults! >>14508 >>14516 It is strange you cannot. I bet it is because the separation formula and parser proper are separate objects and thus I don't have a nice single thing to serialise to the clipboard (my import/export buttons are programmatic, not hardcoded every time, and often need things coming in and going out in a certain way). I thought, though, that I had hacked some code to make it easier to do this sort of thing, so I'll have a look. >>14509 Yeah, I'm sorry, I can't guarantee nice behaviour here, after a crash. The import folder only saves its progress when the whole run is done, so if the program shuts down non-cleanly, you'll get this. Rather than rewriting the import folder to save more atomically, making things nicer after a crash, can you talk more about your crashes? I'd much rather try and relieve that so you aren't having to worry at all. What's your platform, and does any area of the program or action tend to aggravate the crashing, as far as you can tell? If you are on Linux and sync with the PTR, do you notice the client suddenly eating 16+GB of memory after some maintenance operations, and is that correlated with crashes at all? Unfortunately, crashes--where the program suddenly halts and exits, with no clean shut-down--do not write anything to the log, and due to the language I write hydrus in, it is tricky to get good information out of OS crash dumps, but if you check your log (in your install_dir/db folder), there may be a bunch of unusual error messages in the lead-up to certain crashes, particularly Qt 'paint' stuff. 
Let me know if you see anything in there!

>>14510 >>14518
>Why do some places not like large gifs?
This doesn't happen so much any more, but when webms were still new and not every device had the codec support or CPU power for them, some geniuses would convert 10 second 720p overwatch sfm loops and the like to gif (often through dumb automatic converters) so ye olde phoneposters could still see them. What was a fairly neat 30fps 7.9MB webm would become a 12fps 93MB gif with a garbage colour palette. They got re-uploaded to boorus and it was a spammy hassle. It is a bit like when an artist puts out ten variants of a picture of an anime girl on their patreon rewards in 16,000x22,000 png format--just a wasteful format put out by someone trying to do something helpful but who just doesn't know enough about the levers they are pulling. Some months ago, I turned off that 'max gif filesize' default for new users, since this isn't such a big deal any more.

>>14513
Just as a side thing, while hydrus can absolutely store its files and stuff on a NAS, and several users do this and are happy, if you did not know: hydrus does not manage your files and folders as they currently are. It sucks them up and copies them into its own technical non-human-friendly storage system. If you haven't tried hydrus out, I recommend you just spin up a copy on your desktop or something and drag and drop a thousand files onto it. By default, it won't delete the files from their original locations, so this is 100% safe and non-destructive to do. Have a play around with the program, go through the getting started guide, and see if the program is for you. If you absolutely need a human-friendly copy of your files, with regular files and folders, some hydrus users figure out some duplicated solutions for certain precious files where they need it, but in general hydrus is not for that sort of thing.
>That way if I get a new computer, all I need to do is install Hydrus and reference the existing database on the NAS. Yep no worries, hydrus is completely portable and as long as you can still see your files through some normal directory, it is simple to relocate to another computer. Let me know how you get on! More reading on the specific filename question, if you weren't sure on this and want to know why: https://hydrusnetwork.github.io/hydrus/faq.html#filenames https://hydrusnetwork.github.io/hydrus/faq.html#external_files
>>14521 Is it overriding on the 'gallery download' step, where it walks through all the gallery pages of a search to gather Post URLs? I think I have a ~30 second override on this, forced, because gallery contents will shift by a file or two whenever a user uploads a new file that matches that search. If I were to allow a four-hour break on a search like that, and some user uploaded 150 files in the meantime, we'd lose our position on resume in a complicated way. I don't think this time is editable, but if you like you can force a mandatory wait between gallery fetches under options->downloading. Maybe adding 90 seconds would fit your rhythm better? Let me know if this isn't--I can expose these values for editing, although I'd be wary of allowing anyone to turn it off completely. If you really are getting bandwidth override on file downloads and don't have that hacky 5-second override set, let me know, since in that case something is definitely screwing up. >>14523 >>14539 Yeah, odd situation. It is strange that in your 'already in db' situation it isn't associating the metadata. Normally, hydrus will assign all parsed data on an 'already in db' result, BUT as soon as 'already in db' is set, it won't do any more network requests, so if you have a complicated download 'chain' here that relies on multiple URLs being downloaded, maybe that's what's happening here? Setting the URL class to 'multiple files per URL' might get the behaviour you want here, even if it sounds a little wrong. That thing basically says 'get all metadata, keep downloading html and always do download html every time you see it even if you think you've seen it before, just don't download files again'. If you right-click on a file log entry, you should get pic related. In your second attempts, which end up 'already in db', do you have any parsed tags, and if it has a hash, is it the correct hash? 
If the hash is good and there's content to add, it should all fire correctly on 'already in db', so let me know if it isn't! I'm currently investigating a complicated subsidiary page parser situation where it apparently doesn't, also, hope to have it cleared this week, so check the changelog on next version too! >>14525 Thanks! >>14529 >>14530 Yeah, I'd recommend setting a favourite search with a template. Have that negated tag and set it as 'search paused' (i.e. click on 'searching immediately' in your template to save), and then load that or other templates up. I think Ctrl+I flips 'searching immediately', if you can get used to it. Then again, if WIP specifically is so important to you, you might like to also play with multiple local file services. If you made a new file service called 'complete' or something and then mass-copied the non-WIP files to it, then you could just search that file service when you wanted it, or you could have other variants like expelling all the WIPs to their own box, whatever worked best for your situation. Multiple local file services are often used to make strict sfw/nsfw barriers, and this isn't completely different.
>>14547 Yeah, I'd like this. I want some of my personal unnamespaced tags to stand out in gold lettering or something. I could just namespace them all, but bleh, and I might like just some normal kino tags on any service to stand out in bold or something anyway. I want more tag display rules. I don't honestly have excellent tag-service-separation tech yet. I was talking with someone about this the other day--when I did multiple local file services, we got a load of individual-file-service tech on the database end and custom unions and so on when doing searches, but tags are still stuck on either one particular service or 'all known tags', which is a clever optimised hardcoded union of everything. It'd be nice to have some more settings here, although there would be some fun stuff to deal with when two services had the same tag but different display rules. So I know better where you are coming from, what sort of display are you thinking? Showing the text a different way, grouping different services' tags separately, or something else? >>14553 Yep thanks, I rolled in some new tags, from here >>14386 , in v560, and I will try and roll in that description note parsing stuff in for next Wednesday's release, v561. >>14556 Thanks, I have had a couple of reports. I hate it when this happens, and there isn't a clever way around the bitrot. I'll have another look to check, but I think the critical points are v448 (can technically update to v559, but misses some downloader updates), and v551 (can safely update to v559), so I'll write some version checks and tell the user if they need to do another step, and to what version. >>14557 We are in slight limbo with Pixiv Ugoiras specifically. The thing is that Pixiv offers the frame timings of their Ugoiras in javascript or JSON or something separate to the images/zip, and if we are going to have proper native rendering, it would be nice to record those frame timings when we finally set the Pixiv Ugoira floodgates open. 
Another user has been playing with some extra download tech to figure this out, but I am not sure what the status is. I'll have to do some database tech at the same time so we can save it, and we'll probably break the Ugoira standard and save the frame timings in the zip as a .json too, since it is weird to have them effectively pseudo-sidecar'd. Anyway: we want it, and we want Ugoiras to play properly, but we aren't there yet.
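For reference, Pixiv delivers the timings as a list of frame/delay pairs. If they were stashed inside the zip as proposed above, a reader/writer sketch might look like this — note the 'animation.json' member name and exact key names are my assumptions, not an established standard:

```python
import json
import zipfile

# Hedged sketch of the 'frame timings as a .json inside the zip' idea.
# Pixiv-style timing metadata is a list of {"file": ..., "delay": ...}
# entries, with delay in milliseconds; the 'animation.json' member name
# here is an assumption, not an established standard.
def write_frame_delays(ugoira_path, frames):
    with zipfile.ZipFile(ugoira_path, 'a') as z:
        z.writestr('animation.json', json.dumps({'frames': frames}))

def read_frame_delays(ugoira_path):
    with zipfile.ZipFile(ugoira_path) as z:
        meta = json.loads(z.read('animation.json'))
    # map each frame image name to its display time in ms
    return {frame['file']: frame['delay'] for frame in meta['frames']}
```

Bundling the timings this way would make an Ugoira zip self-describing, which is the appeal over keeping them pseudo-sidecar'd.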
(74.62 KB 1304x1115 hydrus_client_yamFwO2ksI.png)

>>14550 ok I made sure everything is unpaused just in case there was a weird pause I missed, and got the same error. the problem, at least as far as I can tell, is that network sync never resets. I can do everything it asks me for, the program knows those files are gone, or at least should know, but for some reason it never registers that they are gone, or it never tells network sync they are gone. I have tried 'reset downloading', hoping that would make it forget that anything was downloading, but that didn't work either. so I have no clue what's not doing whatever it's supposed to do correctly.
>>14560 For my tag usecase I've manually tagged a bunch of files with wide genre tags alongside a score tag. With other tag repositories and ai tagging looking like it can streamline the wide genre tagging + more specific tagging, I'd like there to be a visual distinction between them. Maybe it's literally having the option to have a tag repository effectively be a namespace, with or without a prefix but able to have its colour/boldness etc changed. Ideally with the ability to have an image's tag list sorted first by tag repository, then alphabetically within that repository, so my tags aren't mixed in with the 1000 extra tags added by other repositories. This would also apply to search suggestions if possible, it's a lot easier to search through my limited tag set first than having it mixed in with the thousands added by external repos. For the problems you'll run into, yeah, there's a lot of decisions to be made. If a person elects to have individual repositories act as their own namespaces that would at least fix the multiple services using the same tag issue, since the tag would simply appear multiple times in the list, one per service (throwing shit at the wall, but maybe you can set a service priority where the highest priority service with the tag is the one that displays it, with an asterisk next to the tag to indicate other services also have that tag).
>>14559 >If you really are getting bandwidth override on file downloads and don't have that hacky 5-second override set, let me know, since in that case something is definitely screwing up. Yeah. It's saying "overriding bandwidth after 5 seconds" but the override checkmark is unchecked. It's happening for the file downloads, not the searching step.
I had a mixed week. I added thumbnail rearrangement, and I cleared some bugs and quality of life stuff, but I didn't get as much as I wanted done. The release should be as normal tomorrow.
>>14566 >I added thumbnail rearrangement noice
I'm a casual user getting frustrated with all the fiddling with 403's and cookie/header stuff. What's the best couple of sites with lots of content and little to no upkeep to scrape from? I was using lolibooru.moe & sankaku, then switched over to rule34.paheal and danbooru+gelbooru. I looked into hydownloader, but that looks like a high maintenance program to use, so it doesn't seem feasible for really reducing time spent fiddling with stuff.
>>14568 I mostly grab from pixiv.net, tbib.org, and e621.net. The only issue I ever have is the occasional (like once a month) sending of cookies from e621 to hydrus to stay logged in, but that's literally 2 clicks to fix with hydrus companion. Cannot recommend hydrus companion enough, it makes so many things much simpler. >lolibooru.moe pixiv.net would be the place to go for that sort of thing. You'll need an account and to manually set your location to Japan in your profile, then send cookies to Hydrus. Again, super easy to send cookies with hydrus companion. The only issue with pixiv is that they made their own animation format called "ugoira" that hydrus can't handle yet. It'll probably happen one day, but until then I've been using the Pixiv Toolkit browser extension to convert them to webm. Extension is at: https://github.com/leoding86/webextension-pixiv-toolkit
>>14569 >convert them to webm That's a lossy conversion I believe. If you want something lossless, I think it also lets you convert to APNG which is close to what ugoira actually is.
>>14570 It can do GIF, APNG, or WEBM. I believe you are correct that WEBM is lossy, but I've never been able to discern a difference with it set to the best quality. Gotta manually download, convert, import and tag everything anyway, so may as well pick the space-saving option while I'm at it, since I can't see any difference between the APNG and WEBM. Though if Hydrus could automagically do all that for me, I wouldn't care what format or how much space it used. Convenience would trump the space saving for sure. Maybe one day.
https://www.youtube.com/watch?v=GuVnZZ1sFIc windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v561/Hydrus.Network.561.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v561/Hydrus.Network.561.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v561/Hydrus.Network.561.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v561/Hydrus.Network.561.-.Linux.-.Executable.tar.zst I had a mixed week. Thumbnail rearranging is added, and some bugs and quality of life issues are cleared. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html thumbnail rearranging So, if you right-click any thumbnail, you'll now see a 'move' menu. This makes it simple to rearrange your thumbnails. You can move your current selection to the start, the end, left one, right one, or 'to here' if you have multiple selected. If your multiple selection is non-contiguous, it will be made so on move, with the move focusing around the position of the first selected item. You can also map these commands to keyboard shortcuts under the 'thumbnails' shortcuts set. I haven't added any default shortcuts for this yet, but let me know if and what you would prefer--I've been playing around with ctrl+numpad numbers on my dev machine, and it feels nice to me. In future I'll try and figure out mouse drag-and-drop rearranging. It might have to wait for a larger pending rewrite of the thumbnail grid though--we'll see. other highlights Unfortunately, there were a couple of stupid typos in the content processing changes last week. One in the duplicate filter (which the v560a hotfix addressed), and another fixed today that was causing 'already in db' results to not get metadata updates correctly. Sorry for the trouble, and thank you to the users who reported these. Ctrl+C/Ctrl+Insert is now hardcoded to copy tags from the taglist. 
The thumbnail and media viewer menus should now be much thinner. I hate how wide they can get and how annoying it is to hit their many nested submenus when they get like that, so let me know if they still go crazy in some situations. There was a bitrot issue in v559, the millisecond timestamps conversion, that made it impossible/bad to update from a much older version. This has always been a tricky technical issue to talk about and communicate to the user, so I've now written a better in-client error reporting process that stops the user before the update is even attempted. The upshot this week is that if your client is v551 or older and you try to update to v561 or later, you will be told to update to v558 first. In lieu of a proper rewrite, I've made some semi-hacky newline processing improvements to the parsing system. If you make downloaders and hate having to deal with extra whitespace in multiline content, notes or otherwise, check out the full changelog and let me know how you get on with it all. next week I want to work on github bug reports, which I haven't put proper time into for too long!
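For the curious, here is a rough sketch (not hydrus's actual code) of how a 'move to start' on a non-contiguous selection might collapse it into one contiguous run, as the release notes describe:

```python
# Sketch of the 'move selection to start' behaviour from the release
# notes: the selected thumbs are pulled out in their current order and
# re-inserted as a single contiguous block at the front. This is an
# illustration, not hydrus's real thumbgrid code.
def move_selection_to_start(items, selected_indices):
    chosen = set(selected_indices)
    selection = [item for i, item in enumerate(items) if i in chosen]
    rest = [item for i, item in enumerate(items) if i not in chosen]
    return selection + rest
```

The other moves (to end, left one, right one, 'to here') would follow the same pattern, just with a different re-insertion point for the collapsed block.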
>>14572 >Ctrl+C/Ctrl+Insert is now hardcoded to copy tags from the taglist. Noice.
>>14462 just saw this post. seconding >>14466 they both have different stuff
>>14572 The Ctrl+C and thinner context menus are some of the coolest changes to me in a long while, great! Also just curious, what's the use case for thumbnail rearrangement?
"The 403 error has recently returned to me. And I found a new solution on the Danbooru forum. Instead of your user-agent you have to write your nickname (Danbooru) in hydrus. It worked for me, maybe someone can use it too." This is in regards to fix 403 errors. Any clue wtf he means by putting your nickname in hydrus?
>>14575 >what's the use case for thumbnail rearrangement? When you imported a set of images and you need to order them, manually putting them in order before actually labeling them is probably much easier. I know I often want to just see a small set in order that's out of order, but changing the sort order to make it work is often a bit cumbersome.
>>14575 >>14578 Oh, I didn't read the update right. This is moving thumbs either forward 1, backward 1, to the end, or to the front, via right-click menus. I thought it was the more intuitive drag-and-drop rearrange being implemented. In this case, I can understand seeing few use cases for this. Really, if I need to temporarily see something grouped together in order, I can just drag and drop each file in order to a new page, and it would be faster and more intuitive. This is really only useful if you only need to move one or two files, and those files just happen to be one or two spaces away from where you want them or need to be moved to the front or back of the list.
Can I ask if there are any plans to hook subscriptions and import folders up to the API? Designing the right API for subscriptions might be difficult but it's mostly import folders I care about. These days I am running Hydrus headless like this: env QT_QPA_PLATFORM=offscreen hydrus-client --boot_debug I mostly interact with https://hydrus.app and rarely touch the GUI except for subscriptions and import folders. Subscriptions I don't mind using the GUI for but sometimes I want to reload all watched folders (for example gallery-dl has just finished downloading a bunch of files, I'd like to tell Hydrus about this rather than wait for the timer to trigger). It'd be nice to have an API call for that or maybe attach a signal handler (e.g. pkill -SIGUSR1 or -SIGUSR2 could trigger a reload). Another option could be hooking up inotify into Hydrus instead of polling but that introduces more complexity compared to a simple curl call to the API at the end of my script.
Exporting a file fails with "Filename too long", so {tags} is not usable in the template.
(6.56 KB 466x226 Image.PNG)

>>14577 You can change headers such as User-Agent under network > data > manage http headers.
>>14579 To be honest I was thinking the idea was the rearrangement was saved, which would be super handy to me. I understand why that's not the case (i'm sure this would be difficult to produce given the variable things one can sort by) but as it is I'm not thinking I'll get much use out of it, I rearrange images in chronological order primarily and would rather just edit the times so they are arranged as such permanently. Still neat though.
artstation gallery searches always 403 now, even when given cookies. What do?
>>14561 Sorry for the continued trouble. I can verify that file is a legit PTR update file that you should have, so please try this: - make sure _help->advanced mode_ is on - open a new search page, change 'my files' to 'repository updates' - paste in "system:hash = 12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009" - if it does not appear, change the file domain from 'repository updates' to 'all local files'. Then, try 'all deleted files'. Does the update file appear? Presumably it won't, since you don't have it, but let's see. Then let's fix another common error here, hit database->regenerate->local hash cache. Then, to test another issue, let's hit up database->file maintenance->manage scheduled jobs->add new work tab and click the 'all repository update files', and add the top 'if file is missing, then if has URL try to redownload, else remove record' job. Go to the 'scheduled work' tab and force it all to clear again. Then hit up database->db maintenance->clear/fix orphan file records. Then try unpausing the repo again. Let's see if we get the same error or what. >>14562 Thanks, these are good ideas. I already have some options structure here, and some service precedence, in the options in tags->tag display and search and the sibling/parent stuff. I wonder if I should move the namespace colour options there? The current options would be 'all known tags' as normal, but then each service could override it. I'll have a think. >>14565 Damn, thank you. I will look at this. I guess restarting the client doesn't fix it? And is it for one page or all download pages, new and old? And stupid thought, but does turning that 'cog' button menu option ON actually turn the behaviour OFF? I wonder if the initialisation is flipped somehow. >>14568 The big three boorus--danbooru, gelbooru, and safebooru--are good for run of the mill animeshit. Much of western-friendly pixiv is reposted there. Rule34.xxx hosts a lot of western content and SFM stuff. 
e621 has beast/furry stuff if that's what you like. All are easily accessible to a guest user (although danbooru got stricter recently). If you want more spicy content, there's very few places that'll host it without a login these days. Gelbooru allows spicy stuff just with a one-time cookie, I think. I haven't done it myself, but I think it should be simple: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Gelbooru
>>14570 >>14571 We hope to have native playing of Ugoiras this year, just need to figure out some frame timing parsing stuff into the download pipeline and database storage, and then I'd be interested in figuring out the first in-client file format conversion. APNG and Webm are good candidates for different technical reasons. >>14575 >>14578 >>14579 >>14584 >Also just curious, what's the use case for thumbnail rearrangement? I recently updated the 'manage times' dialog to handle multiple files at once, and one feature is it can apply a 'cascading' time to a set of files, so file 1 in the selection gets 'blahblah and 00 seconds', the next 'blahblah and 01 seconds', then 'blahblah and 02 seconds', and so on. I have also, for ten years or more, wanted to add the same thing to the manage tags dialog, so you can spam 'page 17 -> page 54' to a bunch of files in one operation. This new tech will benefit from better ways of reorganising files before we set up the job. I'd like to figure out reorganisation via mouse drag and drop too. The thumbgrid is all my garbage custom code from a million years ago and coordinateshit can sometimes be a pain. We'll see! >>14581 Yes, absolutely! I need to do some backend stuff to make subscriptions more instantly accessible (think about the delay before manage subscriptions opens), and some more 'this is what this sub is and what it has done and does do' tech, but yeah, lots of people want access so I won't forget this. Import folders is less desired, but as I add subs, I think the access calls will be similar, so I think it should be doable. If I forget, please remind me! >>14582 Thank you, can you give more of an example, so I can reproduce it on my end? I thought I had pretty good filename 'eliding' tech now, where it should clip the filename to something that'll fit for your file system, no matter what. Do you actually get the 'filename too long' error? Can you post it? What list of tags cause it? 
>>14585 Yeah I don't think it is reasonably possible any more. I think it is a strict CloudFlare block that we just won't pass. Maybe in the future when I move to HTTP 2.0. Maybe gallery-dl/hydownloader can handle it?
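The 'cascading' time behaviour described above is simple to express. A sketch, assuming a one-second step between files:

```python
from datetime import datetime, timedelta

# Sketch of the cascading time idea from 'manage times': each file in
# the selection gets the base time plus its position times the step.
# Illustrative only, not hydrus's actual implementation.
def cascade_times(base_time, file_count, step_seconds=1):
    return [base_time + timedelta(seconds=i * step_seconds)
            for i in range(file_count)]
```

This is also why the thumbnail order matters before running the job: the cascade follows whatever order the files are currently in.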
>>14587 >I recently updated the 'manage times' dialog to handle multiple files at once, and one feature is it can apply a 'cascading' time to a set of files, so file 1 in the selection gets 'blahblah and 00 seconds', the next 'blahblah and 01 seconds', then 'blahblah and 02 seconds', and so on. Ah, that would be very useful.
(294.34 KB 898x898 yes applejack.jpg)

>>14572 >thumbnail rearranging YAY!
>>14587 >>>14582 https://twibooru.org/2951748 {hash} {tags} A sidecar file didn't get saved. Most of the tags are in two services, but there is one that starts with "downloadreason:sub:mares definitely" in a separate one, that got cut off. Traceback (most recent call last): File "/mnt/hydrus-561/hydrus/client/gui/exporting/ClientGUIExport.py", line 851, in do_it metadata_router.Work( media.GetMediaResult(), dest_path ) File "/mnt/hydrus-561/hydrus/client/metadata/ClientMetadataMigration.py", line 193, in Work self._exporter.Export( file_path, rows ) File "/mnt/hydrus-561/hydrus/client/metadata/ClientMetadataMigrationExporters.py", line 657, in Export with open( path, 'w', encoding = 'utf-8' ) as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [Errno 36] File name too long: '/tmp/3a94a9abc717a496ffe22eb1d7e4741ce184591bcdf76105428fdf49408e89b3 best friends until the end of time, applejack, fluttershy, pinkie pie, rainbow dash, rarity, spike, twilight sparkle, twilight sparkle (alicorn), derpibooru import, sub:mares .png.all tags.txt'
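For what it's worth, the OSError above is the filesystem's filename length limit, which on ext4 and most others is 255 *bytes*, not characters. A hedged sketch of the kind of eliding that avoids it (illustrative only, not hydrus's actual export code):

```python
# Sketch of filename eliding to stay under a 255-byte cap: measure in
# UTF-8 bytes, trim the stem, and keep the extension intact. The
# 255-byte limit is typical for ext4/NTFS; this is not hydrus's code.
def elide_filename(name, max_bytes=255):
    if len(name.encode('utf-8')) <= max_bytes:
        return name
    stem, dot, ext = name.rpartition('.')
    if not dot:
        stem, ext = name, ''
    suffix = ('.' + ext).encode('utf-8') if ext else b''
    budget = max(0, max_bytes - len(suffix))
    # truncate the stem and drop any dangling partial UTF-8 sequence
    trimmed = stem.encode('utf-8')[:budget].decode('utf-8', errors='ignore')
    return trimmed + (('.' + ext) if ext else '')
```

The tricky part, as the traceback hints, is double extensions like `.png.all tags.txt` and multi-byte characters in tags, which is why measuring in bytes and truncating on a character boundary both matter.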
>>14583 Oh sorry, I understood that part. I meant I was confused by what he meant by "write your nickname (Danbooru) in hydrus." What nickname?
Copy pasting timestamps in the manage times menu seems to be broken. I simply click on the copy button and then the paste button and get this error.
(208.61 KB 611x1292 A.png)

Hi @Hydev, i got some questions. Hope you can answer them. im still testing with a very small number of files to get the grips, so maybe the answer wouldn't be clear to me for some questions.

1. im up to date with the PTR. i exported the repository update files, which are around 11GB, just to have them backed up in case. is it correct that people can use those files to update their PTR tags to the latest state, meaning all the mappings/parents/siblings are contained in those 11GB? if that is the case, then:

a) why were those files not offered, compared to a whole client with 30GB+ that was linked in your documentation? is it because they have to be processed first?

b) there is the import repository update files option. can i import those in an offline client, or do those files need an online connection with a public account to work at all? didn't try yet. if you don't need an online connection, i guess it would be feasible to just copy the new update files to the offline client and import them, instead of creating huge Hydrus Tag Archive files that take hours to be created for all PTR mappings. i tested creating a HTA with all the mappings from the PTR to see how big it is. after it grew to 30GB in 1-2 hours, i stopped. i assume it would grow to 60GB like the client.mappings.db.

c) can i delete the old processed update files from the "repository updates" domain after i am up to date? or are they necessary for the PTR to work? also, if i can delete them, i assume i should leave the latest ones in there in case i haven't processed them yet, correct?

2. i could not find out what the "tags" sorting button (see screenshot No.1) is for. maybe i don't have enough of my own tagged files to see a difference. can you give me examples of when it is used? 
also i can only see which tag domain is selected when moving the mouse over the button to show the tooltip, because i assume there should be a checkmark icon like for the other buttons, but it is not showing here. bug?

3. im not sure if the tags autocomplete suggestions work as intended in the manage tags dialog, or when adding favorite tags in "options -> tags dialog" for example. maybe other places too. i mean specifically that the file domain button does not change the behaviour and seems to not be necessary at all, because it is not considered. let's say i have only one file in hydrus and it is in the "my files" domain and i tagged it myself with the tag "test1". then changing the tag service to "my tags" and the file domain to "trash" shouldn't show that suggestion because "trash" is empty, but the suggestion does show. let's say that file had also been tagged by the PTR with "cool1" and i change the tag service to "PTR" and the file domain to "my files". instead of showing me only "cool1", the autocomplete shows every PTR tag. no matter which file domain you choose, it either shows all "my tags" or all "PTR" tags, if you understand what i mean. in summary: the option to change the file domain seems useless? is it intended? (see screenshot No.2 -> i have only 13 files in hydrus "my files" and no zelda content at all but i get the suggestions anyway). it looks like the file domain is set to "all known files with tags" all the time, no matter what file domain you choose. if it worked like i thought it would, you could create a "SFW" file domain, select it, and then you would only get SFW tag suggestions. of course only if you put files in there. for the normal search pane it works just as i imagined though! the file domains there are considered.

4. right-click into the "selection tags" window after a search shows the "experimental" option with three different display options. 
can you explain where they apply, how they differ, and if there are settings to change the behaviour? in general: where does it make sense to use those at all? i couldn't find out what they do. it is not the same as the "manage tag display and search" setting afaik; i kinda understand how that one works after playing with it a bit, but i don't get the "experimental" one.

5. feature request: can we have the option to toggle the visibility of the local file domains in the popup that is shown when you left-click on the local file domain button in the search pane (see screenshot No.3)? im in advanced mode so there are some that i find distracting. maybe when you click on the file domain "multiple/deleted locations" and the new window opens, you could add an eye icon next to the file domains to toggle their visibility in the popup as you like. for example, the "all deleted files" location is annoying me right now because it shows files that were put into a new test file domain and then deleted again, even tho they are still in "my files". not useful for me. i know it is intended for this one, as the tooltip says. but i'd rather switch that file domain visibility to a "deleted from all local files/all my files/my files" domain. would be nice if we could customize to show whatever file domains we want in the popup, maybe even rearrange the order.

that's all for now. sorry for the wall of text. i appreciate it a lot if u try to answer them. you are a TRUE LEGEND!
I had a tepid week. I fixed some bugs and improved some quality of life, that's all! The release should be as normal tomorrow. >>14591 Thanks, should be fixed tomorrow, let me know how it goes! >>14593 Same, thanks, should be fixed tomorrow!
>>14569 do you use the pixiv parser from the cuddlebear repo?
Does anyone know what the "hatate:not found" tag is? I got banned for it but I've never seen it get parsed from anything.
>>14600 Ah, I see, yeah that'd explain why I've never seen it before, I must've gotten caught in the crossfire due to duplicate processing.
>>14601 oh hey, welcome to the duplicate processing ban club! XD hatate:not found has never gotten me, but sankaku urls as tags has plenty of times. >>14598 Yeah, I just use that one and send pages to hydrus via hydrus companion.
https://www.youtube.com/watch?v=XnwHtzBf-c8&t=17048s (go to 4:44:00 for a wild fight!) windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v562/Hydrus.Network.562.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v562/Hydrus.Network.562.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v562/Hydrus.Network.562.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v562/Hydrus.Network.562.-.Linux.-.Executable.tar.zst I had a tepid week, but there's some decent fixes and quality of life improvements. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html all misc this week I fixed a stupid typo error in the manage times dialog when you go into a single time and try to copy/paste the timestamp. The buttons also add millisecond data now. Fingers crossed, drag and drops of thumbnails and page tabs will feel snappier this week. You might see some heavy 'analyze' database maintenance coming down the pipe. Let me know if it proves annoying, or if you even notice it at all--my hope is to iron out some super slow PTR-based tag updates that hit some users, but we'll see how it goes. If you are an AUR user or otherwise updated to a very new Qt version (6.6.1) and suddenly the column widths of multi-column lists all went ~100px wide, I think I've fixed it! If you were affected by this, I can't recover your old settings, but recall that you can right-click any list header to reset the widths to the default. next week I was short on work time this week, so I'll try to hack away at github bug reports again.
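For anyone curious what that 'analyze' maintenance is: hydrus runs on SQLite, and ANALYZE refreshes the statistics the query planner uses to pick indexes, which is what can speed up those slow PTR tag updates. A toy illustration (the table and column names are made up, not hydrus's real schema):

```python
import sqlite3

# Toy demonstration of SQLite's ANALYZE: after it runs, sqlite_stat1
# holds per-index row statistics that the query planner consults.
# 'mappings' here is a stand-in name, not hydrus's actual schema.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE mappings (tag_id INTEGER, hash_id INTEGER)')
con.executemany(
    'INSERT INTO mappings VALUES (?, ?)',
    [(i % 10, i) for i in range(1000)]
)
con.execute('CREATE INDEX mappings_tag_idx ON mappings (tag_id)')
con.execute('ANALYZE')

# each row says: table, index, "total_rows avg_rows_per_distinct_key"
stats = con.execute('SELECT tbl, idx, stat FROM sqlite_stat1').fetchall()
```

On a big PTR-sized database this statistics pass can take a while, which is why it surfaces as visible background maintenance.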
>>14603 >https://www.youtube.com/watch?v=XnwHtzBf-c8&t=17048s <direct linking Dev-anon, you should know better.
>>14602 weird, i'm getting this every time i try to add the pixiv tag parser from github despite having deleted all pixiv stuff in my hydrus client and restarting it "All 1 downloader objects in that package appeared to already be in the client, so nothing need be added."
>>14605 i'm referring to the "pixiv tag search api parser 2020-10-16.png" on cuddlebear btw
(42.23 KB 375x341 15-12:08:43.png)

Forgive me if this is already implemented and I haven't noticed it, but could we get a "favorite gallery downloader setting" or something? I have a few gallery downloaders that I mainly use and many that I use only occasionally and it'd be nice to have the high traffic ones at the top or highlighted in a different color or something.
>>14606 Did some double checking and found this, which i believe I got somewhere in one of these threads about a year ago. Maybe this will work for you?
>>14606 Isn't that one out of date? Try clicking "add defaults" at the bottom of network > downloader components > manage parsers.
I want to host a public booru on my server, anyone got some advice on whether Danbooru or Shimmie2 is better? Sorry for off-topic, but this is the best place I could ask.
>>14610 >Shimmie2 Huh, this is new to me. Are there any examples of boorus that use this?
>>14609 when was the last time the default hydrus pixiv parser was updated?
>>14613 >>14608 ok I successfully imported this and it looks like it's just the parser without a gallery downloader, so this is quickly getting into DIY territory. is there one with a premade gallery downloader and stuff to save me the trouble? also since i'm new to pixiv, will i be able to scrape all the content using this file page parser as compared to the pixiv search API parser that comes with hydrus by default?
>>14614 NGL i know nothing about how downloaders work lol. I have no issues with downloading galleries or pages. Almost everything I get is automatic thru subscriptions to artists and they all work as expected. Sending individual pages works fine too. The only issue I have is that nothing works with the bookmarks pages. The following is like 2 minutes of work a week tops. I only use this for things I bookmark on mobile. Anything on the desktop is just click hydrus companion --> send this tab to hydrus. Works on artist gallery pages or individual posts. That covers like 95% of my pixiv stuff. Below catches the random mobile stuff. Bookmarked items don't work for anything but the first page. For bookmarks I use the Link Gopher extension (https://sites.google.com/site/linkgopher/) and have it extract all links on the bookmarks page with the term "art". This gets all the URLs for each item on the page. Paste that into the URL downloader, then go back to pixiv and move to the next page of bookmarks. Rinse, repeat. Takes like 5 seconds to grab an entire page of 50 bookmarks like this, though it's a bit manual. If you don't use pixiv's bookmarks (the little heart icon) then you don't have to deal with that at all. Aside from the bookmarks page not working right, hydrus interacts with pixiv exactly the same as it does for any other site for me. Is there any kind of way to see what versions of a downloader you have? Possible I've got something else from a random post here that's not on cuddlebear. Or a way to just export everything pixiv related so I can post it here? >>14613 assuming the stuff on cuddlebear is "official", then nothing has been updated for pixiv for 4 years. Sometime around January 17, 2024 someone posted the downloader I put in >>14608 on a thread here and I grabbed it. There may be some technical discussion if you wanna go back and try to find it. I just saw pixiv downloader, said "I use pixiv!" and grabbed it. 
All the network related stuff is way way over my head.
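The Link Gopher step above (pulling every artwork link off a saved bookmarks page) can be approximated with a few lines of Python. This assumes the standard `https://www.pixiv.net/en/artworks/<id>` URL shape; it's a rough stand-in, not a replacement for a proper downloader:

```python
import re

# Rough stand-in for the Link Gopher step: pull every pixiv artwork URL
# out of a saved bookmarks page's HTML. Assumes the standard
# https://www.pixiv.net/(en/)artworks/<id> URL shape.
ARTWORK_URL = re.compile(r'https://www\.pixiv\.net/(?:en/)?artworks/\d+')

def extract_artwork_urls(html):
    # dedupe while keeping first-seen order, ready for the URL downloader
    return list(dict.fromkeys(ARTWORK_URL.findall(html)))
```

The output list can be pasted straight into hydrus's URL downloader, same as the Link Gopher workflow.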
>>14615 Typos, ugh. I meant 2023, not 2024
>>14613 >>14614 >>14615 >>14616 Fuck it, I'll stop being lazy and just find it. Post where I got that parser is at https://8chan.moe/hydrus/res/18976.html#19058 Not anything technical there. Just an anon asking for it, and a kind anon just posted the parser.
Can the OR search dialog file domain reflect the file domain of the page it launched from? It always defaults to "my files" no matter what is selected and it's somewhat annoying having to change it every time.
>>14597 >Same, thanks, should be fixed tomorrow! It works now, but there's another issue: if you copy a timestamp and then paste it somewhere else, it won't actually change when you hit the apply button. You have to manually edit it and then it works, but it has to be different than the time you're pasting. If you want to change the value to the one you're pasting, you have to edit it twice (first change it to something else > apply, then change it back to what you want > apply).
>>14615 >assuming the stuff on cuddlebear is "official" It's not and plenty of stuff there is very out of date. >>14613 https://github.com/hydrusnetwork/hydrus/tree/master/static/default/parsers The tag search parser was updated 4 years ago, the file page parser was updated 8 months ago. >>14614 What's wrong with the default pixiv search? Are you not seeing NSFW files? Pixiv won't show nsfw files if you're not signed in. Hydrus companion can make it easy to send login cookies to hydrus. >>14617 One of the replies to this post is hydev saying "I'll add this to the defaults" so it's probably no different from the pixiv parser that comes with hydrus.
>>14586 ok, in that search there is only 1 file. now, when I deleted those files, hydrus still went on like normal, updating and getting new ones, till something happened and it wanted to see the old ones again. there should probably be more than just one file. ok, I did everything else, there was some weirdness, finding more missing files than it searched, but I did that and got back and no change. then I restarted in case, no change, so I hit go on the first one that's paused, and it came back with this v559, win32, frozen Exception An unusual error has occured during repository processing: a content update file (12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. I recommend you run _database->maintenance->clear/fix orphan file records_ too. Please then permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository. Traceback (most recent call last): File "hydrus\client\ClientFiles.py", line 1599, in GetFilePath def init( self, controller ): File "hydrus\client\ClientFiles.py", line 1044, in _LookForFilePath hydrus.core.HydrusExceptions.FileMissingException: File for 12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009 not found! During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hydrus\client\ClientServices.py", line 2195, in _SyncProcessUpdates content_update = HydrusSerialisable.CreateFromNetworkBytes( update_network_bytes ) File "hydrus\client\ClientFiles.py", line 1603, in GetFilePath self._pubbed_message_about_bad_file_record_delete = False hydrus.core.HydrusExceptions.FileMissingException: No file found at path I:\Hydrus\f12\12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009! 
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "hydrus\client\ClientServices.py", line 2201, in _SyncProcessUpdates
raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
Exception: An unusual error has occured during repository processing: a content update file (12a87231d24caa8a961cfc14795e2ccab255112be2c04e04ea33d38576860009) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. I recommend you run _database->maintenance->clear/fix orphan file records_ too. Please then permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.
>>14617 ty. is there anything wrong with the default pixiv parsers that come with hydrus? actually, it looks like hydev said he'd add that parser to the defaults for hydrus >>14621 thanks. what's the difference between the tag search parser and the file page parser? also idk if anything was wrong with the original since i haven't used it yet, it's just the guy i was talking to further up said he used the cuddlebear version, so i figured that was the most up-to-date.
>>14623 ok i'm using the default gallery downloader that comes bundled with hydrus called "pixiv tag search." i also imported cookies using the companion. i'm glad i checked out pixiv since it looks like the japanese have a lot of unique stuff. however, i'm noticing pixiv doesn't tag their stuff as well as the boorus do. anyone got any leet tips for blacklisting stuff like SFW, amateur doodles, and such? also i'm noticing hydrus doesn't scrape anything for artist queries even though i can search artists via the search bar on the site.
Is it possible to search deletion records? Apparently some bad urls got added at some point and now HC is highlighting things it shouldn't, which is annoying.
>>14596 Hey, thanks for reaching out. 1) Although the update files offer almost everything, they aren't 100%. There's a tiny extra bit of data you need which is still synced from the server (basically it is the list of these files). If we ever needed to wangle complete offline sync of a repo, I could figure out a technical solution here, but for now these update files and their import/export capability is mostly only useful for fixing weird problems. a) Yeah, some people don't want to wait for all the CPU-intensive processing, so that QuickSync pre-processed client became popular. I always recommend a 'natural' sync tbh--people who do quicksync or forced megafast processing in other ways tend to run into maintenance problems as certain maintenance timers don't have time to kick in. (I tried to fix this a bit last week, if you check the changelog for the analyze stuff, it has been a long time problem). I don't offer the files because the PTR sync already grabs them. That's the best way to import them to your client, and future iterations of the hydrus network protocol may well retroactively change the files, so they aren't super worth backing up. b) You can import them, but you'd need to do one single metadata sync to the server. Import the files to your client, set up some PTR credentials, then put it online and hit 'refresh account' once. The progress bars down below will populate and the client will see it already has all of the update files it needs to catch up to a week ago or whenever your export happened. Yeah don't make an HTA of the PTR I think. Too much data. c) You can delete them, but I think hydrus will just redownload them. They will be needed again if you ever run into a database problem and need to reprocess some data. 2) That thing is pretty experimental, I think it is hidden behind 'advanced mode', right? It changes which tag service the collections math works on. 
I did it with a user who has a very complicated namespace collection system, but we didn't go beyond this. I'll fix the checkbox thing, thanks. Just leave it on 'all known tags', which is the default, unless you want to go crazy.

3) The file domain filter does change which results you get, and the tag counts should be accurate, and particularly if a tag has 0 count in that file domain it isn't supposed to show up, but there are some hacks that ensure that (tags that match what you typed because of a sibling relationship) will show up despite having zero count. In your pic it is obviously all zelda stuff showing up. I had thought I'd fixed this to work in a less confusing way, but I guess it is still a bit of a mess. Normally these sibling tags get shunted to the bottom of a list, below results with actual count, unless you type enough that they are actually useful suggestions in some contexts. That said, if you really can add an unsiblinged 'test1' to a 'my files' file and then switch to 'trash' and see 'test1' in your results with a count, that is a problem. If you do it, what's the count of the result, (1) or (0) or blank? 'blank' should be correct.

4) Like the tag service collections thing, this was a one-off debug experiment really. I might work on it more as I do some other tag display and search infrastructure. Shortly:

'multiple media views' - the tags you see by default in that list
'display tags' - the files' tags with siblings and parents applied (no 'multiple media views' filtering, if you have tags set to hidden)
'storage tags' - the files' tags without siblings and parents applied, should be basically what you see in 'manage tags'

But again, if you are a new user, ignore this. I hide it behind advanced mode because it can be confusing.

5) Yeah I think I agree. This tech is mature enough that we should have some display options and favourite domains. I'll make a job, thanks.
>>14607 Great idea. I'm sure someone has mentioned it before, but I don't remember. I'll see what I can do.

>>14612 The Katawa Shoujo Mishimmie used to, now looks like a dead subdomain. :( http://shimmie.katawa-shoujo.com/ Afaik it turns up here and there, but it was never that popular.

>>14619 Thanks, I'll fix it!

>>14620 Thank you for the report, and sorry again! I'll fix it and let's hope I'm not being stupid anywhere else.

>>14622 This is a little concerning. It thought all those files were missing, I wonder why. In your second picture, where it basically says everything it found is missing/incorrect, did the 'review services' window immediately say that it had none of its update files, and did you see a popup appear within a few seconds/minutes as it redownloaded everything? As far as I know, if that file maintenance job sees missing files, it'll remove their records, causing the redownload. But why were they marked as missing/incorrect in the first place? Do you remember deleting them manually? Why, I ask myself, is your client losing updates, downloading updates, and then being unable to find them again? I am not sure. I suspect we either have files going missing or something is seriously screwed in the file storage so it doesn't know what it has, or doesn't know how to purge a bad record. This is a stupid thought, but do you have anything like an anti-virus that might be pulling files out of your client file storage? These update files are in your [install_dir/db/client_files] directory normally, they have no extension, just the hash as filename, and are gzipped json.

>>14625 When you say 'deletion records', do you mean deleted files? If so, turn on help->advanced mode and then in a normal search page, click the 'my files' button under the place you type search tags and then select 'multiple/deleted locations'.
Set the domain to 'deleted from my files' or whatever, and then the search page will work like any other, it'll just deliver a variety of 'virtual' results with weird/no thumbnails. You can edit these deleted files' metadata, so I reckon search up your known urls here and then manually delete them. Let me know if you need a cleverer solution here.
>>14624 Pixiv posts are limited to 10 tags. So don't expect much in the way of tags unless it's something in the PTR someone else has already tagged. The tradeoff is there is a stupid amount of content there that's not available anywhere else. And it's really, really good about recommending new and similar content/artists to you. >>14612 I was able to find a very small shimmie2 booru at https://booru.oke.moe/ after a quick google. there's surely more, but it's the first thing I found.
>>14624 Everything in pixiv is in japanese. Enter the tag you are looking for in English slowly, and you will see suggested japanese tags start showing. After that, you just have to start perusing what they are tied to.
>>14629 A few tags can get weird, like ギャグ, which is "gag" in Japanese. It seems to have a double meaning on pixiv. On one side, it means gag as in bondage, but on the other, it means comedy manga or doodle. So, you would have to make a blacklist that kills any files tagged with manga or doodle, if you just wanted the bondage files. Put the tag into pixiv and start looking at the files, and you'll see what I mean.
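The blacklist idea above amounts to a simple set intersection over a file's tags — a sketch, where the blacklist entries (漫画/落書き and their English forms) are illustrative guesses, not pixiv's canonical tags:

```python
# Sketch of the blacklist idea above: keep a ギャグ ("gag") file only when it
# isn't also tagged as comedy manga or doodle. Blacklist entries illustrative.
BLACKLIST = {"漫画", "落書き", "manga", "doodle"}

def passes_blacklist(tags):
    # a file survives if none of its tags hit the blacklist
    return not (set(tags) & BLACKLIST)

print(passes_blacklist(["ギャグ", "bondage"]))  # True
print(passes_blacklist(["ギャグ", "漫画"]))     # False
```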
I have been trying to use the premade downloader for Inkbunny. Current client is 559. It is stuck hitting the homepage and will not log in successfully. Advice?
>>14626 Thanks a ton for all your answers. You pretty much answered everything. 3) Oh you are right. I didn't think of tag counts in this case. I thought that the autocomplete suggestions should only show the tags that are actually in the file domain, just like the normal file search pane. The tags that are not there wouldn't even appear in the autocomplete. "test1" wouldn't appear there for example in the trash domain. But it seems you handle those two autocomplete locations (search pane & migrate tags) differently. So yes, in "migrate tags" it shows "test1" in trash but without a count -> blank. I'm completely fine with that. So i know that i could distinguish which tags are actually in the domain by checking if they have a count at all. Good to know! 4) I checked again and understand now, thanks. They might be very useful in future, so don't get rid of them ;) 5) If i may add two other ideas that came into my mind. a) it would be very cool if i could fast forward/backwards in videos with the mouse wheel when being on the timeline with the mouse cursor. In VLC media player for example, i configured the video to jump forward/backwards in 3 second steps. It is super convenient and it would be cool in hydrus too. If the seconds were configurable, even better! b) my default image viewer is irfanview right now and i like the option to rotate the picture in there with the R or L keys. sometimes you may just want to rotate the image. some buttons on the top bar in hydrus would also be fine if shortcuts are not possible. Maybe you'd like to consider those ideas. I bet the wishlist is huge :P Thanks a lot.
>>14624 >>14628 thanks, i've been slowly adding stuff to search for. there's just soooooooooooooooooooo much useless SFW, doodles, and other stuff getting scraped because the pixiv stuff is poorly tagged. even just filtering out all the SFW stuff would be a massive help. also are either of you able to do queries for artists using the default pixiv tag search that comes with hydrus? i've tried several artist queries now and they all return 0 results.
>>14633 looks like i had deleted the "pixiv artist lookup" gug without realizing it (disappointed though that you gotta look it up by an id number rather than the artist name--30423409232 doesn't exactly roll off the tongue) also it seems like adding R-18 to the query is the best way to find nsfw stuff (doesn't work with the artist lookup obviously though). now the question i'm asking myself is whether pixiv is worth scraping from given the increased effort level due to bad tagging. anyone have a workflow/thoughts on this?
>>14634 I would suggest just browsing around and bookmarking stuff you like. It will start suggesting things based on that. Pixiv tags do kinda suck, but there's some AI system behind the scenes that they use for recommendations and it's pretty damn spot on. Their recommendations clearly don't just rely on tags. Find a post you like, scroll down past the comments, and go peruse the recommended works. Bookmark anything you like. Follow artists you like. You can also go to Discovery and well, discover works based on what you've told pixiv you like with works viewed, bookmarks, and followed. Most of the popular tags have english translations, so searching in english is often possible, but using japanese is usually better. There has also been a steady influx of westerners using pixiv as an alternative to places like twitter and deviantart because of pixiv's general lack of content restrictions. As long as it abides by Japan's weird censorship laws, there's not a lot of banned content. Some of the most popular pixiv works end up on the various boorus. Check the booru source to look up the artist on pixiv. Follow that artist; in my experience it's very rare for all of an artist's pixiv work to end up on boorus. Pixiv is an absolute gold mine of content if you spend the effort. As far as tagging goes, the most popular things might already be in the PTR from getting tagged by some booru already. Hydrus grabs the post tags fine for whatever they may be worth lol. My workflow is to do a quick pass to identify artists and characters that were not properly tagged and the most basic tags for me. Things like colors, positions, fetishes, obvious locations (like "bedroom" or "beach"). Then I export everything into Waifu Diffusion and let it do the fine detailed tagging and re-import everything with the AI tag sidecars. Tags are sent to a separate "AI Tags" service to keep them separate from PTR and my private tags. I also add a tag "meta:ai tagged" for good measure.
This keeps things searchable but makes sure nothing accidentally bleeds over into the world of human tags. AI tagging isn't perfect yet, but seriously, it's like 85-90% there. With a weight threshold of 0.35 there are very few errors. It's so much faster to AI tag everything and then do a sanity check for anything wrong. Once I review the AI tags are all correct, I copy them to the PTR and add any tags that the AI missed.
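The 0.35 weight threshold mentioned above is just a cutoff applied to the tagger's (tag, confidence) pairs before the tags get written out — a minimal sketch, where the function and tag names are hypothetical, not part of any tagger's API:

```python
# Hypothetical sketch of the confidence threshold described above: the AI
# tagger emits (tag, score) pairs, and everything under the cutoff is
# dropped before tags are written to sidecars. Tag names are made up.
def filter_tags(scored_tags, threshold=0.35):
    """Keep only tags whose confidence meets the threshold, sorted."""
    return sorted(tag for tag, score in scored_tags.items() if score >= threshold)

scores = {"1girl": 0.98, "rope": 0.72, "beach": 0.41, "furry": 0.12}
print(filter_tags(scores))  # ['1girl', 'beach', 'rope']
```

Raising the threshold trades recall for precision: fewer wrong tags survive, but more correct low-confidence ones get dropped.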
If I want to start subscribing to pixiv and and kemono party, I need to give cookies to Hydrus companion. But the whole point of subs is that it's automatic. Do I need to go open pages regularly on sites requiring cookies in order for the subscriptions to work? Or is Hydrus Companion only really good for one time downloaders?
>>14637 Once you have hydrus companion set up and working with the Hydrus API, all you have to do is click Hydrus Companion --> Send cookies from this tab to hydrus. That's it. You may have to do that again occasionally but it's very rare. Like once every few months. If there's ever any issues the subscription log will let you know and you can easily resend cookies and retry errored downloads.
>>14638 Thanks. I wasn't sure exactly how long cookies would last. Now to make a few dozen more subs for every artist I can find.
I'm trying to download via url download a bunch of 8moe files, I'm using direct links, for example:
>https://8chan.moe/.media/8filename.jpg
But it seems that the site content warning is popping up for hydrus when it tries to access the urls. I tried importing my session cookies but it didn't work. Any ideas?
>>14640 I have Hydrus Companion now, but I have no idea how to make my login cookies apply to my downloaders/subscriptions so they can get the R-18 posts. I've both sent the gallery to Hydrus using Hydrus Companion as a one time downloader and made an artist subscription in Hydrus using the pixiv artist ID, after having "Successfully sent cookies to Hydrus" while logged into Pixiv using Hydrus Companion, and both the one time downloader and the subscription skip over R-18 files.
>>14635 >AI tagging isn't perfect yet, but seriously, it's like 85-90% there. With a weight threshold of 0.35 there are very few errors. It's so much faster to AI tag everything and then do a sanity check for anything wrong. Once I review the AI tags are all correct, I copy them to the PTR and add any tags that the AI missed. automation is where it's at, thanks for this recommendation. i won't be browsing anything manually via the website though, that's just so slow and inefficient, especially when there's an ocean of stuff out there that i'll never even see as is. >>14637 as a fellow noob, what i'm gonna do is bookmark all of the porn sites to a bookmarks folder, and then open them up once a week to let hydrus companion automatically send the updated cookies to hydrus (there's an experimental feature in the hydrus companion options menu to send cookies automatically) >>14642 make sure you delete the login script and cookies for those sites first. also make sure you've set your region to japan on pixiv, and also make sure you're actually able to view the r-18 stuff on the site itself. you can append R-18 to your pixiv tag search downloader to make sure you're only getting nsfw stuff
>>14643 >make sure you delete the login script and cookies for those sites first. I'm running around in the dark here and haven't really even messed with cookies or html in something near a decade. I can find the cookies, but I have no idea which need to be deleted and which need to stay as I don't know which is the "logged in" cookie. It may be the one that's set to expire with my current browser session? But there's one each of that on both www.pixiv.net and imp.pixiv.net. I have no idea where or how to delete the login script. I went ahead and set my region to Japan, which I thought I already did, but it was blank when I checked. However the missed files are those I could see without changing my region setting in the first place.
>>14633 There really is no "safe" tag in Pixiv. Everything from "safe for children" to "downright pornographic" is all mixed in together. Even pics from a single artist can really span the range of safe levels. You just have to dig. Most of the stuff you're wanting is tagged in japanese. If you find something you like, search for the japanese tags in the tag list, and then look for more japanese tags in the search results. That's how I've been finding stuff. The English tags don't lead to much.
>>14645 Pixiv has a TON of stuff. Out of the 2.7 million files or so I have downloaded, about 1.6 million are from Pixiv. Sankaku has a lot too, but it is really hard to get, and you have to know how to do it all manually with Hydrus, which makes it kind of semi-automatic. Rule34.xxx is surprisingly good. Xbooru pretty much has nothing. Gelbooru is OK.
>>14631 Same guy here, but I think I found the issue. I just don't know how to resolve it. Outstanding network job displays an incorrect https. I don't know how to modify it to be correct. Advice?
>>14635 do you have any thoughts on using hydrus-dd for autotagging? also i haven't touched any diffusion stuff in a long ass time, so do you just use waifu diffusion inside a1111 to autotag stuff? i can't seem to recall which tab in a1111 allows you to add tags to pre-existing images also how exactly do you import stuff that waifu diffusion tagged back into hydrus?
(155.90 KB 1548x859 screenshot.png)

>>14648 I am using this: https://github.com/picobyte/stable-diffusion-webui-wd14-tagger Adds a new tab to Automatic1111. Can tag either single images or an entire directory and export to .txt sidecars. Pic related. You'll need to grab the models as well as the above extension, but it has links to them. If you want the e621 model but don't wanna go thru stupid discord, the actual link to the model is at https://pixeldrain.com/u/iNMyyi2w I've had fantastic results with the WD14 moat tagger v2 and Z3D-E621-Convnext. The workflow is pretty straightforward. Put stuff into a folder. Point the tagger to that directory. Make sure "Save to tags files" is checked and let it work its magic. Then just import that folder into Hydrus and have it use the sidecar .txt files. Highly recommend adding a new tag service in hydrus though to keep them separate. See services-->manage services-->add-->local tag service.
I have images with filename and account tags, I want to retag them so the account prefixes the filename for example, >filename:2024-02-19_6 >twitter:sachiko would become >filename:sachiko-2024-02-19_6 is this possible? am I missing something obvious? >>14603 >column widths fantastic, thank you. that's been bothering me for a long time but I never got around to asking about it
>>14649 Too bad it doesn't add ratings (general/sensitive/questionable/explicit) to the txt files. Though there's an option that shits out a json of all the tags with weights, which also includes the ratings. I tried making a script some months ago that imports the tags from the txt files with the ratings in the json straight into hydrus if your filenames are hashes. Not sure if it still works though, but I don't think there were any major changes to the extension to break anything. https://files.catbox.moe/luvqjr.py There's also a standalone that doesn't need the webui and supports hydrus, but I think you need to install the cuda kit for it to work with gpu. https://github.com/Garbevoir/wd-e621-hydrus-tagger
>>14652 Oh yeah, I forgot to mention that importing through the script using hashes as filenames is a lot faster than going through normal hydrus import, since it doesn't need to calculate hashes for every file.
>>14650 >Export your files with "[twitter]-[filename]" filename pattern >Delete old filename tags >Reimport files back with new filenames as tags Requires some extra free space and import can take a long time if you have a lot of files, but it's the only way I can think of without using the api. There's of course also the api, but I'm too dumb for that.
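If you do go the API route instead of export/reimport, the core of it is just string surgery on the two namespaced tags — a sketch of that step only, with the function name made up; fetching the old tags and sending the new one (e.g. over the hydrus Client API) is omitted:

```python
# Hypothetical sketch of the renaming step from the question above: build
# the new filename tag by prefixing it with the account namespace's value.
# Only the string logic is shown; reading/writing tags is left out.
def rebuild_filename_tag(tags, account_ns="twitter"):
    filename = next(t.split(":", 1)[1] for t in tags if t.startswith("filename:"))
    account = next(t.split(":", 1)[1] for t in tags if t.startswith(account_ns + ":"))
    return f"filename:{account}-{filename}"

print(rebuild_filename_tag(["filename:2024-02-19_6", "twitter:sachiko"]))
# filename:sachiko-2024-02-19_6
```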
>>14652 Yeah, I'm using a slightly older version than Garbevoir's that got taken down off of Github, but it works great. Ai tagging is definitely the future of tagging.
>>14649 Is there a way to get it to focus on a specific tag like human/humanoid? I only want stuff with humans in the image so I can easily filter and get rid of furry stuff.
I had a good week. I fixed some bugs and made some shortcuts a bit nicer to deal with, and there are a bunch of macOS improvements. The release should be as normal tomorrow.
>>14662 Thank you for all your work. Hydrus has changed my life. Maybe one day I'll be able to fully trust AI tagging and stop manually tagging thousands of files for years on end.
>>14661 There's a filter where you can exclude tags, not sure if there's one that adds only specified tags. Just have it tag everything and filter it in hydrus.
>>14661 What >>14665 said. AI tag it, shove it in Hydrus. Search for some common furry tags like "furry", and "anthro*". Then manually delete the stuff you don't want either from thumbnails or the archive/delete filter. Or I suppose you could just shotgun it and just blindly delete anything that gets detected as furry by the AI. Depends on your level of fault tolerance.
>>14667 Just did the "delete all furry crap". I discovered the AI tags it as "no humans". I had 63000 "no humans" garbage files filling my collection. Everything from pictures of houses and fruit to furries, all of it not tagged as such by the boorus. I love the AI tagger. It's not 100%, but it's close to it.
>>14665 >not sure if there's one that adds only specified tags. Whitelist.
>>14669 What this does is override the blacklist for any tag in it. I'm sure it can do other things as well. You just have to look at the logic of using it combined with the blacklist.
https://www.youtube.com/watch?v=XXlzWhBvYwg windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v563/Hydrus.Network.563.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v563/Hydrus.Network.563.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v563/Hydrus.Network.563.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v563/Hydrus.Network.563.-.Linux.-.Executable.tar.zst I had a good week. There's a mix of small fixes and improvements. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html macOS A user has improved the macOS release in many ways, mostly in brushing up the App to normal macOS standards. The menubar should now plug into native global bar, with some standardised command labels. The program icon is better, some colours should be improved, dialog menus are no longer a crazy hack by default, and the system tray icon is turned on. In a side thing, I also added Cmd+W to close pretty much any dialog or non-main-gui window in macOS, just like hitting Escape. I think we can expect more work here, so let us know how you get on! all misc In more shortcut news, I've added Alt+Home/End/Left/Right to the 'thumbnails' defaults to perform the new thumbnail rearranging. Existing users will also get these! (assuming it doesn't conflict with something you already have set) Also, the shortcut system now, by default, converts the 'numpad' variants of non-number key presses (think 'numpad Home' or 'numpad Return') into the non-numpad 'normal' ones. You now only need to map 'Left' or 'Delete' once when setting a shortcut, and, no matter how crazy your keyboard's internal mapping is, it should just work. If you do need to differentiate between the numpad and normal variants of these keys, you can turn this new behaviour off under options->shortcuts. 
The menu on a 'file log' button now lets you delete all the remaining 'unknown' items or set them all to skipped. I fixed another damn problem with copy/pasting timestamps into the manage timestamps dialog. When you paste a timestamp, it should 'stick' better now! If you have had problems with mpv sometimes going silent on 'every other video' or had your windows being rescued from 'off screen' even though they were supposed to appear just on some monitor, check the changelog for some special new BUGFIX options.

next week

I want to put some work into system predicates, specifically starting with 'system:duration'. The aim is to eventually get rid of all the +/-15% '~=' stuff and replace it with actual customisable values, along with better behind the scenes storage of that data. We'll see how it goes!
(4.24 KB 512x116 sankaku idol file page.png)

I noticed that my idolcomplex downloader wasn't working, I updated it and it seems to be alright now.
>Could not find a file or post URL to download This is the note I see on URLs in a TBIB gallery downloader > file log > show file log. There's no 403 code or anything detailed, it just says it's ignored. Any way to fix this?
>>14677 so I followed the URL's, and it redirects me to a gallery page rather than a post, implying that the post does not exist. the question then becomes why is TBIB gallery downloader fetching URL's for posts that don't exist?
>>14678 fyi, this isn't happening for all TBIB, just 90%+ of them.
(50.88 KB 413x161 23-00:48:03.png)

>>14677 >>14678 >>14679 I think there's a bug with TBIB, if you click the first image result under the search query "blush" (picrelated), it should link to: https://tbib.org/index.php?page=post&s=view&id=13851516 but instead it redirects me to: https://tbib.org/index.php?page=post&s=list&tags=all for some reason. I'm not going to click 500 images to get a proper analysis, but a bunch seem to have issues like that.
>>14680 oh wow nice find. i guess i'll have to stop using tbib then
are wildcard tags supposed to have an implicit * at the beginning? for example, searching "hair*" returns "hairband", as expected, but also "red hair" or "filename:16*" returns "filename:2022-01-16_04-33-28" it's not really a problem, you can use ^ to denote the beginning of the tag. I've just never noticed before >>14654 thanks, it was a bit painful but I managed to do it through the API
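The behaviour described above — an implicit leading * that the ^ anchor suppresses — can be mimicked with fnmatch. This is an illustration only, not hydrus's actual matching code; the namespace handling in particular is a guess based on the "filename:16*" example:

```python
# Illustration (not hydrus's actual code) of the wildcard behaviour
# described: a plain "hair*" search acts as if it had an implicit leading *,
# applied to the subtag when the pattern is namespaced.
import fnmatch

def hydrus_like_match(pattern, tag):
    if ":" in pattern:
        if ":" not in tag:
            return False
        ns, sub = pattern.split(":", 1)
        t_ns, t_sub = tag.split(":", 1)
        # wrap the subtag pattern in an implicit leading wildcard
        return ns == t_ns and fnmatch.fnmatch(t_sub, "*" + sub)
    return fnmatch.fnmatch(tag, "*" + pattern)

print(hydrus_like_match("hair*", "hairband"))   # True
print(hydrus_like_match("hair*", "red hair"))   # True
print(hydrus_like_match("filename:16*", "filename:2022-01-16_04-33-28"))  # True
```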
(223.93 KB 2467x768 2024-02-22-22-45-784_%pn.png)

(383.78 KB 2464x768 2024-02-22-22-46-386_%pn.png)

Does someone know how to force the Hydrus window to appear on the main monitor when you are using two monitors? I recently downloaded Hydrus (Hydrus.Network.563.-.Windows.-.Installer.exe) and after the installation the main window always appears on my second monitor (left) and not the main one (right). Even when I move the Hydrus window I still have this problem, like the one you can see in pics related, where the rest of the dialogue text and windows still appear on the second monitor instead of the main one.
>>14683 It can be finicky. I have this issue myself. I can't recall the exact process, but it involved,
>Changing the main monitor in Wangblows settings
>Unmaximizing the window
>Dragging it to the preferred monitor without maximizing
>Dragging it to the top of the monitor to auto maximize
>Repeated testing by minimizing and re-maximizing the window to see if it took
Furthermore, due to some QT issues I can't get popup windows to appear on the correct monitor either, my lower one, and they would always snap back to the top regardless of saved location settings. But I could rig it up so that 98% of the tags management window, save for the title bar, is on my bottom monitor. I have to open the tags management window, inch it down, close it to save its position, and repeat. Moving it too far at once will cause it to reset. However it does still move of its own accord slowly to the left, which is frustrating, but I can move it back to the right all at once without issue. Hydev says I can resolve these issues with some experimental QT stuff I could fiddle with, but it's usable right now and I'd rather not break it further. I'm fairly certain this issue will be resolved when I get a real computer instead of a craptop so that my monitors are identical.
>>14680 I'm getting this on some subscriptions and in browser too the last couple days. I guess tbib must have broken something pretty good. Seems to happen randomly, but seems most common with any wildcard searches. Sometimes you get the same 3-4 results over and over and over for dozens of pages before it fixes itself and starts displaying proper pages.
>>14652 what do you mean by "importing through the script"?
>>14689 Nevermind, ignore this. What alternatives are there to using the a1111 tagger extension? The lack of rating tags for images is driving me nuts.
>>14689 >what do you mean by "importing through the script"? I mean using the script in the catbox link I posted. Basically you're supposed to use it like this: >in webui settings, under "Tagger", find the "Store images in database" option and turn it on This should add image paths in the db.json that gets generated when you run the tagger, which the script uses for assigning ratings to the correct file. I think the json should be generated by default, but in case it doesn't, try looking for an option that turns it on. >drag and drop your files from hydrus into a folder, this should make the files have hashes as filenames >run the tagger on this folder, txt files will be created for each file along with a db.json that has a list of all tags and scores and file paths >open command line in this folder and run the script from it (don't forget to use the -k and -s arguments for hydrus api key and tag service name) The script will then look in both the txt files and the json and sends the tags and ratings from them straight to hydrus using the filenames as a sort of an "address" to the correct file in the hydrus database. Doing it this way is also a lot faster than importing normally, because when you import normally, hydrus will calculate hashes for every file so it knows, if the file is already in the db or not, which can take a while. You're basically skipping that part. >>14690 The link at the bottom of that post is pretty much the only alternative I know of.
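For anyone writing their own variant of such a script, the skeleton is roughly: parse the comma-separated sidecar, take the sha256 hash from the filename, and POST to the Client API. A hedged sketch — the payload follows the documented /add_tags/add_tags call, but client versions differ on service_names_to_tags vs service_keys_to_tags, so treat the exact shape as an assumption and check your version's API docs:

```python
# Hedged sketch of the sidecar-to-hydrus flow described above: parse a
# WD14-style sidecar (one line of comma-separated tags) and push the tags
# over the Client API, addressing the file by its hash-as-filename.
import json
import urllib.request

def parse_sidecar(text):
    """Split a comma-separated sidecar line into clean tags."""
    return [t.strip() for t in text.split(",") if t.strip()]

def send_tags(api_url, api_key, sha256_hash, tags, service="AI Tags"):
    # payload shape is an assumption; newer clients use service_keys_to_tags
    body = json.dumps({
        "hash": sha256_hash,
        "service_names_to_tags": {service: tags},
    }).encode()
    req = urllib.request.Request(
        api_url + "/add_tags/add_tags",
        data=body,
        headers={
            "Hydrus-Client-API-Access-Key": api_key,
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)  # fires the actual request

print(parse_sidecar("1girl, solo, red hair, "))  # ['1girl', 'solo', 'red hair']
```

As the poster says, addressing files by hash skips hydrus's own hashing pass on import, which is where the speedup comes from.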
>>14690 Do you use the PTR at all? If so, it has many tags that have ratings paired (e.g. anus --> rating:explicit), and you can migrate the PTR mappings/parents/siblings to wherever you're sending your AI tags. This way you can get at least some of the ratings added automatically. See Tags --> Migrate Tags. Set the source as the PTR and the destination as your AI tag service. The PTR has a ton of parent/sibling associations, many of which pair with rating:explicit, which can help reduce the workload there. Rating:safe and rating:questionable are pretty subjective, so I don't think there's many useful siblings for those. If you don't use the PTR, or if the PTR doesn't have enough siblings for you, you can always make your own.
>>14632 >it would be very cool if i could fast forward/backwards in videos with the mouse wheel when being on the timeline with the mouse cursor Amazing idea! >my default image viewer is irfanview right now and i like the option to rotate the picture in there with the R or L keys. sometimes you may just want to rotate the image. some buttons on the top bar in hydrus would also be fine if shortcuts are not possible. Yeah, I think I am about ready to do this sort of thing. The media viewer code has been crazy in eight different ways until very recently, but there are definitely times you just want to do this for whatever reason. >I bet the wishlist is huge Yeah, it used to be several thousand items, and then I stopped counting, ha ha ha. I am always drowning in work, always accepting more to-do items than I can clear in a week, so I have generally fallen into a marathon of 'just get some decent things done every week', and when things aren't so crazy, I can plan out a multi-week or multi-month project. Although it is a problem, it is a nice one to have. Btw, if I say I'll do something that you care about and it doesn't happen in a few releases, please do remind me. I'm culling what I can achieve every tuesday night, and many things just slip to the eternal back burner. >>14641 iirc 8chan needs referrer URLs in order to work. As long as there is a referral URL, and that URL is 8chan.moe (just as if you had clicked in your browser), you skip the click-gate. It stops direct linking of spicy boards I think. My internal network engine handles this automatically these days with the particular sort of 'API' downloader we use in the 8chan watcher, but if you are downloading URLs direct, I think you might need to hack it. Maybe you can change the 'url class' under network->downloader components->url classes to always send a fixed referral URL? Not sure if it'll work.
If you can, I'd say just try to watch the respective thread URLs in the normal watcher, I think it'll work. >>14663 I'm glad you like my program! Changed my life too, lol. I'm really looking forward to the next ten years. I think teaching a model to recognise 'hair_ribbon' is only the tip of the iceberg here, and, if I can, I am determined to do my part in propping up the open source/unshackled side of AI.
>>14677 >>14678 >>14679 >>14680 >>14681 >>14688 Thank you for this report. I can't promise anything, but since I roll TBIB into the hydrus defaults, I'll see if I can fix this. This stuff happens every now and then. Sometimes it's stuff like the booru inserting an advert in place of a normal thumb, or a 'buy premium to see this' swap-in, that sort of thing. Just the parser sperging out over a change in html, usually. I'll check it out. >>14682 It basically follows these rules: https://www.sqlite.org/fts3.html but there is additional bullshit I've added. It essentially has 'start of string or a whitespace' at the start, so 'hair*' will match your 'red hair' but not 'redhair', with the additional caveat that in the search code I collapse all weird punctuation like '[' or '-' into whitespace, so that's how your '16*' is matching the date. I never knew you could prefix with '^' to say 'start of string only'! I should think of a nice 'hydrus' way to integrate that. >>14683 Sorry, we have had a bunch of multi-monitor issues recently. Newer versions of Qt are having trouble doing screen coordinate calculations for some users, and I am hoping to have it magically fix when I next update Qt in the build (although I know the newer version still breaks some stuff for some people). I added a BUGFIX setting in v563, under options->gui, called 'Disable off-screen window rescue'. If this helps you at all, please give it a go. If this continues to be a super pain for you, running from source lets you try different Qt versions, so you may be able to discover a better version for you. Check this document out if you want to think about it: https://hydrusnetwork.github.io/hydrus/running_from_source.html
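To picture the wildcard rule described above, here's a toy model -- illustration only, not hydrus's actual search code: punctuation collapses to whitespace, then a trailing-* pattern matches at the start of any resulting word.

```python
# toy model of 'start of string or whitespace' wildcard matching,
# with punctuation collapsed to whitespace first
import re

def collapse(tag):
    # replace runs of anything that isn't a letter or digit with one space
    return re.sub(r"[^a-z0-9]+", " ", tag.lower()).strip()

def wildcard_matches(pattern, tag):
    """Does a single-token trailing-* pattern match this tag?"""
    prefix = collapse(pattern.rstrip("*"))
    return any(word.startswith(prefix) for word in collapse(tag).split())
```

So 'hair*' hits 'red hair' but not 'redhair', and '16*' hits a date tag like '2024-01-16' because the hyphens become spaces and '16' becomes its own word.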
>>14643 >make sure you delete the login script and cookies for those sites first. I'm confused. Doesn't Hydrus need my login script and cookies for subscriptions on account-locked content to work?
When a sub gives me notifications on a number files it just added, is there any setting I can use to make it not count files I already have? I keep getting notifications that are essentially about how Hydrus found new urls for hashes I already have.
>>14695 all the login script is doing is dealing with cookies. you're bypassing the need for the login script by importing the cookies yourself via the companion (the latter is more reliable)
>>14692 i'm a very casual nooby. is there any risk of my breaking/overwriting my local tags and stuff by connecting to the PTR?
>>14694 regarding the TBIB breakage, it sounds like it's on their actual site itself, so IDK if you can correct things on the hydrus end or not.
>>14698 If you run the 'add the PTR' command it makes a new tag repository, no worries there. The main issue is that the PTR is huge, it's like 60gb and takes a week or two to fully catch up since it has to download and process in order.
>>14698 No. by default there is only a "My Tags" service. If/when you add the Public Tag Repository (PTR) it will create a second tag service, unsurprisingly called "Public Tag Repository". It's completely separate from My Tags. You can find out more at https://hydrusnetwork.github.io/hydrus/PTR.html As >>14700 says, it does take a pretty good chunk of space. And only do it if you have hydrus running on an SSD. If you have no interest in the PTR you can manually make parent/sibling associations on the default tag service. It'll be a little bit of work, but could save you a lot of time. Parent tags in particular. Tag parents let you automatically add a particular tag every time another tag is added. (anus adds rating:explicit automatically, as in my prior example.) Tag siblings let you replace a bad tag with a better tag (lotr gets replaced entirely by "series:lord of the rings"). Migrating siblings/parents from PTR to another tag service just copies them over for use in another service. For specifics see: 1) https://hydrusnetwork.github.io/hydrus/advanced_siblings.html 2) https://hydrusnetwork.github.io/hydrus/advanced_parents.html
>>14697 Companion says it successfully got the cookies I sent while logged into Pixiv, but my pixiv sub that I made and ran immediately after still misses everything R-18.
>>14702 a few things: 1. delete the login script as discussed 2. reset the login cookies under 'manage logins' 3. on the pixiv site, change your account region to japan 4. make sure you're using the pixiv tag downloader for tags and the pixiv artist id downloader for artists. these are the downloaders that come with hydrus by default. 5. make sure your query is good. the pixiv search will autosuggest the japanese version of your tags as well, so maybe search for that instead since the results can be better 6. give it time. there's A LOT of sfw & otherwise garbage images depending on what your query is.
>>14703 >1. delete the login script as discussed >2. reset the login cookies under 'manage logins' I have no idea where to find these, or if I'm supposed to be looking for them in Hydrus, Hydrus Companion, or my browser's developer mode. I've looked for a bit through all three. I'm using Ungoogled Chromium, which is one of the suggested browsers for Hydrus Companion. >3. on the pixiv site, change your account region to japan >4. make sure you're using the pixiv tag downloader for tags and the pixiv artist id downloader for artists. these are the downloaders that come with hydrus by default. >5. make sure your query is good. the pixiv search will autosuggest the japanese version of your tags as well, so maybe search for that instead since the results can be better >6. give it time. there's A LOT of sfw & otherwise garbage images depending on what your query is. All of this is fine. I'm using the artist ID downloader, and I'm successfully getting only the SFW art by any given artist. My issue is purely not understanding what step between >Send cookies from this site to Hydrus and >Make a subscription like usual must be taken and how. I've solved the issue. I was in incognito mode out of long ingrained habit, and that affects the cookies. Sending cookies from a regular window and then making a subscription functions correctly and gets R-18 content, contentious tags and all.
I might be super autistic, or just biased towards other boorus, but whenever I add a new namespace and it comes in-between "series:" and "character:", it bothers me to no end. I wish we could have a custom namespace order for tag display. I know you can do a dirty hack by prefixing namespaces with a number to get a custom sort order, but it makes them look very grotesque. And hiding them is like brushing a turd under a carpet. Also, hiding namespaces of the same colour like "page:" and "chapter:" that 99% of the time contain only numbers, makes viewing tags with them very confusing (pic related). So hiding/showing individual namespaces would be a cool feature. >inb4 just change the namespace colour
>>14705 >So hiding/showing individual namespaces would be a cool feature. You can already do that under tags > manage tag display and search.
>>14706 Sorry, I didn't mean hiding the entire namespace with subtags, just showing some namespaces when this box is unticked. So when everything else is hidden, namespaces like "page:" would still be visible. I know it sounds stupid, but seeing only numbers (instead of e.g. "page:5") with all namespaces hidden is not very informative.
My subscriptions are getting a little unwieldy. I have 120+ now and I still have many more to add. Is there any way I can group them up? My subs are all artist subs on various sites for each artist. Just being able to group by artist I could put 2-5 subs per group.
>>14627 they were deleted manually years ago, at least a number of them were. I believe at the time there was no way to find them in program, or I had no idea how to find them in program. essentially it came down to needing to buy time until the new hdd, because I was completely out of space and could not parse enough fast enough to make it to the finish line. nah, it's not my storage that messed up or antivirus, it was my dumb ass, desperate to make it to the new hdd. like I said here >>14494 this is entirely a problem of my own making, and I have 0 idea if anyone else was dumb/desperate enough to do what I did, and no failsafe was in place for it... I do have an idea that potentially could work. I could run a bare client and redownload all the files that were deleted, then merge them into the main client's folders. think that would work, or would there be a way to reset this in the main client?
Having a minor issue with Hydrus companion. If I send a whole pixiv page to Hydrus with the send current tab button, it works just fine, but if I send just one file through right clicking, it's ignored with a 403 note.
>>14710 You can't direct link pixiv images.
Would it be reasonable to have the "copy small bmp of image for quick source lookups" work for videos/animations as well? Such as by grabbing the same frame used for the thumbnail, or the thumbnail itself?
Is the deviantart parser broken for anyone else? Tried importing cookies from my account to hydrus, still doesn't work
>>14712 What source lookup site or tool works even with videos? I'm still eagerly awaiting partial duplicate processing functionality for videos that uses the first frame of a video while ignoring any videos that have a solid color first frame.
>>14713 Did you import the cookies from an incognito window? That breaks it, as I've recently found.
https://nekohouse.su/ just popped up and seems to use the exact same structure as kemono party, made to be more JP art centric and go down less. You think kemono party downloaders will work for it out of the box since it appears to be the same site infrastructure, or will new downloaders need to be made?
>>14713 just use gallery-dl and set up an import folder with sidecars
I had a good week. A bunch of system predicates now support customisable +/- ranges, both by absolute and percentage values. The release should be as normal tomorrow. >>14694 >>14699 Thanks, yeah, I looked just now and it seems the site is 302 redirecting to the main index from a variety of normal Post URLs. It happens in the browser too, for instance with this: https://tbib.org/index.php?page=post&s=view&id=13954530 I've seen this sort of thing before in some boorus to block guest user access to spicy content. I don't know if TBIB hosts this sort of thing, but maybe it is similar. Maybe you need to make an account and click an 'allow x content' flag somewhere. Normally, when this sort of thing happens, they won't show you the thumbnails, though--they'll either obscure them or hide them from results completely. Maybe the site is just sperging out.
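If anyone wants to check the redirect behaviour themselves, a stdlib-only probe like this will show it. Sketch only: the 's=view' heuristic for 'this is still a real post page' is my assumption about Gelbooru-style URL shapes.

```python
# fetch a post URL without following redirects, then classify the result
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # returning None makes urllib raise instead of following

def is_bounced_to_index(status, location):
    """True if a 3xx points anywhere other than a post-view page."""
    return status in (301, 302) and bool(location) and "s=view" not in location

def probe(url):
    """Return (status, Location header) for one request, no redirects."""
    opener = urllib.request.build_opener(NoRedirect())
    try:
        resp = opener.open(url, timeout=15)
        return resp.status, resp.headers.get("Location")
    except urllib.error.HTTPError as e:  # unfollowed 3xx lands here
        return e.code, e.headers.get("Location")
```

Run probe() on the tbib post URL above: a (302, index url) pair confirms the bounce the parser is choking on.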
>>14720 i'm logged into TBIB and have an empty blacklist, but it's still redirecting me to the main index page for the link you posted. they must've broke something
I heard someone mention kemono blocks scrapers and that you need Hydrus Companion to send the cookies to Hydrus to circumvent this. How will I know it's working though? Does kemono immediately recognize my subscription and block it without cookies, or does it take a certain amount of fast activity to trigger it? How do my cookies prevent these triggers?
>>14722 >kemono blocks scrapers I haven't heard this but I'd assume that if they do block scrapers, it's because they already have a public API. just use that. works for me with no cookies needed
https://www.youtube.com/watch?v=Oo0o84-TJTU windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v564/Hydrus.Network.564.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v564/Hydrus.Network.564.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v564/Hydrus.Network.564.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v564/Hydrus.Network.564.-.Linux.-.Executable.tar.zst I had a good week, several system predicates have better range-based searching. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights The system predicates for width, height, num_notes, num_words, num_urls, num_frames, duration, and framerate now support two different kinds of approximate equals (≈): absolute (±x), and percentage (±x%). Previously, the ≈ secretly just did ±15% in all cases, but now you set which kind they use and how far they go. Also, any 'system:framerate' predicate that was '=' will now be converted to ±5%, which is what it was secretly doing before, and any 'system:duration' '=' predicate will also be converted to ±5%, which is what it really should have been doing before. 'system:duration' also allows hours/minutes input, for longer videos. This predicate overhaul was an important cleanup job, replacing a ton of hacky ancient code with something that is easier to update and maintain. I've collapsed all these preds down to a lot of shared UI and logic, so let me know if there are any display/search quirks, but once we have it nailed down, I hope to replicate this work for the more complicated system predicates. I reworked what 'Space' does in the media viewer by default. I am updating existing users too, so you'll probably get a little popup about it when you update. Essentially, if you are still on the default shortcuts, Space will now only send 'pause/play media'.
It no longer does 'yes' on the archive/delete filter or 'this is better' on the duplicate filter. If you want to go back to how it was, sorry for the trouble--hit up file->shortcuts to set it back. Thanks to a user, space also does a new 'Quick Look' for macOS users on thumbnails. Try it out! If you are a 'running from source' macOS user, make sure to rebuild your venv this week, or it won't work! next week I'd like to figure out incremental tagging on the manage tags dialog, so you can select 20 files and tag them page:7 through to page:26 in one step. Let's see how it goes.
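For clarity, the two ≈ modes in this release boil down to something like this (my own illustration of the maths, not hydrus code):

```python
# resolve an "approximately equal" predicate into a min/max search range;
# percent=True is the ±x% mode, otherwise it's the absolute ±x mode
def approx_range(value, delta, percent=False):
    d = value * delta / 100 if percent else delta
    return (value - d, value + d)
```

So a 'system:framerate = 30' predicate converted to ±5% would search 28.5 to 31.5 fps, and an absolute ±10 on width 1920 would search 1910 to 1930.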
>>14726 >I'd like to figure out incremental tagging on the manage tags dialog, so you can select 20 files and tag them page:7 through to page:26 in one step. Let's see how it goes. This would help me with some filename tags. For ordering and exporting purposes, I keep filename tags on ordered sets of images, but with subscriptions, lots of ordered sets could come in out of order with no filenames. If I could select a group of files I had just manually ordered and then tag that "filename:big-ol-tiddies 0" through "filename:big-ol-tiddies 324", that would be great.
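The tag-generation half of incremental tagging is the easy part, which is presumably why the work is in the dialog UI. A sketch of what the feature implies, with a hypothetical helper name:

```python
# generate an ordered run of tags from a template containing {n},
# e.g. "page:{n}" or "filename:big-ol-tiddies {n}"
def incremental_tags(template, start, count):
    return [template.format(n=start + i) for i in range(count)]
```

So selecting 20 files and asking for page:7 onward would produce page:7 through page:26, one tag per file in selection order.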
Would it be feasible to add a function for editing pictures without following the export, edit, import, set this file as better, and then delete the original, process? A sort of one button solution that tells Hydrus "I just edited this file, attach all the tags to the new hash and trash and/or delete the original."?
>>14726 Thanks for the helpful "I shuffled around mappings" warning at first launch. Thanks for not being a "fuck you, read the whole changelog" dev and caring about UX, anon! Even paid, proprietary software usually gets that wrong.
can anyone save me some trial and error and tell me what the ideal bandwidth settings are for lolibooru? i keep getting pauses due to "serious domain errors" and i have it set to 1 request every 3 seconds.
>>14732 I have one request a second and 10gb a day and rarely if ever have problems. Could it be an overall network issue on your end? Say your network card goes to sleep, could that trip Hydrus' "domain error" flag?
>>14733 i'm not sure, i don't have a dedicated network card. i use a VPN and stream stuff, so surely I would lose internet connection if my "network card" or the equivalent was going to sleep. i don't appear to have this issue on other domains either, just lolibooru. lolibooru doesn't seem to block my VPN either, so i don't think its that
>>14671 >If you have had problems with mpv sometimes Hey hey, anon from >>>/hydrus/19467 here - I was still reproducing the issue sometimes, without being able to figure out anything more about it that I could share with you, so I didn't message again about it. It just happened again with my client fully up to date, so I was able to verify that the debug -> gui -> isolate mpv thing fixes my issue without restarting the client. Thanks devanon!
Hydrus crashes when I drag and drop stuff into firefox. Has anyone else experienced this? Arch/Wayland.
It's probably a dumb error on my part, but I can't for the life of me get hydrus-web/hydrus.app to work properly. Every other thing using the client api works just fine so I don't really get it. I just get "0: Unknown Error" for every request. Any of you anons run into & fix this issue? This happens regardless of local or remote, so 127.0.0.1, and I can access the client api page from my browser, CORS support is enabled too.
>>14696 Yeah, I call it the 'presentation options', which is inside 'file import options'. It just governs what a downloader broadcasts when asked to show what it has. You can set the default file import options under options->importing. I like to set my subs (and downloaders tbh) to 'new files'/'must be in inbox', and then I'm just looking at legit new stuff. >>14705 Yeah I totally agree. Unhelpfully ordered namespaces drive me nuts too. I always want namespace x, y, z at the top. I'm going to do a little tag sorting work this week, maybe it is time to finally pull the trigger here. Hiding/showing some namespaces is something to consider. Maybe I can figure out a hacky system that doesn't eat up too much CPU to actually calculate. >>14708 For technical reasons, it is much easier to group subscriptions by the shared downloader. Hydrus wants to have subscriptions like 'safebooru artists', where you'd have 60 artists all on safebooru. You might then have 'danbooru artists', with a similar artist list, all on danbooru. Nested GUGs, if you know what that is, allow you to play around and merge stuff, but it can get complicated and the juice generally isn't worth the squeeze imo. The tools you want are the 'merge/separate/deduplicate/lowercase' buttons in the edit subscriptions dialog. I recommend you gather all your subs on site x and y and then click 'merge'. It should collapse them all down, and they'll then all share the same options, much easier to manage. If you have done some complicated NGUG or custom downloader gubbins to somehow gather 'one artist, many sites' subscriptions, rather than 'one site, many queries', I haven't got a good solution other than moving to the simpler technical format. Let me know how you get on! >>14709 Ah, thanks, I see your problem better. It is odd that the client is not fixing itself in your case. Normally, if update files are simply missing, it'll run its scans and figure out what it needs to redownload.
I'll do some tests here, maybe I broke this scan recently. Yes, you can absolutely set up a fresh client extract on your desktop, add the PTR, and then export/import those update files. I think and hope it'll fix you right up. Another option is to get the quicksync download, here: https://breadthread.gay/ which I assume only has the update files in its file store, but it is 30GB, so maybe more trouble than just setting up a new one yourself, since you just want to download the files, not save time on processing.
>>14712 >>14714 I know some apps that do video de-dupe, but yeah they all rip a more complicated 'similar files' hash than just one bitmap frame (most basically just grab a phash for every nth frame). I'm going to do the same when I implement this myself. If you are feeling clever, you might want to explore this, which does external video de-dupe via the Client API: https://github.com/hydrusvideodeduplicator/hydrus-video-deduplicator >>14716 I just did a little test myself, copying the URL template and trying https://nekohouse.su/api/v1/fantia/user/292282/post/2528700 https://nekohouse.su/api/v1/twitter/user/shrimp3528_mmd/post/1760846417289937043 https://nekohouse.su/api/v1/fanbox/user/66119767/post/7392693 instead of https://kemono.su/api/v1/patreon/user/3506306/post/52615502 but it gives me a 404. As you say, the site looks like it is using the exact same engine, so maybe they just haven't turned the API on yet. Or maybe the kemono/coomer guys are a separate outfit and hacked their own API together? Or maybe instead of 'fantia'/'twitter'/'fanbox', they use different site codes for their API. >>14727 Hell yeah. >>14728 Yeah, in future I'd like this. It'll work when we eventually add 'convert this ye olde shit to AV1 mp4'/'Jpeg XL' tech, too. I need to write better metadata-merging tech first. The shit in the duplicates system is so unwieldy atm, I hate it. Once that is better at doing automated stuff, and is generally nicer to work with, I can see us going in this direction. >>14730 Great. I'm slowly learning this myself, so let me know when I do get it wrong--it is often surprising, being on the inside, what most affects people on the outside.
>>14735 Great, thanks for letting me know. This fix is a bullshit hacky thing, so let me know if it starts giving you other trouble. We are basically just opening up new copies of mpv every time that command is set. There's another guy who is going to put time into mpv, maybe figure out non-crashing mpv window destruction, so fingers crossed we'll have nicer solutions here in future. >>14739 I've been working with a guy whose client freezes when he tries to drag and drop page tabs, but a crash when dropping things onto a browser is new. When you say crash, do you mean the program actually halts instantly and disappears, or do you mean it freezes/hangs indefinitely? Unfortunately, when I initiate a drag and drop, I kind of give up a bunch of my ownership over what's going on. I basically bundle some data up and attach it to a 'clipboard' on the mouse for the OS, and the OS handles the rest, so it can be really tricky to debug this stuff. I assume you are ok dragging and dropping files to other programs, and between hydrus page tabs? On Windows for a long time, discord has been a pain in the ass, because DnD on Windows has some unusual security/permission stuff going on in the background. A 'move' DnD will work, but a 'copy' one won't. I wonder if something similar is going on with you, although normally the result would be 'fail' rather than a full crash. That said, I'm sorry to say, I added a section to the 'getting started' help this past week basically saying 'Wayland does not work well with hydrus (Python Qt) right now, sorry' since we have so many problems with embedding mpv windows and stuff. Some things just seem to be broken, and our only hope is waiting for a new Qt/Wayland version to fix them. Still, I'd like to fix you if I can. What happens if you try to drag and drop a page tab onto firefox? It should do nothing. Can you drag and drop page tabs to reorganise them inside hydrus, or does that cause problems too?
Can you drag and drop files from firefox onto hydrus (it'll probably give you a bullshit bitmap in your temp dir, so don't try to import it, but does it work at least?)? >>14740 The guy who makes that is active on the discord, if you would like one-on-one help from the expert. I know some people have had trouble setting it up, and the solution tends to be variants of 'oh, my OS has some bullshit firewall', so going to 127.0.0.1 is the typical first step to reduce the possible number of walls in the way, but you've already done that. If you put https://127.0.0.1:(your client api port)/ into your browser, do you get a naked ascii lady? (you might have to click through an ssl warning page) The 'open client api base url' button in review services should do this for you. If you get the lady, then you have visibility, and I'd have to guess the hydrus-web setup is some bullshit like http vs https or something.
Is there a way to connect a Hydrus client to a database hosted on a different PC? I run my Hydrus off an Ubuntu server and I already have remote access to it through hydrus.app, but when I'm at home I'd like to be able to use the normal Hydrus client to access it. The github docs seem to say it's not possible, (the "don't run a database from a network location" thing) but I want a second opinion before writing it off. I don't even know if Hydrus can do multiple clients accessing the database at the same time. I mainly want this so I can do some much needed de-duping without having to hook up a monitor to the Ubuntu machine or have to do it all in a VNC window.
hydrus is great, but the sankaku downloading sucks. what's the best alternative for downloading from sankaku, or does it all suffer from the same issue that hydrus does of needing to constantly refresh headers every 48 hours?
Is there any way to easily copy hydrus settings from one database to a new one? I have it set up as I like it but it's a lot of options to change on a new database... talking about options and shortcuts mainly
>>14741 >'presentation options', which is inside 'file import options' Danke. >I like to set my subs (and downloaders tbh) to 'new files'/'must be in inbox', and then I'm just looking at legit new stuff. This gets me halfway, but that still gives me notifications every time it finds something I already have, but have yet to process and archive. Would it be feasible to add an option for notification only if a "new" file is completely new, i.e. "not already in db"? I don't see an option that fits this.
Forgive me if I'm just retarded and overlooked this in the doc, but am I able to essentially "reverse search" my existing, unorganized image collection in say gelbooru and automatically tag them all? I see tag options for different sites but will the program actually pull the tags from the copy of the image on the booru, or will I just need to manually tag them? Would prefer this as opposed to the AI tagger I see people mentioning.
>>14748 see >>14385 you could also look into installing the PTR, since most files from boorus will be in there.
Looks like Nitter is kill. There goes my twitter artist scraping. https://github.com/zedeus/nitter/issues/1175
>>14746 I would like to know this too, as I would like to move some files to a separate one purpose db. The only thing I thought of would be just copying the db folder without the files and then clean up the missing files and tags and such from inside, but that might not be very clean.
Hydev, >>14495 again, I'm having the issue once again, this time I definitely don't have an x11 client limit causing the problems. I have seen the mpv media viewer floating occasionally, usually it's when hydrus has frozen (I need to de-clutter my session), the mpv player still plays but I can't interact with it. I haven't seen mpv fully detach from the UI like you mentioned in >>14551. Reading the changelog for 563, I don't think the mpv debug option applies to my issue, but I tried it and it didn't fix the issue unfortunately.
>set gallery download with a few thousand images >forgot I didn't empty trash, eventually get full disk I/O errors >Hydrus handles it by pausing and tells me how to unpause in the error message Perfect, this probably saved me a whole lot of headaches. I've had programs that become unusable due to disk space errors before so this is refreshing.
>>14750 selfhost with an account
>>14755 I'm retarded, so no idea how to do that.
>>14741 tell me if you see something broken. ideally I can do it all from in my own client, but if you don't find anything I'll set up a temp client to download from the ptr. as for that file, I'm a little iffy, because at some point hydrus wanted to recheck all prior repository files, so if it wants to do that again it's just going to break in the same way. it may solve the problem for now, but it's kicking the can down the road.
>>14755 that could get you banned and also has no privacy advantages
God damn, kemono has damn near everything. I should have been browsing this and making Hydrus subscriptions with it before looking at boorus.
I had a simple week, mostly fixing a bunch of bugs. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=lgpD6OsHCKU windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v565a/Hydrus.Network.565a.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v565a/Hydrus.Network.565a.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v565a/Hydrus.Network.565a.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v565a/Hydrus.Network.565a.-.Linux.-.Executable.tar.zst I had a simple week. Lots of small changes today. The update step may take a couple of minutes. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights You are going to get a couple yes/no dialogs on update this week talking about deleting some mis-parsed URLs. If you do not manually store weird data in your 'known urls' store, just click yes. If you have lots of URLs, the work will take a couple of minutes. options->sort/collect now has four different default tag sort widgets! You can set the default tag sort for search pages, the media viewer, and the manage tags dialogs launched off them. There's a new 'media' shortcut action 'copy file known urls', that copies all the known urls of your current media selection. Sidecars set to import to file notes now have an optional 'forced name' field, so if you have a .txt file with only note text, no name, you can now force it. Some of the UI is less jank here too. The tag filter UI also got a little polish. There's less logic jank, better labels and tooltips, and you can now copy listed namespace entries to the clipboard and get something you can paste back in elsewhere. next week I did not get to the manage tags dialog incremental tagging thing this week, so I'll try again.
>>14762 can't wait for the incremental tagging, that'd be gigasick and very helpful perchance
>>14745 Each gallery page body contains all the data required to generate permanent static post and file urls without resolving the session-dependent dynamic urls. I just do not know enough js to make a userscript that would fix the links. Also, there's still throttling that prevents mass downloading, so you can't subscribe anyway.
>>14504
>I solved this by just updating the gallery parser to provide old-style urls.
Can anyone share it, please, or better yet give some pointers on how to do it?
There is a pull request open to add read support for Jpeg-XL (JXL) files to Pillow. This is the image library that Hydrus uses, right? If this is added, would you be able to add JXL support to Hydrus, or do you need to wait for write support too?
anyone have issues with hydrus finding far fewer images for a tag on pixiv than are available on the site itself? I tried restarting the search from the last URL before it stopped finding new links, but to no avail.
(30.99 KB 629x348 md5.PNG)

>>14765 >Can anyone share it please or better yet give some pointers how to do it. The md5 is in the src attribute of the img tag of a thumbnail on the gallery page. You just have to parse it out of the url. Then you can avoid the new alphanumeric urls like https://chan.sankakucomplex.com/en/posts/60rvLD5GZM3 and instead make the older md5 urls like https://chan.sankakucomplex.com/en/posts/b427a29e4efbae0559946ff4ebf433b0 or https://chan.sankakucomplex.com/posts/b427a29e4efbae0559946ff4ebf433b0 or even https://chan.sankakucomplex.com/post/show/b427a29e4efbae0559946ff4ebf433b0 As far as I can tell there's no way of getting the super old number ids anymore.
>>14743 I just updated to Plasma 6 and am having similar issues. Does Hydrus work well under Xwayland?
>>14767 fixed it. retrying from the most recent gallery URL didn't find any new results for some reason, so I had to redo the entire gallery downloader to get it to find the rest of the files.
Was in the middle of updating a few versions at a time and thought I'd run an integrity check; it's been going for 20 hours now. Is this normal, or did it crap out? The DB is around 80gb. The UI stopped responding, but it's still reading the db constantly, with a single 'rowid out of order' error in the console.
>>14744 I know some guys who do this, talking to a database over a network connection, even on different platforms. I don't recommend it, but it _can_ work. A) don't run the same database from multiple clients at once B) be careful, make backups, have a reliable NAS connection

My dream here is to write a proper interface using the Client API so you can just dial into another client within a client. I'll be able to support full file domain searching using a file domain that just happens to be on another client, and we'll live stream metadata and thumbnails and files as needed. We'll see how it actually goes IRL.

>>14746 >>14751 Not really, I'm sorry to say. If you are feeling brave, you can poke around client.db with SQLite Studio and extract the dump_type = 6 json object in 'json_dumps'. That's the core 'newer options' structure, which holds about 98% of your options. Replace the one in the newer database, and I _think_ things will be mostly fine. You MUST make a backup beforehand if you try this, because it could well fuck up. Make sure it survives two options dialog open/close cycles and a restart and another two open/close cycles.

>>14747 Interesting, I thought that filter setting was supposed to do exactly this. 'all files' does successful and already in db, but 'new' is supposed to just do 'successful'. I wonder if I am screwing something up here when it combines with 'inbox'. I'll check it out!

>>14752 Damn. I was talking to an X11 guy earlier today who had an issue where one mpv window worked while the other was blank, and the new debug thing worked. Thanks for letting me know. Although I'm sure my code can be cleaned up, I think most of the answer here is to wait for newer Qt/X11/mpv/whatever library versions that don't clash so much. I'm still hoping to get a 'future' release with a newer Qt version out in the next month.

I'm sorry that I forget if you are running from source or not, but that may be the next thing to try here. Running from source means one less layer between the code and your OS, which can often smooth out .so file conflicts and such. https://hydrusnetwork.github.io/hydrus/running_from_source.html Let me know if you try it out, and if so, how it goes!

>>14757 Yeah, sorry for this. I had a good look and tried replicating various 'this update file suddenly went missing, what does the client now do?' situations, and I could not reproduce your problem. I think I cleaned up a little of the code in v565, but I don't think I fixed anything serious. I can't claim this is anything appropriate for you, but I've been working with a guy with a similar problem recently and we discovered his hard drive was completely fucked and had damaged his database. I think your problem is my code being dumb somewhere, but you might like to give 'install_dir/db/help my db is broke.txt' a skim to see if anything stands out. Maybe run an integrity check on client.db, which will be nice and fast.
>>14766 Hell yeah, I'm waiting and hoping for PIL to just one day support it. A guy I know was playing around with the Jpeg-XL plugin a while ago, he said it wasn't ready. If it is now ready, then yeah, what'll happen is:
- PIL merges the pull
- I see/someone tells me
- I add that new PIL version to the advanced test requirements.txt and advanced source users play around with it. I probably only have to write about ten lines of code to get it actually working in hydrus.
- I make the next 'future release' of hydrus use it, people test it
- I update everyone to use it

PIL is a great and very stable library (used to be python only, not sure if it still is), so I imagine this process, when it happens, will be super easy. We are mostly just waiting for them to do their thing.

After that, we'll need some applications/workflows that actually produce Jpeg-XL, not to mention in general the whole internet community has to keep putting pressure on the big guys, Chrome and Firefox, to support it. They do not like the format, even though it is pretty much our best option, so we need to actually use it for stuff and start yelling about it, and we might just get a good end here. Think, if you are old enough, to when 4chan added webm to /gif/. If the nerds start using Jpeg-XL instead of wanged out HEIC or whatever, then the browsers will be forced to support it for real.

>>14769 I have been hearing a variety of bad reports recently, I'm sorry to say. Whatever combination of Qt, mpv, python, X11/Wayland, and my shit code is causing a bunch of issues. Is XWayland a Wayland-like that runs under X11? I had understood X11 was generally well supported but Wayland was not, but now I'm getting reports all over the place. Maybe something important just updated somewhere, and it broke my mpv embedding? I recommend trying to run from source, as in my post above, but I can't promise much. Please let me know how you continue to get on, and if we discover an angle I can attack, I'll try and clean how I initialise or whatever is actually going wrong here.
(56.92 KB 449x464 10-17:46:38.png)

>>14772
>I'm sorry that I forget if you are running from source or not..
That's fine, I don't think I've mentioned it. I do run from source already, but I haven't updated the venv in a while. These are the libs; I'll try to remember to include them in the future.
>>14768 Yeah, I know where to get the hashes; I don't know how to make hydrus use them. Hash urls give a "temporary redirect" http response code pointing to the new alphanumeric session urls. Is it even possible for hydrus to save the first url and add it to the image metadata together with the new alphanumeric url?
Regarding the dreaded "site logo" placeholder image files that some sites push instead of throwing http errors, I have an idea / feature request: manually designate an already-downloaded file as an "error".
1) Deassociate urls, tags etc, maybe all metadata entirely
2) Ask the user whether to copy and/or redownload all previously known urls for that file
A more complex method that should work on sites with a static placeholder:
3) Store the file hash in a separate category (similar to deleted files?)
4) If the same file is ever downloaded again, treat it as a failed download
Would that be a sensible and feasible approach?
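For what it's worth, steps 3 and 4 of the proposal are easy to sketch. This is only an illustration of the suggested approach, not anything hydrus actually does; the function names and the in-memory set are made up for the example:

```python
import hashlib

# Hypothetical in-memory store of known "site logo" placeholder hashes;
# in a real client this would live in a database table, much like the
# deleted-files record the post mentions.
placeholder_hashes: set[str] = set()

def mark_as_placeholder(file_bytes: bytes) -> str:
    """Record a file's hash so future downloads of it can be treated as failures."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    placeholder_hashes.add(digest)
    return digest

def is_placeholder(file_bytes: bytes) -> bool:
    """True if this exact file was previously flagged as a placeholder."""
    return hashlib.sha256(file_bytes).hexdigest() in placeholder_hashes
```

A downloader would call is_placeholder() on freshly fetched bytes and convert a hit into a 'failed' import status instead of a success.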
(19.22 KB 917x379 12345.PNG)

>>14776 Works on my machine.
>>14778 Oh that's great. Now I just need to figure out how parsers work...
Hello, just a reminder about that small feature I really wanted where you could exclude a set of namespaces from being counted when you add a "number of tags" search-term. You mentioned a few months ago that you could hack up something quick for me in the next few weeks or so. I don't need the UI to look good, just a way to do it would be helpful for my tag maintenance. I mentioned when you asked about a temporary UI that you could just reuse the namespace text box that's already in that window, use hyphens before a namespace to say "exclude instead of include", and separate them with semicolons. And if a namespace contains a semicolon, use backslashes to escape it. It's hacky, but something like that would probably be easy for you to implement UI-wise, and it would work well enough for me.
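For reference, the proposed mini-syntax (semicolon-separated namespaces, leading hyphen to exclude, backslash-escaped semicolons) is simple to parse. This is a sketch of that suggested format only; hydrus has no such parser today, and all names here are made up:

```python
import re

def parse_namespace_list(text: str) -> tuple[set[str], set[str]]:
    """Split e.g. 'creator;-page' into (included, excluded) namespace sets.

    Semicolons separate entries; a leading hyphen means "exclude";
    a backslash escapes a literal semicolon inside a namespace.
    """
    # split on ';' only when it is not preceded by a backslash
    parts = re.split(r'(?<!\\);', text)
    included: set[str] = set()
    excluded: set[str] = set()
    for part in parts:
        part = part.replace('\\;', ';').strip()
        if not part:
            continue
        if part.startswith('-'):
            excluded.add(part[1:])
        else:
            included.add(part)
    return included, excluded
```

One edge case the hacky syntax leaves open: a namespace that genuinely starts with a hyphen would need its own escape.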
>>14772 For the bad url search, how many is "normal"? I got some 7900 or so and I'm a little concerned about it so I cancelled it. Is there a way to check the urls by eye?
>>14781 well for comparison at least, I got about 260 bad urls in my 260,000 file database
>>14782 I see. If my math is right, that's about 0.1%. My db is much larger, at about 3.1 million files, so that gives me about 0.25% with bad urls. Thanks for the data point.
(14.01 KB 191x108 12-21:32:13.png)

(57.38 KB 406x150 12-21:32:25.png)

(54.50 KB 408x154 12-21:34:53.png)

I'm getting some weirdness with Japanese text here, it's not every picture, and it seems to be something with the title namespace. title:skeb〰 & title:skeb⛓️💙 as copied from hydrus.
https://www.pixiv.net/en/artworks/116728128
https://www.pixiv.net/en/artworks/115026506

(12.14 KB 496x188 select.PNG)

>>14779 Here are mine. Here's what each of them means:
alphanumeric urls: https://chan.sankakucomplex.com/posts/60rvLD5GZM3
en alphanumeric urls: https://chan.sankakucomplex.com/en/posts/60rvLD5GZM3
md5 urls: https://chan.sankakucomplex.com/posts/b427a29e4efbae0559946ff4ebf433b0
en md5 urls: https://chan.sankakucomplex.com/en/posts/b427a29e4efbae0559946ff4ebf433b0
legacy md5 urls: https://chan.sankakucomplex.com/post/show/b427a29e4efbae0559946ff4ebf433b0
Use whichever one your subscriptions already recognize, so they won't think everything is a new url. Pick which kind of url you want under network > downloader components > manage url class links, and then sankaku chan gallery page.
I had a good week. The 'incremental tagger' system is working, allowing you to tag page:3, page:4, page:5 ... page:17 and similar to a selection of files, and .docx files are now importable. The release should be as normal tomorrow.
>>14786 >incremental tagger Can you append the number to some other text?
(30.61 KB 228x186 13-16:50:10.png)

Feature idea: Let me set the outline of a file based on a tag/namespace such that rating:explicit has a red outline, rating:questionable has a blue outline. if namespace support was added you could have an artist be purple or something, obviously you'd be able to customize the colors. It would probably need a priority list to determine what tags override others. Alternatively, allow more customization on the little text display. My main idea is supporting shortening tags, eg rating:safe becomes r:s or something. Pic related isn't from my main Hydrus install, but on my main I have the rating show up as r:s,r:q, or r:e. Which I did by making rating:s my ratings on my local tag repo, but it sometimes still shows rating:safe if my local tags and the PTR aren't in agreement on the content rating. Also maybe support adding text on the bottom left side?
https://www.youtube.com/watch?v=PlvK2pabBqI

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v566/Hydrus.Network.566.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v566/Hydrus.Network.566.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v566/Hydrus.Network.566.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v566/Hydrus.Network.566.-.Linux.-.Executable.tar.zst

I had a good week. The long-awaited incremental tagger is ready, and the program supports some more document types. You will get a yes/no on update, but it isn't a big deal.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

incremental tagger

When you open the manage tags dialog on several thumbnails, it now has a '±' button that lets you tag the files 'page:1', 'page:2', 'page:3' and so on. You can set the namespace (or no namespace), the starting point (so you can start at 'page:18' if you need to), the 'step' (so you can count by +2, or even -1 to decrement), and even a prefix/suffix for the number if you need to decorate with 'page:x (wip)' or something.

I'm quite happy with how this worked out. There's some live text that gives you a preview of what's about to happen, so the best way to get to grips with it is to play with it. Just click and poke around, and let me know how you get on. The namespace will be remembered between opens, and if the first file in your selection has a number tag for that namespace, it'll set the 'start' position to that value. If you are setting page tags to a bunch of chapters, or gaps in a larger body, a bit of prep/overlap may help things here.

We've wanted this for years and years, and while I'm not expecting the program to handle paged content beautifully now, this is a decent step forward.

docx and friends

The document types .docx, .xlsx, and .pptx, which are the newer Microsoft Office 'Open' formats, are now recognised by the client. These are secretly zips, so there's a chance you have some in your client already. On update, you'll be asked if you want to scan your client's existing zips to see if they were really one of these. You probably don't, so it isn't a big deal, but if you think you do, hit yes and they should appear.

next week

Cloudflare are rolling out some annoying new cache-optimising tech that is causing file dupes, so I need to work on URL handling so we can patch the most-affected sites.

>>14787 Yes, prefix and suffix! Let me know how it works for you.
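The incremental tagger's options (namespace, start, step, prefix/suffix) boil down to generating an arithmetic sequence of decorated subtags. Here is a rough sketch of that logic, not hydrus's actual implementation:

```python
def incremental_tags(count, namespace='page', start=1, step=1, prefix='', suffix=''):
    """Generate one tag per file, counting from `start` by `step`,
    with optional text wrapped around the number."""
    tags = []
    for i in range(count):
        number = start + i * step
        subtag = f'{prefix}{number}{suffix}'
        # an empty namespace yields an unnamespaced tag
        tags.append(f'{namespace}:{subtag}' if namespace else subtag)
    return tags
```

A step of -1 gives the decrementing behaviour described above, e.g. 'page:5', 'page:4', 'page:3' for a three-file selection starting at 5.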
(30.80 KB 360x360 pony clapping.gif)

>>14790 >incremental tagger Awesome. Thanks.
>>14270 I really don't get the Parser Formulae tutorial... Is there a more step-by-step one available?
>>14773 >Whatever combination of Qt, mpv, python, X11/Wayland, and my shit code is causing a bunch of issues. At this point it might be worth it to have a workaround option to treat video files as "open externally" only like Flash player files as it's better than having these dead mpv windows spawning every time. It might be worth someone's time to make a minimal python-mpv program that has the same bug to throw at whoever's fault it is upstream.
(209.41 KB 1253x683 openexternally.png)

>>14793 You mean like this? >options -> media -> video
>>14790 >incremental tagger Fucken schweet.
Just got a
>Hydrus grabbed over 100 files from a sub
Either there's a fuck ton of new files, or there's been a url change. If there was a url change, all these files would have the same hash as previous files and not need to go through duplicate processing, right?
>>14796 Yeah, but you will still have to download all of them to calculate the hash, unless the source page has file hashes somewhere and the downloader checks for them.
Is it a sorting issue with Hydrus or Hydrus Companion that causes files to be imported out of order when I use "send all tabs to the right to hydrus"? >>14797 Thanks. Seemed like that made the most sense.
thanks for adding the incremental tagger feature, been wanting that for a while! question, though: i usually sort my thumbnails from latest to earliest in the viewer, so a 'tag these files in reverse order' checkbox in the incremental tag dialog would be super useful. (some shortcuts for the dialog/action would be nice too!) also, any rough ETA on the parser rework/possible gallery-dl integration? i like to download a lot of art from twitter and the like, and seeing as nitter recently broke and gallery-dl/hydownloader are a pain to use compared to hydrus, it would be nice to have a built-in solution for that
>>14799 >i usually sort my thumbnails from latest to earliest in the viewer, so a 'tag these files in reverse order' checkbox in the incremental tag dialog would be super useful. (some shortcuts for the dialog/action would be nice too!) Couldn't you just reverse the sort from newest to oldest with two clicks before incrementally tagging? I find it's the most common sorting change I regularly make.
>>14762
>You are going to get a couple yes/no dialogs on update this week talking about deleting some mis-parsed URLs. If you do not manually store weird data in your 'known urls' store, just click yes. If you have lots of URLs, the work will take a couple of minutes.
Can I check which files those were? It found 15 files and I picked no, because I would want to check them manually myself. What would the search be like?
>>14801 I think I got it. I simply did ".http" in regex url search.
I was looking to set this up to import various collections, doujins, etc. Is there a way to save page orders within cbz files, booru pools, numeric file names within folders, etc? Are there options to search by names of collections, doujins, etc? I see options for scraping tags from boorus or even the public repository so even losing a doujin name isn't as big of a deal, but I don't want to re-order everything from scratch.
>>14803 You can use filenames as tags during import, and you can prefix them with a namespace. So, for example, if your files are named like 01.png, 02.png etc., you can turn that into page:01, page:02... on import. There's also a counter that will tag your files in the order they are imported by a specified increment, which may be useful for files that don't have numbers as filenames, but you would probably have to import these folder by folder, or maybe it applies to a selection in the ui. You can also import folder names as tags, either by using the same premade system as for filenames, which lets you use the first 3 and last 3 folder names, or you can use regex to format the entire path (including filename) into anything you want.
>>14804 Perfect. My current general folder structure for collections is >artist_name/comic_name/page_number.jpg So I can regex the path to import those as the following? >artist:artist_name >title:comic_name >page:page_number Or whatever the respective tags are?
>>14805 Yeah. If your folder structure keeps this exact pattern everywhere, then you don't even need to use regex, you could simply use the premade fields like in picrel. By the way, use "creator" instead of "artist", that seems to be the standard in Hydrus. Of course, regex can help you do extra shit, like ignoring leading zeroes for pages, if you don't want them and so on. Regex and the incremental counter are in the advanced tab.
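To make the regex route concrete: assuming the artist_name/comic_name/page_number.jpg layout from the posts above, a path-to-tags transform looks roughly like this. Hydrus does this internally through its own regex import fields; the function here is just an illustration of the mapping:

```python
import re

# Assumed layout from the posts above: creator/title/pagenumber.ext.
# On a longer path, search() will latch onto the last three segments.
PATH_RE = re.compile(r'(?P<creator>[^/]+)/(?P<title>[^/]+)/(?P<page>\d+)\.\w+$')

def tags_from_path(path: str) -> list[str]:
    """Turn 'some_artist/some_comic/007.jpg' into namespaced tags,
    stripping leading zeroes from the page number via int()."""
    m = PATH_RE.search(path)
    if m is None:
        return []
    return [f'creator:{m.group("creator")}',
            f'title:{m.group("title")}',
            f'page:{int(m.group("page"))}']
```
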
https://hydrusnetwork.github.io/hydrus/server.html https://hydrusnetwork.github.io/hydrus/youDontWantTheServer.html I'm currently using Hydrus on my desktop, but want to migrate everything to a headless NAS on my same home network. I only need to view media from my phone, but from my desktop I'd like to import files and modify tags without saving a full database locally. Does the server instance support that, or do I just keep using the client instance from the NAS and remotely connect to that?
>>14774 Unfortunately, these libraries all look great! So, we can't easily blame this on 'update your venv m8'. Well, let's hope an upcoming version of Qt or whatever smooths some of this out.

>>14777 Yeah, these files are tricky. I quite like the idea of having a special table that just lists, 'oh, this file is a login placeholder' or whatever, although what to do next is not so simple--I'm probably bumping it up to human eyeballs no matter what. That said, an important push I want to get to fairly soon in the downloader system is proper 'retry later' tech, for individual file imports and domains in general. If a site is overloaded, a download queue should detect that and try again later. If we had that sort of tech, we'd have more controls here and could maybe offer a nicer lever to the human that said 'try again in eight hours' or 'once you have logged in' or whatever. I think I'll have to keep this in mind.

>>14780 Thank you for the reminder!

>>14781 >>14782 >>14783 Yep, these seem fine. I got something like 60k bad URLs on 3.5m total, and I deleted them.

>>14784 Thank you, I'll see what is going on! Looks like zalgo stuff, probably some crazy unicode characters messing up the coordinate calculations.

>>14789 Hell yeah, I super want this. If you search previous /hydrus/ threads for, I think, 'Metadata Conditional', you'll see me talking about my general ideas here. I want an object that eats a file, examines its metadata, and spits out True/False, and then I can tie that into a dozen places all over the program, such as 'make the border red when X'. Then I won't have to hardcode all this logic, just the metadata conditional object and the priority queue of colour tests.
>>14792 Sorry, that's as detailed as things get, I think! I can answer questions here if you like, or you can email me, or hit me up on discord so we can talk live on a Saturday. Or, if you'd like, you can just write a little about what you found difficult/confusing with the language in the help document, and I'll see if I can make things clearer. Also, if you can say more about your background with web tech like HTML/JSON and whether you are at all ESL, that helps.

>>14799 >>14800 I won't go more complicated than this, since I don't want the new tagger to get too many bells and whistles, but sure, I will add a checkbox to say 'apply from last to first'. I can see how this would be annoying if you are by default set to that sort of file order. I was going to say 'if you feel clever you can set the step to -1', but then you'd have to fiddle around figuring out what your last number, the new starting position, is, and boring math shit is what computer code is for, so I'll do it. Let me know how it works.

>>14801 >>14802 Yeah, the regex I use in the update is: http\S+\s+http
Since you only have 15, you can delete those manually, no worries.

>>14803 >>14804 >>14805 WARNING FROM HYDEV: Although hydrus has 'page' tech, and I even did this incremental tagger thing last week, I don't really like how hydrus handles paged content. I've never been good at handling it; hydrus is optimised strongly for single media. So: please do play around with this tech on a couple of comics, but don't jump in with both feet, importing a hundred comics and committing to hydrus 100%. As a new user particularly, you might find it a complete fucking mess, and I broadly recommend just staying with ComicRack or whatever, which will have nice bookmarks and all the stuff you want for actually consuming a comic.

>>14807 Server does not support that. You want to set up a client on the NAS and then VNC or Client API into it. This will not be a beautiful way to run the client, so don't marry yourself to it before you try it. There's a Docker release of hydrus here: https://github.com/hydrusnetwork/hydrus/pkgs/container/hydrus if it helps, but I know almost nothing about Docker, so I can't say anything too clever about it. Let me know how you get on!
>>14789 >Alternatively, allow more customization on the little text display. My main idea is supporting shortening tags, eg rating:safe becomes r:s or something. Couldn't you theoretically set these short tags as parents to the longer tags and hide them from multi/single media view? Hidden tags will still show up on thumbnails I think.
Please add an "Unsaved changes, are you sure you want to cancel?" dialog when clicking cancel on the manage url classes / manage parsers / etc. dialogs... I just modified a ton of url classes and then accidentally clicked cancel, and now I have to do it all over again...
(4.21 KB 512x125 twitter file urls.png)

I noticed that some of my files from boorus had source twitter urls that weren't original-size urls. So I made some file url classes for twitter image media. They cover 3 styles of image media urls:
>normal
https://pbs.twimg.com/media/Er6DNJ0XYAEhdoY?format=jpg&name=orig
>alt
https://pbs.twimg.com/media/Er6DNJ0XYAEhdoY.jpg?name=orig
>old style
https://pbs.twimg.com/media/Er6DNJ0XYAEhdoY.jpg:orig
There are 3 url classes for when it's an "orig" url (original size) and 3 url classes that cover any non-orig urls that may be resizes. There are a lot of different non-orig urls. I've seen small, medium, large, 360x360, 900x900, and 4096x4096. Also, I noticed that the non-orig urls actually aren't always lower resolution. For example, https://pbs.twimg.com/media/Er6DNJ0XYAEhdoY?format=jpg&name=orig and https://pbs.twimg.com/media/Er6DNJ0XYAEhdoY?format=jpg&name=4096x4096 and https://pbs.twimg.com/media/Er6DNJ0XYAEhdoY?format=jpg&name=large are all the same file. It depends on the particular file. After I went through all my images with resized urls and redownloaded them as orig urls, only 15 out of 187 were actually higher resolution.
>>14762 >>on rule34.xxx and probably some other places, when the file has multiple source urls, the gelbooru-style parsers were pulling the urls in the format [ A, B, C, 'A B C' ], adding this weird extra string concatenation that is obviously invalid. I fixed the parsers so it won't happen again >>on update, you are going to get a couple of yes/no dialogs asking if you want to scan for and delete existing instances of these URLs. if you have a big client, it will take some time to do this scan. the yes/no dialogs will auto-yes after ten minutes, so if you are doing a headless update via docker or something, please be patient--it will go through Just to be clear: it will find files with 'A B C' and delete those, okay, but does it then add A, B, and C separately to the file's urls? I did a search with the regex "http\S+\s+http" from here >>14810 and I noticed that I have some files with the concatenated urls, but without the individual ones. As in just [ 'A B C' ], without [ A, B, C ].
>>14784 Hey, I looked at this just now and I'm sorry to say things all render good here. There are no hidden zalgo characters causing a coordinate shift on purpose, only the 〰, ⛓️, and 💙. These unicode emojis have had spotty (but increasing) support over the years, and in bad situations can make the 'fontMetrics' that calculate size and stuff sperg out, all depending on your OS/font/etc... Notice that when your system cannot find the characters (and gives you []), it calculates the size fine.

I don't know the correct solution here, really. Qt is giving me the wrong size prediction, probably because the character mapping for 〰 and ⛓️ on your system has some whack coordinate space (its bounding box is probably ( -20, -20 ) to ( +40, +40 ) or whatever, when it should be ( 0, 0 ) to ( 20, 20 ) ). I am not sure if I can even 'fix' the alignment, since I think it is already aligning in the top-left corner (it is just the 'top' of the text is too high up).

I also wondered if I could make a debug mode that replaces unicode characters with [] or something, although I can't obviously find a lookup table to determine emojis vs regular kanji or whatever. EDIT: Ok, ChatGPT to the rescue, check out this monster regex:

emoji_pattern = re.compile(
    "["
    u"\U0001F600-\U0001F64F"  # emoticons
    u"\U0001F300-\U0001F5FF"  # symbols & pictographs
    u"\U0001F680-\U0001F6FF"  # transport & map symbols
    u"\U0001F700-\U0001F77F"  # alchemical symbols
    u"\U0001F780-\U0001F7FF"  # Geometric Shapes Extended
    u"\U0001F800-\U0001F8FF"  # Supplemental Arrows-C
    u"\U0001F900-\U0001F9FF"  # Supplemental Symbols and Pictographs
    u"\U0001FA00-\U0001FA6F"  # Chess Symbols
    u"\U0001FA70-\U0001FAFF"  # Symbols and Pictographs Extended-A
    u"\U00002702-\U000027B0"  # Dingbats
    "]+",
    flags=re.UNICODE,
)

I'll add this as a DEBUG mode in 'tag presentation' or something. Please give it a go and let me know if it fixes the issue here, and also whether it adds ridiculous lag.

>>14814 It will not add the separated URLs. The problem I fixed always had redundant URLs, but it seems hydownloader and some other places are producing this problem without the redundant URLs, and now we are seeing them. If the number is small here, I think you should click 'no' and fix them yourself. If the number is big, I guess let me know. I don't really want to do some ass-backwards retroactive update-update, or hack out some weird tool to fire this once-off, but maybe I should just suck it up and figure something out.

>>14812 Sure, sorry for the trouble.
>>14822 Just a side/additional thing, this initial regex worked on the heart but not the chain, because the chain was actually U+26D3 (chain) + U+FE0F (make the preceding shit an emoji). Let's see if these render differently: ⛓ vs ⛓️. There's all kinds of bullshit 'variation selectors' in unicode these days, apparently. I guess this is how the skin tone stuff works. The eternal phoneposter strikes again.

Anyway, what do you know, in your tag banner there, the chain displayed, but not the heart. So I bet what is happening here is your local font engine can't do the unicode, but it thinks it can do the emoji version, and then it fails to figure out good coordinates for it. I've adjusted the regex to capture this, but I'll bet there's fifteen other 'slightly tan skin tone, with liver spots' 'selectors' to eventually capture, so let me know how you get on. Again, the probable ideal end state here is for your Linux/Qt/Font/whatever to update its font tables to handle symbols better, but I imagine it will be an endless goose chase until the whole wretched international standards edifice collapses under the weight of pregnant-male-doberman-fursuit-polycule-with-2043-pride-flag-coloured-fireworks-behind.
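The U+FE0F wrinkle means any detection regex also has to cover the variation selectors block (U+FE00 to U+FE0F), or the 'emoji version' of a plain base character slips through. A sketch of the stripping idea, with the ranges condensed from the post above (the real debug mode's ranges may well differ):

```python
import re

# The chain from the post is U+26D3 (Miscellaneous Symbols) + U+FE0F
# (variation selector-16, "render the previous character as emoji").
EMOJI_ISH = re.compile(
    '['
    '\U0001F300-\U0001FAFF'  # the big SMP pictograph blocks, condensed
    '\u2600-\u27BF'          # Misc Symbols + Dingbats (covers U+26D3)
    '\uFE00-\uFE0F'          # variation selectors
    ']+',
    flags=re.UNICODE,
)

def strip_emoji(text: str) -> str:
    """Replace emoji-ish runs with '[]', as the proposed debug mode would."""
    return EMOJI_ISH.sub('[]', text)
```

Regular kanji sit outside these ranges, so they survive, which was the concern about distinguishing emoji from ordinary CJK text.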
Is there a way to configure Hydrus separately to use a VPN or proxy service? I don't see any network options for this. The documentation implies I should use a VPN on my computer to route all internet traffic, but that would just compromise my privacy as I connect to personal email, bank, and work stuff. At the very least is there a recommended 3rd party downloader that does support proxies I can integrate with Hydrus? I'm not sure running "torsocks hydrus_client" is a good solution long-term.
>>14824 Aren't there options on some VPNs to route only the traffic of specific programs through specific nodes?
>>14824 network namespaces
>>14792
>I really don't get the Parser Formulae tutorial...
>Is there a more step-by-step one available?
I don't get it either. That said, while researching the answer I stumbled onto a humongous parsing clusterfuck called REGEX, and it doesn't look pretty at all. I'm afraid spoon-feeding won't work, as those formulae can hardly be explained, but at least they can be studied. F
https://en.wikipedia.org/wiki/Regular_expression
https://www.amazon.com/Introducing-Regular-Expressions-Step-Step/dp/1449392687/
(13.31 KB 456x396 lowercased.PNG)

Minor issue with the system search predicate parser: it lowercases your inputs. Case can actually matter, as with the regex in pic related.
>>14822 >If the number is big, I guess let me know. About 3000 files come up from the regex search "http\S+\s+http". >I don't really want to do some ass-backwards retroactive update-update, or hack out some weird tool to fire this once-off, but maybe I should just suck it up and figure something out. I realized I could export the urls as sidecars, edit the sidecars with a script, and re-import to add the urls. I also used the script to check how many were actually missing the separate urls. Out of the 3000, only around 50 were missing the separate urls, so I don't think it's a big deal.
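The sidecar fix-up described here mostly amounts to splitting the concatenated string on whitespace. A sketch of just that step (the surrounding sidecar export/import is hydrus UI work, not shown, and the urls here are made-up examples):

```python
import re

def split_concatenated_urls(bad_url: str) -> list[str]:
    """Split a mis-parsed 'urlA urlB urlC' string into its component urls.

    Uses the same shape as the 'http\\S+\\s+http' regex search mentioned
    above to decide whether a url string needs splitting at all.
    """
    if not re.search(r'http\S+\s+http', bad_url):
        return [bad_url]  # already a single url, leave it alone
    return bad_url.split()
```
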
Situation: Running in KDE Plasma (5/6, Wayland, would expect the same situation with X11) with a global menu widget. Normally Qt applications will put their menu bars there instead of in the window. With a global menu, Hydrus doesn't create its own menu bar inside the window (correct) but doesn't populate the global menu (not correct). As a result anything in those menus is inaccessible. The workaround is to remove the widget, launch Hydrus, then re-enable it. Can I have the best of both worlds or does this need to be fixed in Hydrus itself?
>>14831 Screenshots?
(52.50 KB 789x448 aaa.png)

>>14832 As picrel, but in the process of assembling the image I found Krita has the same issue on Plasma 6. Digging in a bit more it's a combination of Wayland issues, Plasma 6 issues, and the disjoint between Qt 5 and Plasma 6 (which uses Qt 6). Seems like it's not worth trying to address at this stage.
>>14835 >Plasma 6 and Qt6 Ah, the bleeding edge. I suppose you are using an unstable rolling distro, and therefore bugs are expected. >Seems like it's not worth trying to address at this stage. Yup. A 'stable' distro may suit you better. Adopting the new just because it is new is a sure recipe for trouble.
I had an ok week. I reworked some URL handling in ways that are mostly important behind the scenes and cleared some small jobs. It'll just be a simple release. The release should be as normal tomorrow.
(33.36 KB 205x153 20-18:32:37.png)

>>14811 That's what I did on my desktop in a sense, they aren't hidden from view, they're just what "my tags" uses as safe/questionable/explicit, although it doesn't apply everywhere because I use the PTR so you often get duplicates like pic related.
After I updated past version 565, my rule34.xxx downloads don't get the "rating" tag anymore. I think the fix for the concatenated urls broke that. Luckily it can be easily fixed: I just went to the "gelbooru 0.2.5 file page parser", selected the "rating tags" content parser, exported it to clipboard, then went to the "gelbooru 0.2.0 file page parser", and clicked import from clipboard.
>>14837 The v567 release is cancelled! We found a problem and I need to do more work on it. I will move it on to next Wednesday, sorry for the inconvenience!
I had a bunch of borked urls, I'm sure most were dupes but I'm super paranoid about losing any. I was initially going to just document what commands I used, but I decided to go the extra mile and make a shell script. I used bash, find, ripgrep, wc, and sed. I'm not very familiar with bash so it was fun to learn the basics. Please please please read the script before just running it; I take no responsibility for any issues that might occur. The script doesn't have much error handling but I tried to make it as informative as possible. It's probably full of antipatterns or whatever but it's a one-time thing so I figured it was ok, I've already spent way too long on this. It should mention any important information when you run it, but you should probably read the comments anyway. https://pastebin.com/MPAKgvGS
>>14840 Take your time anon. So far v566 is working fine. Thanks.
If I want Hydrus to still occasionally check "dead" subscriptions, is there any reason to not go into checker options and raise the dead sub limit from the default 180 days to something like 365 or 730 and set "never check slower than" to the same high number?
>>14720 is tbib still having this 302 redirect issue? i tried it for the link you gave and it still redirects, but i can't reproduce the issue for any of the other images i click on the site.
>>14838 Oh so it's the other way around. I don't use the PTR so I'm not sure how it works, but does it allow you to set personal parents/siblings for the PTR tag service, or do any changes get merged? If you can, you could just put your personal short ratings as siblings to the long ones, so that yours get displayed instead of the long ones, even though they still exist and are searchable. Not sure if that affects the thumbnails though.
>>14836 On the subject of Plasma I found a minor bug but haven't checked if it can be reproduced in a clean environment yet. If you change colourscheme back and forth between BreezeClassic and BreezeDark the menu text / background doesn't change properly: plasma-apply-colorscheme BreezeClassic plasma-apply-colorscheme BreezeDark Restarting Hydrus will fix the colours. Other Qt applications don't have this issue but I'm not sure how they react to colourscheme changes properly. It's a very minor bug all things considered. Just something I found by accident because I have some software that changes the colourscheme during sunset/sunrise.
Wanted to download a few threads off 4chan archives (archiveofsins.com and archived.moe) using my custom downloader I made a while ago to get catboxed AI images, but they seem to have started using some cloudflare shit now and I'm getting 403s even if I copy cookies. How can they still block me if I use the cookies, even though browser works fine? Is there anything else I can do?
>>14848 Cloudflare turnstile is the devil. They use JavaScript and CORS checks on every page load now and do so very aggressively. Just to read some text and images you now have to prove you're a human. I wonder if one day Hydrus will have to communicate with a headless Chrome or Firefox over web driver in order to even be able to fetch anything. These can be detected too but can also be bypassed. It's a constant game of cat and mouse.
>>14849 Damn. I guess I'll have to return to just downloading the threads one by one as html files, bulk regex filtering the links I need and putting them as images into a new html file which I then save with the files.
>>14850 If you use the Hydrus Companion extension I think it can send links directly to Hydrus (the direct link to the image, not the HTML page). The only downside is it won't have URL Associations that way unless you manually add them.
>>14848 >Is there anything else I can do? Are you setting your User-Agent header as well?
It seems to be impossible to switch between tag manager tabs using the keyboard.
>>14851 Well that's kind of useless for entire threads. >>14853 Only the default one. I don't know what it's supposed to look like. >>14854 Ctrl+Tab works for me.
>>14855 >>>14854 > Ctrl+Tab works for me. Thanks!
>>14824 You can try the stuff in options->connection, which is all SOCKS4/5 iirc, but I'm dependent on what the python 'requests' library supports, which is not excellent. If you need per-program filtering, I think(?) most normal consumer VPN software will let you filter by app (it usually works on the actual executable path, just like Windows Firewall), so that's the best angle of attack, I think. Just as a side thing, I moved all my IRL personal stuff to an old ~$250 NUC PC + old monitor a year or two ago and it has been great. My browsing/hydrus/fun machine is all anon, my dev machine is hydrus, and all my 'real' stuff is isolated and only logs on to do email and Amazon etc.. a few times a week. No worries about accidental leaks or my normal browsing getting cookie'd-up with my IRL life, and things like VPN filtering are simple. And you obviously only need a simple office-tier PC. It would be a decent chunk of money if you bought it new, but if you have an old PC or laptop, I'd recommend thinking about it. >>14828 Yeah, I am sorry to say that if you are new to things like regex--and I presume things like HTML and javascript--my parsing stuff may be tricky to pick up. I'm still happy to answer any specific questions you have. By the way, I've always been pretty terrible at regex myself--I just don't have the right brain for it--but ChatGPT eats it for breakfast. You can just paste a regex and ask what it does, or describe the sort of regex you want, and it'll hold your hand as you figure it out together. In the last year or so I've become much better at regex just because I have that crutch available. >>14829 Thanks, this will be fixed in v567! Same for url class names. >>14831 >>14832 >>14835 Sorry for the trouble. The guy who figured this out to begin with intends to improve support. The 'fix', if it works for you, is to disable 'Use Native MenuBar (if available)' under options->gui, although by the sounds of it, that should already be disabled for you?!? 
Let me know if this is true: if you get this behaviour even with this setting off, that means we are still breaking things even when the 'global' code isn't supposed to run at all. Also, I hope to improve the command palette (ctrl+p by default) to show menubar items even with advanced mode off very soon, so we have a better emergency backstop here. >>14839 Thank you, I will update the defaults! Please note that if you edit a default, and I roll out an update, my changes will overwrite yours! If you make an edit to a default, remember to rename your edit, and ensure it is linked up to your rename in manage url class links. >>14841 Looks great, well done! As I've been saying about regex and other stuff recently, although it sounds cliche, ChatGPT can be a great tool to learn this stuff, or to just ask what the hell you are doing wrong in some place. I used to hate writing batch scripts because I would forget the damn syntax and not want to go through old docs trying to find the right escape character or whatever, but now it is much easier, and I have a bunch of things that ask me several questions and then fire off an 'encode my vidya capture' job to ffmpeg or whatever. I can just say 'hey, please write me a template that iterates through all the files in a folder called blah and does this to them', and the trudge-work is done. It makes mistakes, but if you know nothing about how to do some simple coding thing, it'll know a lot more than you.
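On the options->connection point: the 'requests' library takes SOCKS proxies as a plain mapping. A sketch of building one (the socks5h scheme resolves DNS on the proxy side rather than locally; actually making requests through it needs the PySocks extra, i.e. pip install requests[socks]):

```python
def socks_proxy_config(host: str, port: int, remote_dns: bool = True) -> dict:
    """Build a proxies mapping suitable for the python 'requests' library.
    'socks5h' asks the proxy to do DNS resolution; 'socks5' resolves locally."""
    scheme = 'socks5h' if remote_dns else 'socks5'
    proxy = '{}://{}:{}'.format(scheme, host, port)
    return {'http': proxy, 'https': proxy}
```

Usage would be something like session.proxies.update(socks_proxy_config('127.0.0.1', 9050)) on a requests.Session.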
I'm having computer problems and my dev machine blue-screened right at the end of this, and, praise the devs, 8chan saved my post. >>14842 Thanks for your patience. Good news is things seem fine. I'm using the current 567 code on my IRL client now and there don't seem to be any worries for anyone with a normal client, but it looks like certain advanced users who have some crazy URL Classes set up will run into some errors. No worries, I'll have it fixed for next Wed. >>14843 Yep, that sounds great. I do it manually, but whatever works for you. >>14844 I know nothing about it, I'm afraid. 13954530 still redirects me, too. >>14845 I can't quite parse your exact problem here, but yeah I think you can do what you want here. Hit up tags->manage where tag siblings and parents apply and you'll see how it generally works (warning: it is complicated). But if you want to sibling 'safe' on one service to 'rating:safe' or whatever using siblings on a different service, you can do it. >>14846 Thanks, interesting. Although it is lame of me to wipe my hands clean, I generally have no control over this--I just say 'hey Qt, set this stylesheet please!'--so when there are weird little per-widget bugs like this, I have to blame it on the window manager or Qt or whatever. We are hoping to have improved light/dark mode support in future, and I was also reading that the newer 6.7 version of Qt is going to have stylesheets that define both light/dark mode colours, if I understood it correctly, and it'll change based on the OS mode changing. We'll see how it works out IRL. >>14848 >>14849 >>14850 >>14851 >>14853 >>14855 If you are copying cookies from hydrus companion, CF in its stricter modes needs the User-Agent to be the same as the one that got your cookies. Hydrus Companion supports this afaik, so poke around the options to see if you can send your User-Agent too. 
I know that has magically fixed some people in the past, but yeah unfortunately the internet is becoming less easy to access in many ways. The recent danbooru problems we had were overcome if you used HTTP 2.0 (we are on 1.1), so some of this will also be helped simply by updating our tech to newer libraries etc... (although in the danbooru example, that was likely an accident rather than by very purposeful design). Although, ha ha, I'm enough of a dinosaur that when we were talking about this, I was 'wow, do we really have to update? 2.0 is new!', and it turns out browsers are already using 3.0. The big corps are running away with the standards bodies, and everything is slowly pivoting towards OAuth-style tech. Oh well. The idea of perhaps one day using a headless browser as a web driver is not out of the question. We'll see how these things go. You'd think all the cheap hardware and bandwidth we have in the 2020s would allow things to be less restricted, but I guess that isn't how these guys think about things. That said, I do know that AI scraping has been hell for some sites (like danbooru) just recently. Speaking of, this general series of conversations has made me realise, I think, that we are about to get super-fucked by website filters in the coming years as AI agents start browsing at large scale. Remember that AI is going to make captcha obsolete, and when that happens, every tech that relies on it as an anti-bot filter is going to have to be replaced by an alternative, and all I can think of, to my own horror, is various sorts of Real ID. There will be efforts to launch nice open source or whatever versions of these things, but I fear the corporate bullshit versions will win, and then everything online will be government-tied iphones and fingerprints. 
Sounds like a good time for some software that'll keep local copies of useful things safe, but how we will navigate the internet and download new content in a world of a million mumbai-based video-gen trainers and silky-voiced auto-scammers all thirsting for fresh content, I do not know. Fingers crossed it will be fun at least. >>14854 Also try hitting arrow up/down on an empty tag input. It should change services. Page Up/Down in the media viewer's tag manager will change the underlying media.
>>14858 >If you are copying cookies from hydrus companion, CF in its stricter modes needs the User-Agent to be the same as the one that got your cookies. I don't use hydrus companion as it doesn't support my browser apparently, but I tried copying cookies either by hand from the developer menu storage tab, or by downloading cookies.txt using some addon. I also tried setting up an http header user-agent by pasting what I got from https://www.whatismybrowser.com/detect/what-is-my-user-agent and nothing worked. >I can't quite parse your exact problem here, but yeah I think you can do what you want here. I don't really have a problem, I was just trying to give an advice to the other guy. Though I didn't know you could apply parent/sibling rules to other services. That sounds handy.
>>14858 >Yep, that sounds great. I do it manually, but whatever works for you. I changed the checker setting for subscriptions, then went and manually rechecked a dead one, and the dead subscription message still said "fewer than 1 file(s) in 180 days". Is it just a default message that matches the default settings, or is there some error and subscriptions are still dying at 180 days?
>>14858 >>>14854 >Also try hitting arrow up/down on an empty tag input. It should change services. Thanks. > Page Up/Down in the media viewer's tag manager will change the underlying media. That's useful, but not when the tag manager is covering the distinct part of the image!
>>14762 >options->sort/collect now has four different default tag sort widgets! You can set the default tag sort for search pages, the media viewer, and the manage tags dialogs launched off them. Noticed that the OR dialog doesn't have a way to sort tags or an option for it in the settings for that matter. Would it be hard to add?
>>14863 Wait, that's for the autocomplete, my bad. In that case, is there a way to group the results by namespace?
Does anyone have a way to download NSFW images from https://purplesmart.ai ? You log in using your Discord account, and something adds auth data to the image URL.
>>14865 >You log in using your Discord account Making downloaders is beyond me, but wow. Fuck that. Discord has a lot of privacy issues on its own (i.e. no privacy at all), let alone Discord as a way to authenticate for porn. Maybe try https://aibooru.online/ as an alternative. Sadly purplesmart.ai looks to have a lot of good content, but that's way too high a price to pay for it.
Is there any reason the "pan file" shortcuts function like a flight sim? Left and right will scroll a file left and right, but up and down are reversed, and you have to manually change them for them to behave intuitively.
v566, linux, source

OSError [Errno 36] File name too long: '/mnt/zzzzzzzzzzzzz/yy/sfw-mlp-archive/ 06c824e5ffccd303c76988280462b104043b08753a270c20790bd576f34bae06 alicorn pony, alicorn trixie lulamoon, _mlp_, trixie lulamoon, thread:_chag_, trixicorn fridge, fridge, looking inside, machine learning generated, ai gen.jpg.my first tags that were pending.txt'
Traceback (most recent call last):
  File "/mnt/xxxxxxxxx/yy/hydrus-566/hydrus/client/exporting/ClientExportingFiles.py", line 806, in DoWork
    self._DoExport( job_status )
  File "/mnt/xxxxxxxxx/yy/hydrus-566/hydrus/client/exporting/ClientExportingFiles.py", line 643, in _DoExport
    metadata_router.Work( media_result, dest_path )
  File "/mnt/xxxxxxxxx/yy/hydrus-566/hydrus/client/metadata/ClientMetadataMigration.py", line 193, in Work
    self._exporter.Export( file_path, rows )
  File "/mnt/xxxxxxxxx/yy/hydrus-566/hydrus/client/metadata/ClientMetadataMigrationExporters.py", line 692, in Export
    with open( path, 'w', encoding = 'utf-8' ) as f:
OSError: [Errno 36] File name too long: '/mnt/zzzzzzzzzzzzz/yy/sfw-mlp-archive/ 06c824e5ffccd303c76988280462b104043b08753a270c20790bd576f34bae06 alicorn pony, alicorn trixie lulamoon, _mlp_, trixie lulamoon, thread:_chag_, trixicorn fridge, fridge, looking inside, machine learning generated, ai gen.jpg.my first tags that were pending.txt'
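As a stopgap while exporting, a sidecar/export script could clamp generated names to the usual 255-byte-per-filename limit of ext4 and friends. A hypothetical sketch (not hydrus code; the limit constant is an assumption about the target filesystem):

```python
import os

MAX_NAME_BYTES = 255  # typical per-component limit on ext4, NTFS, APFS

def truncate_filename(name: str, max_bytes: int = MAX_NAME_BYTES) -> str:
    """Trim the stem so stem+extension fits in max_bytes of UTF-8,
    keeping the extension intact."""
    stem, ext = os.path.splitext(name)
    budget = max_bytes - len(ext.encode('utf-8'))
    encoded = stem.encode('utf-8')[:budget]
    # errors='ignore' drops a multi-byte character the cut may have split.
    return encoded.decode('utf-8', errors='ignore') + ext
```

Note this truncates the ".jpg.my first tags..." style double extension at the final dot only, so very long sidecar suffixes would still need handling upstream.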
>>14866 The images at purplesmart.ai are generated using a bot on Discord.
>>14858 >The recent danbooru problems we had were overcome if you used HTTP 2.0 (we are on 1.1), so some of this will also be helped simply by updating our tech to newer libraries etc... Have you looked into using pyCURL? I don't know if it's a good fit for Hydrus, but I really like it because you automatically get access to everything libCURL supports. It's a bit of a low-level library, but you can do everything curl can. It's very fast too, because it's a native C module unlike requests. >Want to make a simple request with alt-svc cache automatically upgrading the connection to HTTP 2 or 3 in the future? Well you can, and you don't have to worry about the details. You only have to care about the pyCURL API to libcurl. I think replacing requests would be difficult though, but maybe pycurl-requests could work (I don't know how mature that is): https://pypi.org/project/pycurl-requests/
>>14848 >>14849 >>14851 >>14853 >>14858 >>14859 I made it work. Turns out I also needed to open the api link (which I redirect to) in my browser, which triggers its own cloudflare check, after which I copied cookies again along with adding user-agent header.
I have a borked file here, feh loads it fine. It displays in the file upload dialog on here as well. https://danbooru.donmai.us/posts/4306140 https://gelbooru.com/index.php?id=5816146&page=post&s=view
>>14873 Me again. Looking at Danbooru, it's even tagged as "corrupted image". Is Hydrus/PIL more of a stickler than other image loading libraries? Again, it works with feh, which uses imlib2 IIRC, and whatever Palemoon uses also works, so I'm not really sure what's going on here.
(58.06 KB 1280x720 girl - idea.jpg)

>>14857 >By the way, I've always been pretty terrible at regex myself--I just don't have the right brain for it--but ChatGPT eats it for breakfast. >that crutch available. That's a great idea. I can pester and abuse those canned algorithms non-stop. Let's see if I can learn something. Thanks anon.
I'm considering hosting Hydrus, I'd like to ask how easy it is to add text metadata to memes. I'd like to be able to search through memes by their text, is there a way to extend Hydrus with OCR? If not, can I at least add the text manually?
>>14876 Adding text metadata is easy, just use notes (default shortcut key is F2). As long as whatever OCR software you use lets you copy, you can always copy and paste the OCR plaintext results into the notes for the image. Unfortunately, there does not seem to be a way to search the actual contents of notes yet (or if there is it's not obvious to me). If notes were searchable they would be perfect for what you're looking for. Worst case scenario you can manually tag keywords from the dialog and/or the type of meme. Some memes are prevalent enough in the PTR to already have a specific tag ("series:pogchamp" for example). I often use notes for translating and being able to search notes content would be pretty useful for me. Is there some way to do that I'm overlooking? If not, please consider searchable notes as a feature request Hydev.
I had a great week. I tightened up the URL storage/handling improvements that I was not confident in last week, so I am very happy to put out the release tomorrow. There are also advanced new tools for downloader makers for handling 'ephemeral token' parameters along with new quality of life UI in the manage URL Classes dialog. For normal users, there are also several bug fixes, file handling improvements, and a couple little things in system predicates, emoji tag presentation, and reversed tags in the new incremental tagger. The release should be as normal tomorrow. >>14860 I'm sorry to say it works on my machine, pic related. I couldn't find a forced 'no more than 180 days' rule, although you having said it maybe sparked a memory that I once had something like that, so I did give it a look. One thing is some of this text is kind of remembered from the previous calculation, so you sometimes have to force an update by open/applying the 'checker options' button, just to trigger a refresh. There's some asynchronous loading stuff happening in the background here, which makes some of the calculation update annoying to do on my end (the other row, the 'will recalculate when next fully loaded' bit in my pic is actually part of that). Another thing I did notice is the yyy time in 'xxx files in yyy' here might be the minimum of the maximum DEAD time and the oldest file in the subscription (I noticed when I first did a test here that it said like '1 file in 26 seconds', since it had just checked a new file), so if the oldest file in your sub has a source time of 180 days ago, that might explain it?! Let me know how you get on!
Is there a way to prioritize old urls over new links those urls generate in the url downloader? This is somewhat related to the archive thread downloader and cloudflare I talked about earlier. The archive cookies don't last very long and each thread generates 50+ links to another website (catbox), so it would make more sense to go through all the threads first to grab urls and then download the images. I think this would also work for images hosted in the archive as I could download raw image links without refreshing the cookie. >>14789 Color outlines (and thumbnail backgrounds) for rating services (favorites) would be nice too. Maybe even have the ability to put the service icon in a thumbnail corner like with the tags if it's set. Just now I was looking for new favorites, and thought it would be handy to have existing favorites highlighted just to see if I accidentally forgot to favorite a whole set of related images etc.
>>14878 >One thing is some of this text is kind of remembered from the previous calculation, so you sometimes have to force an update by open/applying the 'checker options' button, just to trigger a refresh This did it. The subscription options under downloading options appear to only apply to newly made subscriptions. Anything made before I have to manually edit the checker options on the manage subscriptions window. Thankfully, mass edits are allowed, so I just set everything to 730 days at once.
https://www.youtube.com/watch?v=PgqmqeH0iDs

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v568/Hydrus.Network.568.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v568/Hydrus.Network.568.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v568/Hydrus.Network.568.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v568/Hydrus.Network.568.-.Linux.-.Executable.tar.zst

I had a good couple of weeks mostly figuring out better behind-the-scenes URL handling.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

urls

For normal users, tl;dr: URLs better now, you don't have to do anything.

I made a dumb mistake when I first created the downloader engine in how I handle URLs behind the scenes, and today it is fixed. You don't have to do anything, and you shouldn't see any big changes, but in general, URLs and query texts with unusual characters should work better now. You may also notice a URL in the file log or search log will appear one way, e.g. with japanese kanji, but when you copy it to clipboard, it is all encoded to %EF kind of stuff--your browser address bar probably works the same way. It should just mean things copy between your hydrus and other programs with fewer hiccups.

If you regularly use some very odd URLs or downloaders, there might be a hiccup or two. A file that relies on strange encoded characters in its known urls might redownload one time if a subscription hits it, or one extremely strange query text (it'd be a single tag query that has a % character somewhere in the middle) might need to be renamed to %25. If you have a crazy custom downloader that relies on %-encoding tech, please pause it before you update, and then do a test afterwards--it may be the hack you added for the old system is no longer needed. 
In any case, let me know how you get on, and if we run into a problem with a certain downloader, we'll fix it together. Overall, I'm hoping that working with encoded URLs exclusively will make more complicated downloaders easier to figure out in future, rather than having to fight with odd characters all the time.

I decided to cancel this release last week because I wasn't confident on how I was handling the advanced edge cases of encoding here. I am happy I did, because while all the 'just download from a booru' style downloaders (like the defaults) were fine, the most experimental downloader situations (that e.g. needed encoded parameter keys) needed more work.

If you are a downloader maker, then you will be in the guts of these changes, so please check out the changelog. There's new UI in 'manage url classes'--path components and parameters now have their own edit panels with linked text boxes that show the normal and encoded values of the stuff you are entering, and there's also new 'ephemeral token' tech that lets you decide in what ways undefined parameters should be clipped before the URL being sent to the server. The idea of the 'request URL' being different to the 'normalised URL' is broadly elevated and exposed across the program.

other highlights

I also added a 'tag in reverse' checkbox to the new 'manage tags' 'incremental tagger' thing. It adds tags like 5, 4, 3, 2, 1 instead of 1, 2, 3, 4, 5.

And all new system:url predicates have new labels. They are all a bit simpler, and they should copy/paste into the system predicate parsing system. All existing system:url predicates still have their old labels, so if this is a big deal, you'll want to recreate them and re-save your session/search.

Thanks to a user, the new docx, xlsx, and pptx file formats get some nicer thumbnails and a little metadata. It should all recalculate soon after update. 
The Client API is now more careful about which files you can undelete, and it also now lets you clear file deletion records.

next week

I want to put proper time into getting a 'future build' working. Last time I tried, I ran into some technical problems related to the newer libraries I wanted to bundle, so I'll see if I can fix it all and have a test release for people to try. Otherwise, I just want to clear out some small jobs that this URL work boshed.
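To illustrate the encoding behaviour described in the release notes (human-readable display form vs %-encoded wire form, and why a literal % in a query text must become %25), here is how Python's standard urllib.parse handles it. This is just an illustration of the concept, not necessarily the exact calls hydrus makes:

```python
from urllib.parse import quote, unquote

# A query URL containing kanji: displayed human-readably,
# but %-encoded when handed to the network layer.
pretty = 'https://example.com/posts?tags=東方'
wire = quote(pretty, safe=':/?&=')  # kanji become %E6%9D%B1%E6%96%B9

# A literal '%' inside a query text must itself be encoded as %25,
# which is why such a tag query needs renaming under the new system.
assert quote('%', safe='') == '%25'
```

Because encoding round-trips losslessly (unquote(wire) == pretty), the client can store one canonical form and show the other.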
>>14881 hey got an error updating from 566 to 568, instant crash on startup

v568, 2024-03-27 19:40:59: boot error
v568, 2024-03-27 19:40:59: A serious error occurred while trying to start the program. The error will be shown next in a window. More information may have been written to client.log.
v568, 2024-03-27 19:41:05: boot error
v568, 2024-03-27 19:41:05: Traceback (most recent call last):
  File "D:\hydrus\hydrus\client\ClientController.py", line 2219, in THREADBootEverything
    self.InitView()
  File "D:\hydrus\hydrus\client\ClientController.py", line 1320, in InitView
    self.CallBlockingToQt( self._splash, qt_code_style )
  File "D:\hydrus\hydrus\client\ClientController.py", line 474, in CallBlockingToQt
    raise e
  File "D:\hydrus\hydrus\client\ClientController.py", line 420, in qt_code
    result = func( *args, **kwargs )
  File "D:\hydrus\hydrus\client\ClientController.py", line 1285, in qt_code_style
    ClientGUIStyle.InitialiseDefaults()
  File "D:\hydrus\hydrus\client\gui\ClientGUIStyle.py", line 70, in InitialiseDefaults
    ORIGINAL_STYLE_NAME = QW.QApplication.instance().style().name()
AttributeError: 'PySide2.QtWidgets.QCommonStyle' object has no attribute 'name'
>>14882 Thank you for this report, and sorry for the trouble! You are running from source in the older Qt 5 (PySide2), right? This slipped through testing. Since you are running from source, I just committed a fix to master, so please git pull again and see how it goes. It works here when I run on PyQt5, but I don't have a PySide2 environment right now, and it seems I can't easily get it for newer python, so please let me know how it works out. I'll make sure I update my testing regime to catch this stuff better in future.
>>14883 Different anon reporting. I checked my system and I'm using Qt v6.5.2. I went to Github, got an earlier (not yet patched) copy of the source code, and updated Hydrus with it. It went fine with no incidents. So as you said, the trouble is related to Qt5.
>>14785 Thanks and rip, the fuckers rejiggled all their pages YET AGAIN. First page returns 4+10 file pages instead and 0 gallery pages, subsequent pages return nothing. At least file page parser still works for manual importing.
This might seem like a weird question, but can a deliberately malformed parser force a specific file (not post) url pattern to always return error on download attempt? I need this to work around a site that redirects to "error" image with http code 200 instead of proper error codes.
>>14889 Adding to that, veto system as described in documentation would not work because broken post pages have direct download links indistinguishable from valid ones, being redirected (code 302 "found") to error image only on download attempt.
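Pending a parser-side answer, a Client API script could implement this veto itself: fetch the file URL with redirects disabled and reject a 302 whose Location points at the known error image. A hypothetical sketch (the '/error' marker is made up; you'd substitute the site's actual error-image path):

```python
from typing import Optional

ERROR_IMAGE_MARKER = '/error'  # hypothetical path fragment of the site's error image

def should_veto(status_code: int, location: Optional[str]) -> bool:
    """Veto a download when the server 302-redirects to its error image
    instead of returning a proper error code."""
    if status_code != 302 or not location:
        return False
    return ERROR_IMAGE_MARKER in location
```

With the requests library, you would pass allow_redirects=False and feed in resp.status_code and resp.headers.get('Location').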
>>14888 Can confirm, having this behavior on my end as well. I guess I should've figured that 2 days where I could just download any query as temporary, shame.
I'm still hoping for some way to manage large amounts of subscriptions. I'm not asking for any effective combination of the features of separate subscriptions like making them all publish to the same label, just a way to arbitrarily and manually group them in the subscriptions management window, similar to making a page of pages, so things are easier to navigate.
>>14883 Ok that worked.
>>14772 sorry this took a bit of time, I had things I wanted to finish before I kept doing stuff on hydrus. I did an integrity check, but, whether from me screwing up or not giving it enough time, it stopped doing anything and just spat back no reports (task manager said it was doing nothing, not just the sqlite doing nothing). I redid it with a quick scan and it came back ok; if you want, I can do the full one again next time I plan to sleep for a while. I also trimmed down the session to nothing to try and use the tools in program, to no success (I have been having a problem for a while where going between tabs would crash the program, hoping trimming would fix it; at 105 pages and 12 million weight, it did not, so I saved the session off and reloaded it, hoping maybe something was wrong that kept reloading wrong and reloading from a saved session would fix it. no idea, will probably know in a few hours), but that resulted in the same issue.
>>14891 Fixing alphanumeric parsers was easy, only two class names changed. The same fix did not work for md5 regexp though, and I don't have time to look into it further for now.
>>14895 Got an update on the locking up: it still happens. I'm not sure which update introduced this, but it constantly happens on a larger session when moving between tabs, primarily when subs update and I open them to their own windows to see what's new.
>>14864 I would really like to add better 'grouping tech' to the tag list in general. I'd like to have a [+] expand button by namespaces, somehow, so you can collapse/expand guff like 'page:' tags. Unfortunately, it will take a bunch of work because this control is an entirely custom widget with its own rendering tech and stuff. Maybe one day it gets rewritten to a Qt list widget that'll have that tech implicit and easier to add, or maybe I just knuckle down and figure it out myself. That said, having it in an autocomplete results list (which nonetheless uses the same tech) is an idea I haven't heard before. It sorts only by descending count. Given how these lists are often spilling over with hundreds of long-tail low-count results at the bottom of the list, I am not sure if grouping by namespace would be super helpful (since you'd have to scroll through all those (1) results to get from 'character' to 'creator' or whatever), but you can filter the results--maybe that does what you want? Just literally add the namespace to your input text, like 'character:sam', and you'll get 'character:samus aran' and any other 'sam-' character, with no 'series:' tags or unnamespaced or anything. If you are feeling cheeky, you can sometimes do 'char*:sam' too, to save some time on longer namespaces. >>14865 I don't know anything about this site, but I know a guy is working on the latest version of a discord downloader using the new 'ephemeral token' tech I added last week. Might be once that is figured out, any sites that rely on some weird discord token in the URL become a bit easier, but I can't say confidently. Best answer though, would probably be to download those files with a different program or a custom script and then import to hydrus via the Client API. Some modern OAuth-style tech is just beyond my downloader engine unless you are willing to jump through five different hoops. >>14867 I think I might be misunderstanding.
On a fresh client, pic related are the defaults, and if I hit shift+up, the image moves up in the same way that when I press left, the image moves left. Do you get different, or does it just feel different? Also, as a side thing, when you play fps vidya, do you play with 'up is up' or airplane controls? I am always, without exception, airplane. Up is down and down is up in my head, so it is interesting you noted this, since here up is up, and that feels intuitive to me in this context. Do you feel like you are moving the image or moving the background?
>>14869 Damn, I have tried to fix this about three times now, and last time I gave it much more padding. I wonder if the hash is somehow screwing with my text length estimate here, and/or the longer sidecar suffix. I will give it another go, sorry for the trouble! >>14871 I haven't looked into pyCURL, but I've been recommended and currently plan to move to 'httpx', here: https://www.python-httpx.org/ This is supposed to be a drop-in for requests with one caveat: it doesn't do 3XX redirects automatically. So, I need to figure that out (which is tricky for some annoying hydrus-internal network engine tech) and then, fingers crossed, we can just roll it in with a bit of testing. I forget the exact post, but someone found a post from the python guys (who it seems worked on requests) saying 'requests is feature complete, we are not working on HTTP 2.0', so I guess a lot of python programs (and kids learning python) are going to run into this unless httpx or another library pulls a PIL/Pillow situation and just becomes the natural and better successor. >>14872 Thanks, interesting! I'll try and remember that, I guess the API is on a different domain, so it needed its own bullshit. >>14873 >>14874 Thank you! Broken images are always useful to test with. I get the same failed result. As you say PIL is a little less forgiving than other libraries, or at least I have it set up that way. I used to allow truncated-- EDIT: Yeah, if you turn on help->debug->data actions->enable truncated image loading, the file imports and displays. (It'd cause errors if you restart though, since the debug thing will reset) Yeah so I used to allow truncated image loading, but I think we ran into some issues where certain truncated images were crashing Linux clients when they hit OpenCV. 
I'm sure that bug is long-gone now, and we are moving to remove OpenCV entirely in the coming months, so I think I'm more willing to turn this mode on permanently, or at least upgrade it from a debug thing to a real option. I'll give this some thought. When you get a truncated image like this, you usually see a grey bar at the bottom, for the missing data, but this image seems fine. This is the first truncated png I specifically remember, so maybe this is just some weird png metadata frame that's fucked up, and not the image data.
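For what it's worth, the httpx redirect caveat mentioned above amounts to re-implementing a loop like this by hand. A minimal sketch, not the actual hydrus network engine: `fetch` is a hypothetical callable standing in for a single HTTP request, and real code would also have to resolve relative Location headers and respect per-domain rules.

```python
# Minimal sketch of manual 3XX handling, the piece requests did for free
# and httpx leaves to the caller by default.  'fetch' is a hypothetical
# stand-in for one HTTP request, returning (status, headers, body).
def resolve_redirects(fetch, url, max_hops=5):
    """Follow Location headers up to max_hops; return (final_url, status, body)."""
    for _ in range(max_hops):
        status, headers, body = fetch(url)
        if status in (301, 302, 303, 307, 308) and "location" in headers:
            url = headers["location"]  # real code: resolve relative URLs here
            continue
        return url, status, body
    raise RuntimeError("too many redirects")
```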
>>14879 >Is there a way to prioritize old urls over new links those urls generate in the url downloader? Sorry, there isn't good support for sorting URLs yet. Nor is there 'hit this within 45 mins because it has a token that'll run out', but the recent ephemeral token tech is a step in that direction. Best solution for these situations is to tell the human to babysit the download, riding the search play/pause and setting generous bandwidth limits or whatever so it doesn't overwhelm the file log. >>14889 >>14890 Unfortunately there isn't a good solution here. I know exactly the sort of problem you are talking about. I've talked with others about having a 'bad URL store' that downloads can consult to see if they are running into one of these, and that sounds like a decent idea, but we'd also have to figure out what to do afterwards. Do we retry the download? Tell the user to login? It'll probably not be a nice simple system with a nice clean answer. (Also, as I say just above in regards to httpx, I'm going to have to implement my own redirect tech soon, so maybe we'll get more tools around redirect URLs, which are currently, generally invisible to the client). >>14892 Having groups/folders is a nice idea. Best answer I can offer right now is to try and merge your subs along site lines (e.g. subs called 'gelbooru artists', 'gelbooru misc tags', 'danbooru artists'), going for site-based subscriptions, and not trying for artist-based subs. >>14895 >>14897 I am sorry that I forget some of your issue here, but you are getting crashes when moving between tabs, that's no good. That suggests an unhappy environment (i.e. the dlls and executable and stuff running the program). If you are not already running from source, that is often a good way to improve stability for a client that is crashing due to UI weirdness. 
You are obviously technically competent, and it isn't too difficult to figure out these days, right here: https://hydrusnetwork.github.io/hydrus/running_from_source.html If you are already running from source, that's a shame. If the crashes/lockups happen more on heavier sessions, and when moving between tabs, and when subs update, that suggests that there are big delays in calculating stuff like the taglist. This happens when you switch to a page you haven't looked at in a while, and it happens when a subscription publishes a new file to a page or adds tags to a file that is currently in view. You've said trimming the client down to nothing doesn't fix your issues, so there is obviously more, unfortunately, going on here, but keeping things slender should help at least. I don't suppose you have a slightly older computer, do you? Or anything odd with your hard drive? I've known some OSes completely sperg out when the system drive has had problems and it sets like a 'dirty bit' on the drive, and then randomly data will get 6000ms latency when reading/writing to it. This sort of 'oh I am fucked' OS situation could cause lag in a hydrus client that wanted to hit the hard drive pretty hard doing subscription and file metadata work, but you know better than me.
>>14898 >and if I hit shift+up, the image moves up in the same way that when I press left It moved the opposite for me, and I had to reverse it, shortly followed by also changing the controls to the scroll wheel for ease of use. Pic related are my current settings.
>>14898 >>>14865 I believe they fixed it soon after I posted that. This downloader now downloads the actual version (without descriptions). [58, "purplesmart.ai", 2, ["purplesmart.ai", "fc29b6c4cc9af883d65fc8567fd789c3afb3d568ee181b61e6b4c6caa7b59ecd", [55, 1, [[], "example string"]], [], [26, 3, [[2, [30, 7, ["og:image", 7, [27, 7, [[26, 3, [[2, [62, 3, [0, "meta", {"property": "og:image"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "content", [84, 1, [26, 3, []]]]], [7, 50]]]]]], ["https://purplesmart.ai/item/740ab221-a882-4ad9-bf46-fd3d60c503b9"], {"url": "https://purplesmart.ai/item/740ab221-a882-4ad9-bf46-fd3d60c503b9", "post_index": "1"}]]
>>14898 >Given how these lists are often spilling over with hundreds of long-tail low-count results at the bottom of the list, I am not sure if grouping by namespace would be super helpful Yeah, it's a bit of a situational case and I realized that it wouldn't be very beneficial in the majority of cases after I posted. The situation was that I wanted to check what tags containing a searched word exist so I could do a precise OR search among a ton of ai generated images, since doing a wildcard search would turn up stuff I didn't want to filter thanks to the variable nature of prompts and since this searched word would also appear in various namespaces (like model names and such), grouping the results by namespace would make going through the list a bit easier. Basically it's about finding out what tags exist instead of straight up filtering images. >>14899 >Thanks, interesting! I'll try and remember that, I guess the API is on a different domain, so it needed its own bullshit. Nope, same domain, but a separate cookie for the api I guess. Here's an example of a normal thread link and the api I redirect to. >https://archiveofsins.com/h/thread/7750375 >https://archiveofsins.com/_/api/chan/thread/?board=h&num=7750375 >>14900 >Best solution for these situations is to tell the human to babysit the download, riding the search play/pause and setting generous bandwidth limits or whatever so it doesn't overwhelm the file log. That's what I did in the end, letting it play for one url and then pausing and then setting the new urls as skipped, then after I went through every thread url, I unskipped the pics. Took a while to get through, but better than everything just failing after a while.
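(For anyone curious, that thread-to-API hop is just a string transform. A hypothetical sketch, assuming the archive keeps this FoolFuuka-style URL scheme; in hydrus itself this is the job of an 'API/redirect URL converter' on the URL class:)

```python
# Sketch: rewrite an archiveofsins thread URL into its JSON API equivalent,
# as shown in the post above.  Assumes the FoolFuuka-style scheme holds.
import re

def thread_to_api(url):
    m = re.match(r"https://archiveofsins\.com/(\w+)/thread/(\d+)", url)
    if not m:
        raise ValueError("not a recognised thread url")
    board, num = m.groups()
    return f"https://archiveofsins.com/_/api/chan/thread/?board={board}&num={num}"
```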
"share" contains "export" and "open" contains "similar files" in the Media Page's menu, but not in the media viewer's menu. I just guessed that you can copy a URL from the "manage urls" list by clicking it and pressing Ctrl+c. I would much prefer "add to" and "move" to be directly in the context menu, not behind the "local services" menu. If a menu contains only one item, is it possible to replace the menu with that item?
Welp, it seems Visuabusters has overhauled their booru and it runs entirely off Javascript now, F. Looks like it runs on Szurubooru, which I don't think has a downloader.
Can we get a shortcut option for "set this file as better than the other one selected"? It's under a lot of right click submenus and I'm using it over and over, mostly for updating to superior versions of video files, which the duplicate filter cannot handle. I know the command goes off of what you right clicked in a selection, so the shortcut could just go off of the last file you selected.
>>14909 file > shortcuts > media actions, either thumbnails or the viewer, and set a shortcut for "file relationships: the focused file is a better duplicate". (the focused file is your last selected file, aka whatever is showing in the preview window) >I'm using it over and over for mostly updating to superior versions of video files, which the duplicate filter cannot handle. set a shortcut for "file relationships: set files as potential duplicates". then you can send videos to the duplicate filter. it's still not really designed for videos, because the comparison at the side doesn't consider stuff like framerate etc., but it works just fine otherwise.
>>14910 >file relationships: the focused file is a better duplicate Ah, I remember now. I thought I couldn't add new shortcuts not already in the list, just rebind them, because the shortcut management window's height requires scrolling down to see those options at the bottom, no matter how small the list of shortcuts is, with a ton of empty space in the middle, and I didn't notice the scroll bar.
>>14877 I see, thanks for your answer, searchable notes do seem like the thing I'm looking for. Of course often tagging the memes by the category of what they are would be enough, but I often remember the exact phrasing of the meme, so it would be really helpful. I'll add it as a feature request or maybe contribute myself!
I had a good week. I fixed a bunch of small issues, and I figured out the problems with the 'future' build, so I'll also have a special version for more advanced users to test out. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=rAbVhoeWcAs windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v569/Hydrus.Network.569.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v569/Hydrus.Network.569.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v569/Hydrus.Network.569.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v569/Hydrus.Network.569.-.Linux.-.Executable.tar.zst I had a good week mostly fixing small things. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights I fixed some issues with last week's URL improvements. It is mostly advanced stuff, so if you are a downloader maker, please check the changelog. For regular users who download from certain sites that can produce multiple files from one post (I am told that Pixiv was not affected, but Inkbunny was), I am afraid I made a stupid mistake that meant only the first file of those posts was being downloaded. This is now fixed, so if this affected you and you have a bunch of subscriptions, let's fix it: once you are in v569, load your subscriptions up and sort their queries by 'last check time'. Select what has run in the last week, copy the query texts, and then paste them into a new gallery search page with the file limit set to something like 50. Your client will go over what was missed and fill in any gaps. I harmonised some search logic related to 'system:duration = 0', '<1', '!=0', and other variants around the zero edge case, for duration and the other simple predicates I updated a while ago. When I first wrote this, I tried to thread a needle of logic where in some cases it would return files with no duration and sometimes it would not, but it all ended up a mess, so, from now on, searching for any simple quantity that would include '0' will now include any file that has no value for that property. (e.g. 
an mp3 has no width, so now system:width < 200 will include mp3s). If you are a Linux user who uses a hydrus.desktop file, check the changelog! You'll probably want to rename it, and it might even fix your program icon. I fixed the 'remember last used default tag service in manage tag dialogs' checkbox and dropdown in options->search. It wasn't saving right, sorry for the trouble! future build I have a new 'Future Build' ready. This is a build the same as the one above, but it uses newer python and libraries. We do not know if those libraries cause any dll conflicts on any OSes, so if you are an advanced user and would like to help me test it, please check out this link: https://github.com/hydrusnetwork/hydrus/releases/tag/v569-future-build-1 next week More small jobs. If I can, I'd like to focus on some UI quality of life.
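The harmonised zero/no-value search logic above can be modelled in a couple of lines. This is a toy illustration, not the actual hydrus predicate code: any comparison that would admit 0 also admits a file with no value at all for that property.

```python
import operator

# Toy model of the harmonised search logic: a file with no value for a
# property (e.g. an mp3's width) is treated as 0, so any predicate that
# matches 0 also matches it.  Not the actual hydrus implementation.
OPS = {"<": operator.lt, "=": operator.eq, "!=": operator.ne, ">": operator.gt}

def matches(value, op, threshold):
    """value is None when the file has no such property."""
    if value is None:
        value = 0
    return OPS[op](value, threshold)

print(matches(None, "<", 200))  # mp3 width < 200 -> True
```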
(26.60 KB 475x189 01.jpg)

(98.76 KB 972x964 02.jpg)

(315.18 KB 588x618 03.png)

>>14916 >This is a build the same as the one above, but it uses newer python and libraries Watch out anon, you might be out of the loop and not be aware that the xz v5.6.0 and v5.6.1 compression library was backdoored by a glowie. https://www.darkreading.com/vulnerabilities-threats/are-you-affected-by-the-backdoor-in-xz-utils
setup_venv installs hydrus-569/venv/lib/python3.11/site-packages/pillow.libs/liblzma-13fa198c.so.5.4 (a version predating the known backdoor) for me, just like the 3 previous versions.
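If you want to repeat that check on your own venv, a small sketch like this walks the site-packages tree and flags the known-bad xz releases. The hashed part of the bundled filename varies per wheel build; the version-suffix parsing is an assumption based on the naming shown above.

```python
# Sketch: find every liblzma that a wheel (e.g. Pillow) bundled into a
# venv and flag the known-backdoored xz releases (5.6.0 / 5.6.1).
from pathlib import Path

BAD_VERSIONS = {"5.6.0", "5.6.1"}

def liblzma_verdicts(venv_dir):
    """Yield (filename, verdict) for every bundled liblzma under venv_dir."""
    for so in Path(venv_dir).rglob("liblzma*.so.*"):
        version = so.name.split(".so.")[-1]
        yield so.name, "BACKDOORED" if version in BAD_VERSIONS else "ok"
```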
>>14919 V5.4 is safe.
>>14919 Looks like Hydev isn't getting his bunghole bedazzled today.
>>14920 >>14919 >>14917 You should note the rightmost pic says they're suspicious of older versions too, given the glowie worked with them on the project for 2 years prior.
(525.51 KB 1366x768 original.png)


(1.20 MB 1366x768 Openbsd_5.1_with_compiz.png)

>>14922 I was reading and listening to many devs who looked into this shit. They all agree that this is not the work of a hobbyist and that the skills needed to design the malware are top notch, even branding it 'brilliant'. Even today, almost a week after the discovery, they are not 100% sure what additional capabilities this backdoor has. So, paranoia is running wild. Given, for example, that many corporations have been involved in the Linux kernel for many years, some are beginning to give a second look at serious alternatives like the rock-solid OpenBSD operating system, including me. I think Hydrus can be installed from source on OpenBSD without a fuss.
I just realized edit times has a cascading step function. This is amazing.
>>14924 >I think Hydrus can be installed from source on OpenBSD without a fuss. >source code Nope, OpenBSD is a pain; FreeBSD is the right and easy one.
Is there a way to make it so inputting a tag that's already in the current search doesn't remove it? You can do it for the manage tags dialog, but I don't see a way to do it for the search bar.
>>14906 For looking up the tags, something hydrus has never had, and I've wanted for a long time but really haven't moved forward is a proper tag wiki, a la https://danbooru.donmai.us/tags . We just don't have a good 'let me look at and manage my tags' UI. It will be a lot of work, but there would be many benefits, and I could show siblings and parents and stuff in a unified workflow. Thanks. Now I look, I remember that CloudFlare has pretty clever rules for site owners to define which parts of their site should be protected by which system. I guess they have their API stuff under a different ruleset and/or there is some behind the scenes gateway redirect stuff that causes CF to see it as two sites somehow. I'll try and remember this in future. >>14907 Thanks. A lot of my 'media' thumbnail code is still hardcoded bullshit from years ago, but I am slowly moving to unified menu generation and dynamic layouts, like you are talking about, where I can show just the single result rather than burying it under yet another submenu. I'll give it a look and see what I can do. >>14908 If you are feeling clever, open up your browser's developer mode and see what the js is doing as you browse through a tag. In some cases, the js is actually hitting up a really nice open API like '/results.json?query=blonde_hair&num=50&offset=150'. In these cases, it actually becomes easier to parse the site, since everything is now in nice JSON. Unfortunately, many sites are moving to more obscure engines that wrap the js requests into dynamically named bullshit, or it is all wrapped in difficult access tokens so an external downloader can't talk to it without jumping through OAuth hoops or something, which hydrus can't do. 
EDIT: Yep, looks good: https://www.visuabusters.com/booru/api/posts/?offset=42&limit=42&fields=id%2CthumbnailUrl%2CcustomThumbnailUrl%2Ctype%2Csafety%2Cscore%2CfavoriteCount%2CcommentCount%2Ctags%2Cversion It'll just take a bit of work from a downloader maker to figure out a fix here, and if every Szurubooru works with the same API, this would hopefully be applicable elsewhere. >>14912 I keep meaning to add searchable note tech. IIRC, the text is all in the database in a fast-search format, I just have to write the pipeline down to it. Can't promise when it will happen, but I do want it.
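As a sketch of what a downloader maker would do with that API: page through the JSON and pull out post IDs and file URLs. The field names here ('results', 'id', 'contentUrl') are assumptions based on szurubooru's published API and may differ per deployment; the pagination parameters match the URL above.

```python
# Hypothetical sketch of walking a szurubooru-style paged JSON API.
# Field names are assumptions from szurubooru's API docs, not verified
# against this particular site.
import json
from urllib.parse import urlencode

def page_urls(base, limit=42):
    """Yield successive API page URLs; the caller stops when a page is empty."""
    offset = 0
    while True:
        yield f"{base}/api/posts/?{urlencode({'offset': offset, 'limit': limit})}"
        offset += limit

def extract_posts(page_json):
    """Pull (id, contentUrl) pairs out of one page of results."""
    data = json.loads(page_json)
    return [(p["id"], p.get("contentUrl")) for p in data.get("results", [])]
```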
>>14917 >>14922 >>14924 >>14929 >>14919 >>14920 >>14921 Yeah I was reading up on that story last week, pretty amazing stuff. Although it sucks, I have to agree with the various commentators who said this was probably not the first big time this has happened, just the first one that got caught, especially given the nanosecond-profiling serendipity of the guy who actually caught it. This might be the tip of the iceberg of infiltration throughout the open source space, and I fear what any of the (particularly the modern, overly bureaucratic) open source committees are going to propose as 'solutions'. The best defence for the situation overall, I think, is that the corpo alternatives are run through with military intelligence infiltration themselves, and trojans are sometimes an explicit part of the business plan, ha ha ha, so it is really a question of lesser evils/risks. Also, mistakes tend to vastly outweigh malice, so it is probably best just to work on best practices, and the attacks will be caught in the same net. Everything is breaking all the time. Then again, there's a part of 'there but for the grace of God go I' in the xz story, since I'm another broadly-solo-dev who continually overloads himself. I hope I'm too spergmode to ever work closely with a glowie, if one wanted to volunteer/attack. If I ever couldn't put attentive time into this any more, I plan to just quit this hydev identity entirely and encourage forks, not cede soft control (I hope!). The good news--although at my high level stuff like lzma/xz is usually outside of my control--is that I shy away from going to the latest version of things, mostly in my case because jumping to the newest stuff tends to cause all sorts of conflicts either from bitrot in my shitty code or simple OS-obsolescence from a stubborn anon sticking to an old version of something.
I get poked by this, for instance, when a source user who runs on the Arch package has a system package update and suddenly hydrus things don't work because the latest version of x doesn't want y flag called. Another example is when some enthusiastic user tries to run from source on Python 3.13 the week that is released and reports some super dense library like numpy or OpenCV doesn't build--you don't say!! So for me, updating shit is always an annoyance that I resent, rather than something I am enthused about. In general, ha ha, the complaints are that I am too slow. I still think of Python 3.11 as 'new', but I blinked and somehow the whole world moved on. In this future build I am testing, we are going up from Python 3.10 to 3.11, OpenCV 4.7.0.72 to 4.8.0.76, and Qt 6.5.2 to 6.6.0. These are all somewhat conservative, and we've been testing them for some months now on the source side, so I feel good about them. Once these updates are folded into the main branch--which I expect in a few weeks assuming no reports of big problems--I'll shuffle the 'test' source numbers up a little, for Qt to either 6.6.3 or 6.7, which I understand is launching this week, and we'll restart the cycle. I am always cautious about this stuff, and believe strongly that slow and steady wins the race. >>14927 I just added incremental tagging a couple weeks ago, too, to 'manage tags'. I'm really happy with how both of these things worked out, so let me know how you get on with it all! >>14930 Great idea. It is about time that tag input got a 'cog' icon to govern some of its logic, and this would be a good place to start.
Anyone else running into trouble with the rule34.us subscriptions?
(1007.72 KB 500x281 Thank You.gif)

>>14934 >preventing code rot What a pain. It also highlights the fact that software development is fundamentally broken. If the libraries are rewritten all the time, devs waste most of their time chasing a moving target with no hope of ever catching up. All of which comes back to the idea of developing absolutely everything in C with the ancient and proven C89 standard and trimming dependencies to the bare minimum, in order to ensure the code will compile everywhere and every time in the future. Thank you so much for the effort devanon. It is really appreciated.
(71.38 KB 1240x597 hydrus e621 style.png)

I made an e621 themed stylesheet for the Hydrus Client. Is there any interest/chance to get this officially added? It consists of one QSS file and 23 SVG files. Pic related.
>>14947 Not bad, not bad.
I'm getting 403s when trying to download from exh/e-h after updating to 569.
I had a good week. I worked on simple bug fixes, code cleanup, and quality of life. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=Poii8JAbtng windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v570/Hydrus.Network.570.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v570/Hydrus.Network.570.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v570/Hydrus.Network.570.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v570/Hydrus.Network.570.-.Linux.-.Executable.tar.zst I had a good week clearing simple jobs. There's nothing too clever, just a bunch of little fixes and improvements all over. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights All the tooltips in the program now wrap into a neater multi-line block. No more super long single-line tooltips! The 'open' submenu when you right-click on the thumbnails or the media viewer has been cleaned up and plugged fully into the shortcuts system. It now looks and works exactly the same in either the thumbnail or media viewer view, and 'open file in web browser', 'open files in a new duplicates filter page', are added to the 'media' shortcut set. What was previously four 'show similar files: 0 (exact)' shortcut commands is now merged (and will be automatically updated in v570) into a single 'show similar files' command that lets you set any distance. I fixed a stupid thing in manage tag siblings/parents that was too-frequently making it so when you spammed a bunch of pairs at it in one go, it wasn't allowing deletes (it was activating the 'only add' functionality). Should be fixed now, and it'll ask to petition pairs amidst adding others, but let me know if this changes your regular workflow annoyingly. These dialogs are way overdue for a complete overhaul. I fixed some more URL encoding issues with some help from users. If you have a clever downloader that broke in the past couple of versions, sorry! Try it again, and let me know how it goes. 
next week More of this. I'll clean up the 'share' submenu like I just did the 'open' one, too. Last week's test 'future build' seems to have gone well, no significant issues reported, so I will integrate that into the main build. Windows and Linux users who use the extract builds will have special 'clean install' instructions for v571, but I'll explain it all super easy when we get there. >>14949 I am not familiar with that downloader, so I don't know for sure, but it might be my recent URL stuff screwed it up since it uses some clever characters in its requests. I've fixed some mistakes I made last week, but there may be more work to do on the downloader side (it might be the downloader was a little crazy beforehand to deal with the crazy way I was doing things back then). Let me know how you get on!
>>14970 >Should be fixed now, and it'll ask to petition pairs amidst adding others, but let me know if this changes your regular workflow annoyingly. These dialogs are way overdue for a complete overhaul. A couple of builds ago, I pasted something somewhere like that and had to dismiss a separate dialog for each item.
Thanks hydev, you are the best <3
It's failing to launch on Arch after the python/qt6 packages updated to 6.7. Both 569 and 570 fail. Reverting back to pyside6-6.6.2-2 failed with other qt6-related errors further up the chain. It smells like this dependency hell will go all the way up to qt6-base 6.7.

v570, 2024-04-13 14:42:31: hydrus client started
v570, 2024-04-13 14:42:31: booting controller…
v570, 2024-04-13 14:42:31: There was an error trying to start the splash screen!
v570, 2024-04-13 14:42:31: Traceback (most recent call last):
  File "/opt/hydrus/hydrus/client/ClientController.py", line 656, in CreateSplash
    self._splash = ClientGUISplash.FrameSplash( self, title, self.frame_splash_status )
  File "/opt/hydrus/hydrus/client/gui/ClientGUISplash.py", line 249, in __init__
    self._my_panel = FrameSplashPanel( self, self._controller, frame_splash_status )
  File "/opt/hydrus/hydrus/client/gui/ClientGUISplash.py", line 67, in __init__
    self.setLayout( vbox )
TypeError: 'PySide6.QtWidgets.QWidget.setLayout' called with wrong argument types:
  PySide6.QtWidgets.QWidget.setLayout(VBoxLayout)
Supported signatures:
  PySide6.QtWidgets.QWidget.setLayout(PySide6.QtWidgets.QLayout)
v570, 2024-04-13 14:42:31: hydrus client failed
v570, 2024-04-13 14:42:31: Traceback (most recent call last):
  File "/opt/hydrus/hydrus/hydrus_client_boot.py", line 215, in boot
    controller.Run()
  File "/opt/hydrus/hydrus/client/ClientController.py", line 1746, in Run
    self.CreateSplash( 'hydrus client booting' )
  File "/opt/hydrus/hydrus/client/ClientController.py", line 656, in CreateSplash
    self._splash = ClientGUISplash.FrameSplash( self, title, self.frame_splash_status )
  File "/opt/hydrus/hydrus/client/gui/ClientGUISplash.py", line 249, in __init__
    self._my_panel = FrameSplashPanel( self, self._controller, frame_splash_status )
  File "/opt/hydrus/hydrus/client/gui/ClientGUISplash.py", line 67, in __init__
    self.setLayout( vbox )
TypeError: 'PySide6.QtWidgets.QWidget.setLayout' called with wrong argument types:
  PySide6.QtWidgets.QWidget.setLayout(VBoxLayout)
Supported signatures:
  PySide6.QtWidgets.QWidget.setLayout(PySide6.QtWidgets.QLayout)
v570, 2024-04-13 14:42:31: hydrus client shut down
>>14947 neat
>>14977 >>14970 (forgot to (you)) Update: Reverting everything from 6.7.0 to 6.6.3 gets it working again.
When I'm going through the duplicate processor and replacing a bunch of old files from imagesets with higher quality versions, shouldn't a replaced file automatically get all the same alternate file relationships as the original had? I've noticed the duplicate processor is making me compare a lot of sets of files where both were already set as alternates in the original low quality set, and both have been replaced by higher quality versions. Shouldn't these comparisons be unnecessary?
>>14980 Oh, and the same goes for previously existing "not related" relationships. I'm betting I'm going through the same comparisons with the high quality versions of these images and random unrelated images as I went through when I first got the low quality versions, and I really ought not have to.
>>14947 This looks great! Can you upload/send me the files, and tell me how to do the SVGs? I presume the QSS gives relative paths or something, but I've never done a multi-file style before, so just tell me if I need to do anything clever. if they just go under a subdirectory called 'e621' or something, that's great. >>14972 Thanks, I'll check that out. >>14977 >>14979 I was talking with some guys about this earlier today and it seems to be either a fuck-up on the qtpy or PySide6 side: https://github.com/spyder-ide/qtpy/issues/477 https://bugreports.qt.io/browse/PYSIDE-2674 I have no fucking idea, but it seems some type definition is not being propagated correctly, so basically nothing will work. I do not think it is super ideal that Arch rolled out to the latest version of Qt within days of it coming out, but that seems to be the way it works. Unfortunately, the way AUR does things, as I understand, is that it uses your system python, so they are locked to this schedule. The hydrus anons who maintain the AUR package may have solved this already by switching to PyQt6, which does not seem to have the problem, even in their (currently dev branch) of 6.7.0. TBH, I think I am going to recommend people run from source as in my help here https://hydrusnetwork.github.io/hydrus/running_from_source.html rather than the AUR package (or at least caveat it in my help), since I simply do not have the time/resources to keep up with when something is going to update on the Arch side and do pre-tests--and in this case, it isn't even my bug to solve and there's nothing to do but manually tell your Arch to go back to 6.6.3. My own 'running from source' thing uses a venv and a requirements.txt to fix the version so you know it is all tested and stuff. Anyway, as I've said elsewhere in this thread I think, I'll be bumping my test versions up to 6.6.3 and trying out 6.7.x, which I imagine will be 6.7.1 or a new version of qtpy once this clusterfuck is figured out. Watch this space! 
>>14980 >>14981 iirc, some of it can be merged, some cannot. I believe you are correct in that when a file becomes a dupe of another, it inherits all of its alternates, but within an alternate group, especially a large one, there needs to be some 'redundant' checking of certain internal pairs because some info cannot be inferred, so that may be what you are seeing. I'm trying to think of a clear example. Ok, so let's say we have three files, two of which are already alternates: A1-A2, and B1. We see A1 and B1 in the filter and decide that B1 is an alternate of A1 (and let's say rename it to A3). We cannot assume that A3 is an alternate of A2, so the filter will remember all this info and serve us up A2-A3 a bit later on. Larger alternate groups can get more incestuous. It sounds like bullshit, but I wonder if what you are seeing is actually this? That the file you duped to would have been seeing these alternate 'confirmations' anyway, but since you added the dupe, it seems like you are seeing the same stuff roll by again? One of my regrets with the duplicate system is there is no good visualisation of current relationships, so debugging these more complicated cases is very difficult. And setting up a test group to reproduce a particular problem is a pain in the neck. I want a future overhaul of the system to have lines and arrows and stuff that update as you make changes. BTW, there are some (old) diagrams at the bottom of this page, if you haven't seen them before: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced They aren't super helpful, but since you are mapping this mentally, they might be interesting!?
I had to do a clean install of hydrus. I deleted everything except the db folder and dropped in the new program files as instructed. The database updated when I started the client, but nothing appears. Everything is gone. How fucked am I?
>>14982 >We cannot assume that A3 is an alternate of A2 That's because A3 still has the potential to be a duplicate of A2 if you designate A3 as an alternate of A1. But my scenario is different. In my case, A3 is a higher quality duplicate of A1, not an alternate, so I select that option, deleting A1 and effectively replacing it with A3. In this case, A3 should inherit all relationships of A1, but it's not, and I'm having to compare A3 to A2 and mark them as alternates after having already replaced A1 with A3. This is compounded for larger alternate groups, and I can conceive of no possible scenario where this behavior is desirable, let alone necessary. >https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced Reading the whole page, I can see nothing on whether or not a new duplicate group "king" inherits the alternate/false positive relationships of the file(s) it effectively replaces, just file metadata and tags. Merging options also don't cover relationship inheritance, but I don't think they should, as there's only one sensible way to go about it. Is this an oversight? If I replace a file with one that has slightly less artifacting, I can think of no possible scenario in which it should not inherit the now-deleted lower quality file's alternate and/or false positive relationships with other files. >BTW, there are some (old) diagrams at the bottom of this page >More examples at the bottom As an aside, I'd like to point out that the graph gives no explanation as to what the blue/red and single/double lines mean for the file relationships. Might want to add a key for that.
>>14984 Nevermind, I think I figured out why total relationship inheritance for duplicates isn't possible, due to a certain edge case: the old and new file in a duplicate merge could have conflicting relationships. Say we have A, B, and C. A and C are set as alternates. B and C are set as false positives. If you make A and B duplicates with one better than the other, they would have to merge conflicting relationships, because B and C can't both be alternates and false positives. Granted, this is only possible if one of the conflicting relationships is objectively wrong, but Hydrus has no way to know which is wrong, and I believe it has no protocol for asking the user to determine which is correct upon seeing such a conflict. Without such a protocol, the only ways to prevent a logic conflict would be either automatically choosing only one of the files to keep relationships from, or automatically erasing the relationships from both when the merge is done. I'm guessing Hydrus does the former in some attempt to preserve at least some of the information? I just got unlucky in that sometimes it chose the file without any existing relationships, maybe? Or maybe it does the latter and wipes all relationship info, and I just hadn't run into higher quality versions of files from alternate sets before?
>>14985 On further thought, there is a better way to handle potential conflicts. If there is no conflict, keep all relations; if there is one, Hydrus could choose randomly, choose the older file's relationships, or choose the newer file's. This could be a customizable setting for the user, and I would be apt to make choosing the older file's relationships the default.
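To make the proposed policy concrete, here is a minimal Python sketch of it. Everything here is hypothetical and illustrative, not actual Hydrus internals: `merge_relationships` and the relationship strings are made-up names for the sake of the example.

```python
# Hypothetical sketch of the proposed merge policy: when a new file replaces
# an old one as a duplicate, inherit every relationship that does not
# conflict, and resolve conflicts with a configurable preference
# (the older file's relationship wins by default).
def merge_relationships(old_rels, new_rels, prefer_older=True):
    """Each rels dict maps other-file id -> relationship string."""
    merged = dict(new_rels)
    for other_file, rel in old_rels.items():
        if other_file not in merged:
            merged[other_file] = rel      # no conflict: inherit freely
        elif merged[other_file] != rel and prefer_older:
            merged[other_file] = rel      # conflict: older file's relation wins
    return merged

# A = old file, B = new duplicate; C conflicts, D is inherited without conflict.
print(merge_relationships({'C': 'alternate', 'D': 'alternate'},
                          {'C': 'false positive'}))
# {'C': 'alternate', 'D': 'alternate'}
```

With `prefer_older=False` the same call would keep `'false positive'` for C, which is the other resolution the post suggests making user-selectable.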
>>14983 load up your backup. ...you do have a backup, right?
Apologies if it's been answered already, I ctrl-f'd and couldn't find an answer, but how do I unhide tags? I accidentally hid one and I can't find the option to unhide in the GUI or docs
>>14743 Anon with the crashing hydrus here. Sorry about the late reply. >do you mean the program actually halts instantly and disappears Yes >What happens if you try to drag and drop a page tab onto firefox The client froze for a second and then closed. >Can you drag and drop page tabs to reorganise them inside hydrus Yes, this seems to work fine. >Can you drag and drop files from firefox onto hydrus I just tried with some images from danbooru and there were no crashes. It did import the images without any prompt however. I've also experienced crashes when dragging and dropping files from hydrus to my file manager, though it doesn't happen every time like with firefox.
>>14983 Sorry to hear about your problem. This happened to someone else this week too, so I have decided to write a proper help document on how to navigate this situation. Here is the first draft. Let me know how you get on and if anything was confusing or badly written or simply did not cover your situation. https://files.catbox.moe/1g3vcm.txt
>>14990 Seems like I'm screwed. The DB files seem to have been overwritten. No backup of course, even though I have a NAS with plenty of space, because I'm a goddamn idiot. At least the files are still there. What is odd is that when I booted the fresh client it was updating my database from the older version to 570 like normal, but then I got the "This looks like the first time you have run the program" message when it was done.
>>14991 Damn, I'm sorry to say I know how you feel. If and when you want to set up a new client, your media files are the 256 'fxx' directories in "install_dir/db/client_files". The 'txx' stuff is your thumbnails and can be ignored. That you saw an update followed by an empty db is very odd. You only get that popup if the actual client.db file does not exist on boot. Assuming you have a 'client - 2024-04.log', you might want to look through it, since that stuff should all be logged. I presume that will have been overwritten too, so it'll be the log of the newer db file. You know what to do, but when you have some time, check this out: https://hydrusnetwork.github.io/hydrus/after_disaster.html >>14988 tags->manage tag display and search. 'Single file views' is basically the media viewer, 'multiple file views' is the 'selection tags' box when you are looking at the thumbnail grid.
>>14900 ok, I'm trying to install from source. I installed Python 3.11, I did the Add Python to PATH, and from there I have absolutely no clue what to do. I am at this step >Then, get the hydrus source. It is best to get it with Git: make a new folder somewhere, open a terminal in it, and then enter: I have no idea what "open a terminal" means in this context, as I see nothing to do it from within the folder. I opened what I think is python, the 3.11 interpreter, and tried to get it to point to a location, but I just get syntax errors. While I am not computer illiterate and can use the command line if required, I am also an idiot. Now, reading ahead, it seems like I need sqlite3; am I to understand this is part of the git pull when I figure out how to get that to work? I also went with the 'c' option for mpv because it said "ideally"; I pulled the libmpv-2.dll and renamed it to mpv-2.dll, was that all that was needed? I also took out the ffmpeg.exe from the ffmpeg full build, is that all that's needed for that? I am also assuming that clicking "setup_venv.bat" on the git right now shows me the script that will run, and I am at a near complete loss when looking at it. The install directory will be C:\Hydrus. I am on Windows 10; are there any other things I would need to change inside that bat to install? That's where it's currently installed. When I'm doing this stuff, I intend to back up hydrus to an F drive and rename the base directory to C:\HydrusOld, in case this completely fails. If it succeeds, I then move the old db folder to the new one? In this case it would be C:\HydrusOld\Database\Hydrus Network\db, and I assume at the same time move the thumbnails from C:\HydrusOld\Thumbnails to their same spot in the new one, leaving the actual file location untouched? So with that out of the way, the computer is not new, but it's 'modern': AMD 1700, 64GB RAM, 970 2TB NVMe. The thumbnails are on the NVMe, the images are on a HDD though. 
I meant to post this a while ago but I think my post got eaten.
(9.94 KB 970x145 1.png)

(70.94 KB 1690x372 2.png)

(136.30 KB 2530x882 3.png)

Is the Hentai Foundry login fixable? I tried passing cookies via Hydrus Companion, and I've tried editing the login script to include a requirement for the PHPSESSID cookie, but it didn't work. Checked previous threads as far back as 2022 and no one seems to mention it. It broke for me somewhere mid-2023.
>>14993 No worries. If you are on normal Win 10, you will not have git, so let's install that first. In the 'running from source' help, check the 'Git for Windows' expanding box, pic related. Follow that pain in the ass install and you'll be set to do any git stuff in future, for any program. By 'open a terminal', I mean the normal command line. You will be telling the new program 'git' to download my code straight from the github repository. So, go to your C:\Hydrus and then shift+right-click on it and select 'open command line here' or 'open powershell' or whatever the option is for Win 10. Then paste 'git clone https://github.com/hydrusnetwork/hydrus' in and the directory will populate. Then continue with the help guide, which will tell you about the sqlite3 and mpv stuff. There's nothing special about Win 10, I think! Win 7 users have to do some special stuff, but you'll be fine with the 'easy/normal' choices as you go through the venv install and stuff. Your database migration plan looks good. As long as you have a backup, that's the most important thing. Yeah, just move/copy your four client*.db files from the old install_dir/db location to the new one. The source code install looks very similar to the built executable one, same 'db' dir etc... Move your files/thumbs to wherever is convenient, and when you boot the client, if it can't find your files/thumbs because the location has moved, it'll say 'I couldn't find xxx folders, can you point me to them', and you can fix it all there and then. Check database->move media files once you are booted in source to make sure everything is correct. Thanks for your feedback; I will update the help here to be clearer on git and the command line. I never liked how un-obvious that expanding box is.
(237.22 KB 1168x589 Screenshot_20240416_081759.jpg)

>>14991 Before ANY update a backup is a must. If you have no space or just wanna keep it simple, backing up the .DB files is enough to be safe. BTW, Hydrus has to be NOT running when those files are being copied.
Anybody else experiencing slow downloading from Pixiv lately?
>>14999 As a test I grabbed about 450 images and download speed was about 1.7MB/sec the whole time. Slow is relative, but that's about as fast as my internet can go, so it seems fine to me. If it's going slower than that then maybe you got throttled?
>>14998 I wish the db size didn't get so bloated when you use the PTR.
>>14970 >I am not familiar with that downloader, so I don't know for sure, but it might be my recent URL stuff screwed it up since it uses some clever characters in its requests. I've fixed some mistakes I made last week, but there may be more work to do on the downloader side (it might be the downloader was a little crazy beforehand to deal with the crazy way I was doing things back then). Let me know how you get on! It's working again. Thanks hydev.
Is there undo in the duplicate filter?
I had a mixed week. I did some boring stuff, some quality of life, and folded the 'future build' updates into the main release. Users who use the manual 'extract' releases will have special update instructions. The release should otherwise be as normal tomorrow.
(16.82 KB 524x162 hydrus-disk-usage.png)

Not sure if this is intentional or a bug (I'm using the Flatpak version on Linux Mint 21.3, so it may be Flatpak- or even Mint-specific), but deleting files from Hydrus doesn't seem to work. Instead of deleting them, it seemingly just moves files to `~/.var/app/io.github.hydrusnetwork.hydrus/data/Trash/`. This obviously ends up taking a huge amount of disk space, pic related
Hi. One of my parsers recently started to get HTTP code 400/bad request. Both browser and curl have no problem downloading the very same URL. So is there any easy way to see the request that the parser is sending to the host?
>>15004 Try the middle mouse button.
https://www.youtube.com/watch?v=2umMp1VHWiA windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v571/Hydrus.Network.571.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v571/Hydrus.Network.571.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v571/Hydrus.Network.571.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v571/Hydrus.Network.571.-.Linux.-.Executable.tar.zst I had a simple week working on some quality of life and background stuff. There are special install instructions this week! Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html new build tl;dr: If you use the Windows or Linux .zip or .tar.zst 'Extract' releases, you have to do a clean install! (https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs). If you are a Windows installer/macOS App/source user, you do not need to do a clean install; just update as normal. The future build test went well, so I am folding the updates into the main build. The above releases are updated from Python 3.10 to 3.11, and Qt (UI) and OpenCV (image processing) are moved to newer versions. There aren't any super important changes, but a bunch of little things should work better or be a bit faster. Unfortunately, the new libraries cause a dll conflict with v570 and earlier (basically the executable sees the py310 dlls beside the py311 ones and gets confused), so we need to clear the install directory of the old files. Just do the clean install and you should be fine! You don't have to do it for the other modes because: the Windows installer basically does a clean install every time; the macOS App is always enclosed in its own thing and doesn't have to worry about old files; and running from source doesn't care about dlls in the same way, although you might like to rebuild your venv today, just to catch up your own library versions. 
If you do have trouble booting v571, then please revert to your v570 backup and let me know what happened! There were no problems in the test that people tried a couple weeks ago, so I'm not expecting anything much, but I'll jump on any reports. Also, if you have been struggling with some annoying menu or drag and drop bug, let me know if the new version of Qt fixed you. the rest The archive/delete filter gets a couple of workflow changes: first, if you finish a filter and there is more than one possible local file service to delete from, those 'commit' buttons are now disabled for 1.2 seconds. This is to stop you from spamming 'enter' through this dialog when it is suddenly different (I've done this myself more than once). Second, if you hate the idea of these buttons being disabled, and you always want to delete from all local file services anyway, please hit the new 'when finishing filtering, always delete from all possible domains' checkbox under options->files and trash, which lets you always have a simple 'commit' dialog that only shows 'delete from all local file services'. The client now tries to load truncated images by default. The damaged images it now allows might be missing one pixel in the bottom right, or have a few lines of grey at the bottom, or might appear fine but just have some crazy metadata, but they won't, fingers crossed, fail with a 'malformed image' error any more. We had some stability problems with this mode some years ago, so I turned it off and only allowed it on in a debug menu on a per-session basis, but the situation seems to have cleared up, so it is now back on. If you need to turn it off, hit options->media. Any time you have a normal single column list in the program, e.g. the list of URLs in 'manage urls', you can hit Ctrl+C or Ctrl+Insert and now it'll copy better strings (e.g. without '(1)' decorator cruft), and it'll copy every row you have selected. 
I wrote a new emergency help document, 'help my db disappeared.txt', for the install_dir/db folder. If you ever boot up and get the 'this looks like the first time you have run the program' popup, there's now a guide to figure out what the hell happened. next week I didn't find the time to get to the 'share' menu rewrite, so I'll try again.
As I've started using kemono, I've noticed a lot of artists will put low quality or censored versions of their art on the Patreon/Pixiv/Fanbox page, and hide the higher quality or uncensored/decensorable files behind a link that I still have to manually download. Is there any way I can make a subscription that doesn't actually get the files, but just gives me the link with a notification so I can check them manually? Or would it be better to do this in a separate program? For now, for any artists I have this issue with, I just pause all subscriptions to keep out unnecessary files and check them monthly, re-activating the subs for other sites if uploads on kemono stop.
>>15009 This new version fixed that issue caused by something to do with newer versions of QT where windows would not properly remember which monitor they were on. But now the font on my secondary monitor is too small because that monitor is smaller, and I can't for the life of me remember or find where I can raise the font size in the options. I don't much mind if it causes the font on the main monitor to be larger in doing so.
(1.74 MB 1920x1080 Qt5.png)

(1.82 MB 1920x1080 Qt6.png)

Apologies if this had been fixed as I remember this being discussed But is there way to fix the Qt6 not playing well with windows scales above 100%? I've been sticking with Qt5 just cause the gui is scaled much much better. I could use Qt6 and just use 100% native windows scaling but it makes everything else look like ass on my craptop. I can get the thumbnails looking fine but the gui in particular is kinda fucked IIRC this was just something with Qt and not something with an easy fix? Looking at the screenshots side by side, it looks not as drastic as I thought, but it still annoys me.
i downloaded a bunch of high res wallpapers and now hydrus keeps crashing on me. fix?
>>15008 Oh. "back" and "skip" are basically "previous" and "next". I didn't read the tooltips, because I was using Hydracula-alternate-tooltip-color, which specifies a light background but no text color, so the text was light, too. The default theme does not specify anything, and the background is still light.
The newest (arch) version of mpv, v0.38.0-2, seems to be breaking video playback. See picrel for the error I'm getting In case someone else is dealing with it: Downgrading to v0.37.0-2 fixes it for now. Also curious if this is a general problem, or just with the arch build.
Requesting a second opinion before filing an issue with Python API library - https://gitlab.com/cryzed/hydrus-api get_service_mapping() is failing on a fresh Hydrus install. Printing values of service to stdout, we can see the last one before the error is a string, not a dict. Client API version: v42 | Endpoint API version: v64 {'name': 'downloader tags', 'type': 5, 'type_pretty': 'local tag service', 'service_key': '646f776e6c6f616465722074616773'} {'name': 'my tags', 'type': 5, 'type_pretty': 'local tag service', 'service_key': '6c6f63616c2074616773'} {'name': 'my files', 'type': 2, 'type_pretty': 'local file domain', 'service_key': '6c6f63616c2066696c6573'} {'name': 'repository updates', 'type': 20, 'type_pretty': 'local update file domain', 'service_key': '7265706f7369746f72792075706461746573'} {'name': 'all local files', 'type': 15, 'type_pretty': 'virtual combined local file service', 'service_key': '616c6c206c6f63616c2066696c6573'} {'name': 'all my files', 'type': 21, 'type_pretty': 'virtual combined local media service', 'service_key': '616c6c206c6f63616c206d65646961'} {'name': 'all known files', 'type': 11, 'type_pretty': 'virtual combined file service', 'service_key': '616c6c206b6e6f776e2066696c6573'} {'name': 'all known tags', 'type': 10, 'type_pretty': 'virtual combined tag service', 'service_key': '616c6c206b6e6f776e2074616773'} {'name': 'trash', 'type': 14, 'type_pretty': 'local trash file domain', 'service_key': '7472617368'} 646f776e6c6f616465722074616773 Traceback (most recent call last): File "/home/g/script.py", line 88, in <module> sys.exit(main()) File "/home/g/script.py", line 74, in main mapping[service["name"]].append(service["service_key"]) TypeError: string indices must be integers
Would it be possible to make it so that 'additional urls' and 'parsed tags' are returned in the json response of the '/manage_pages/get_page_info' API call?
Looks like the old dark style tooltips work now after the Qt update.
Were regex substitutions changed in some way with case sensitive flags (or global flags in general)? Some of my parsers stopped parsing some tags that had a "^(?i)" at the start which now throws an error "global flags not at the start of the expression at position 1". Seems like it can be fixed by changing it to "(?i)^".
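For reference, this is a Python 3.11 change rather than a hydrus one (the new builds moved from Python 3.10 to 3.11): global inline flags like `(?i)` are now an error anywhere but the very start of the pattern. A quick reproduction, with a made-up example pattern:

```python
import re

# Python 3.11+ rejects global inline flags placed after position 0.
# On 3.10 this was only a DeprecationWarning; on 3.11 it raises re.error:
# "global flags not at the start of the expression at position 1"
try:
    re.compile(r"^(?i)title")
except re.error as e:
    print(e)

# Moving the flag to the very start restores the old behaviour:
pattern = re.compile(r"(?i)^title")
print(bool(pattern.match("TITLE: example")))  # True
```

So the fix the post describes, rewriting `^(?i)` as `(?i)^`, is exactly what the `re` module now requires.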
>>14984 >>14985 >>14986 Hmm, I think this is a bug, then. If you are definitely sure that the pair of files are being set as duplicate, and stuff that was true of the 'replaced' file is no longer true for the new king, then I think something is getting messed up. I'll write this down and have a look at things. Although, as I said, maybe I should just plan out a better visualisation of things here so we just have more and better data. In your case about some stuff being mutually exclusive, like merging alternates when other relationships are already set as false positive, the client tends to handle this by dissolving the prior incompatible relationships. This is a complicated case you've thought of though, so I'm not sure exactly how it handles it, which is prioritised etc... False positive stuff doesn't come up much, but if you do a lot of 'false positive' setting, then I wonder if that is what you are seeing. I will have a think about what you have written and see what I can do my end. >>14989 >It did import the images without any prompt however. I think this is correct. As long as hydrus is eating URLs, that should go through without any prompt. This is all pretty strange, I'm sorry to say. As much as it sucks, I think I have to mark this up as 'Wayland being a pain again'. Something here is being mis-handled in the DnD/clipboard level and is causing crashes. Perhaps I should be doing something better with the way I trigger the DnD event, but I am not expert enough to know what. If you use the built release, I recommend you move to running from source. If you run from source, I recommend playing around with your Qt version under the 'setup_venv' script. Maybe trying an older/newer release will stop the crashing. If you give this a go, I would be interested to know what you find. 
https://hydrusnetwork.github.io/hydrus/running_from_source.html And again, sorry to not offer proper help here, but if it hurts when you raise your arm, don't raise your arm: if DnDs crash your hydrus, don't do DnDs (for now). We'll see if a new version of Qt or python or whatever else fixes this problem. Let me know how you get on! >>14994 Thank you for this report. I do not know, but I will have a look at it. >>15006 This is crazy!! I use a package, 'send2trash' to help do multiplat 'send to recycle bin', so my only guess is it is failing in a very weird way here. Have you ever seen something like this before on Mint? Does Mint have per-program trash? That io.github.hydrusnetwork.hydrus name is something I only set a couple weeks ago. I'm pretty certain you can delete all that shit, I have never seen that before. I expect it is rolling in a bunch of temp files too. You can turn off the recycle bin behaviour under options->files and trash and 'when physically deleting files or folders, send them to the OS's recycle bin'. I will investigate send2trash's options--maybe there's a new flag I am not setting here. Thank you for this report.
(38.59 KB 275x294 d24.jpg)

how do i set it so that my files in the media viewer don't automatically zoom beyond 100%. I know I can press z, but I want the lower-quality images to be at their "native resolution" (don't think that's the term) while the high-quality images can fit my screen all the time
>>15007 Try help->debug->report modes->network report mode. It is a bit spammy, but it'll say the URLs with GET params and stuff like API/redirect actions. Let me know if it doesn't give you enough info. >>15010 >Is there any way I can make a subscription that doesn't actually get the files, but just gives me the link with a notification so I can check them manually? Not really, I'm afraid. Some users have been figuring out clever downloaders that parse for these sorts of URLs and try to chase them down, often for sites like patreon where this stuff abounds, but then of course the URL to chase is often some odd file host that hydrus just isn't clever enough to talk to. One solution I know, that is actually a bit like your thought, is to have the parser attach the high-quality link as a 'known url' of the bad quality file. You then have a low quality file with an URL you can click to quickly get to the actual thing you want. It isn't perfect, but it does smooth out the workflow a little. Ultimately, if you are chasing up artist posts with links, I think it is probably, for the most part, still an activity that needs human eyes. >>15011 Great, I am glad we are fixing some things! To change the global font size, check 'install_dir/static/qss' and load the 'Fixed Font Size Example.qss' file into a text editor. >>15012 Yeah, I'm afraid I am near-completely hands-off when it comes to UI scaling now. The way Qt6 does this in code is much better than Qt5, and I hate the subject so much (from failing to fix previous bugs) that I am just going to let it do its thing. Unfortunately, I guess due to five different OS variables like UI scale and font size and whatever else, the way Qt actually decides to figure out the problem is hit and miss with different people. 
I do know that most of the actual widget scaling is based, fundamentally, on font size (like a button will be 2.2x as high as the current font height etc..), so you might also like to check out 'install_dir/static/qss' and load the 'Fixed Font Size Example.qss' file into a text editor, as in my reply above. Maybe if you set a smaller font size, it'll give you some breathing room. Let me know what you figure out. >>15013 Can you upload or link me to the wallpapers here, so I can test on my end? And can you say more about your OS and how you are running hydrus (from the Windows installer, running from source, etc..)? If you check 'install_dir/db/client - 2024-04.log', is the stuff around time of the crashes full of information about 'out of memory' errors? Anything similar? >>15016 Thanks for figuring out the fix. How annoying. That's actually my code failing there, and on a fairly reasonable-sounding error (i.e. not some crazy bug on mpv's end), so I'll see if the newest Windows mpv release does the same thing. I wonder if mpv changed their API. >>15019 I don't have a test environment for his library right now, but that 646f776e6c6f616465722074616773 is the service_key for 'downloader tags', the same as your first item in that iterable, if that helps. I'm looking at his code and it seems simple so I am not sure what is happening. It is almost like my Client API is giving that bullshit service key in the wrong place. 
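As an aside on reading that report: going by the '/get_services' output, the default services simply use the hex-encoded ASCII of their own name as the service_key, so the stray string in the traceback decodes directly:

```python
# The default Hydrus services use the hex-encoded ASCII of their own name
# as the service_key, so the mystery value from the traceback is readable:
key = "646f776e6c6f616465722074616773"
print(bytes.fromhex(key).decode("ascii"))  # downloader tags
```

That confirms the loop was handed a bare key string rather than a service dict, which matches the `TypeError: string indices must be integers` in the report.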
If it helps, this is the output of the '/get_services' call on a fresh client here: {"local_tags": [{"name": "downloader tags", "type": 5, "type_pretty": "local tag service", "service_key": "646f776e6c6f616465722074616773"}, {"name": "my tags", "type": 5, "type_pretty": "local tag service", "service_key": "6c6f63616c2074616773"}], "tag_repositories": [], "local_files": [{"name": "my files", "type": 2, "type_pretty": "local file domain", "service_key": "6c6f63616c2066696c6573"}], "local_updates": [{"name": "repository updates", "type": 20, "type_pretty": "local update file domain", "service_key": "7265706f7369746f72792075706461746573"}], "file_repositories": [], "all_local_files": [{"name": "all local files", "type": 15, "type_pretty": "virtual combined local file service", "service_key": "616c6c206c6f63616c2066696c6573"}], "all_local_media": [{"name": "all my files", "type": 21, "type_pretty": "virtual combined local media service", "service_key": "616c6c206c6f63616c206d65646961"}], "all_known_files": [{"name": "all known files", "type": 11, "type_pretty": "virtual combined file service", "service_key": "616c6c206b6e6f776e2066696c6573"}], "all_known_tags": [{"name": "all known tags", "type": 10, "type_pretty": "virtual combined tag service", "service_key": "616c6c206b6e6f776e2074616773"}], "trash": [{"name": "trash", "type": 14, "type_pretty": "local trash file domain", "service_key": "7472617368"}], "services": {"646f776e6c6f616465722074616773": {"name": "downloader tags", "type": 5, "type_pretty": "local tag service"}, "6c6f63616c2074616773": {"name": "my tags", "type": 5, "type_pretty": "local tag service"}, "6c6f63616c2066696c6573": {"name": "my files", "type": 2, "type_pretty": "local file domain"}, "7265706f7369746f72792075706461746573": {"name": "repository updates", "type": 20, "type_pretty": "local update file domain"}, "616c6c206c6f63616c2066696c6573": {"name": "all local files", "type": 15, "type_pretty": "virtual combined local file service"}, 
"616c6c206c6f63616c206d65646961": {"name": "all my files", "type": 21, "type_pretty": "virtual combined local media service"}, "616c6c206b6e6f776e2066696c6573": {"name": "all known files", "type": 11, "type_pretty": "virtual combined file service"}, "616c6c206b6e6f776e2074616773": {"name": "all known tags", "type": 10, "type_pretty": "virtual combined tag service"}, "6661766f757269746573": {"name": "favourites", "type": 7, "type_pretty": "local like/dislike rating service", "star_shape": "fat star"}, "7472617368": {"name": "trash", "type": 14, "type_pretty": "local trash file domain"}}, "version": 64, "hydrus_version": 571} That should be valid JSON. There's a heap of bullshit here in that I still have two different service reporting objects in the same method, but he's accessing the older structure I think. I think it is worth doing a little more exploration on your end, since it might be my mistake here, but otherwise, sure, report this as a bug. Maybe a recent Client API change I made (like adding the version info to every call) is messing up his parsing of the response here. Not sure. If you can do some debug printing, what's the full value you get for 'client.get_services()"? That should be basically the same as what I posted, I think! If it is, then I think it is cryzed's stuff messing up somehow.
>>15022 Sure, I'll see what I can do! >>15024 Great, thanks for letting me know. We got there in the end. I'll be testing 6.6.3 in the next few weeks, so if you notice a problem with that or other Qt tests, please let me know. I'm an inveterate whitemode boomer and forget too often about darkmode stuff when I do testing. >>15026 Ah, that could be a python 3.10/3.11 thing. I am not sure, but I know that the 're' library gets little weird rule tweaks like that in the different major python versions. I didn't do anything on purpose here, and if things worked in v570, I'm afraid I have to pass the buck and blame this on python. >>15029 options->media, and then in the 'media viewer filetype handling', go into the 'image' entry (and animation, video, whatever) and tell it to 'scale to 100%' or whatever you like for the different view scenarios. It is a bit autistic, but poke around and see what works for you.
>>15031 >options->media, and then in the 'media viewer filetype handling', go into the 'image' entry (and animation, video, whatever) and tell it to 'scale to 100% damn, so that's how! I read through the settings one by one but I guess I didn't actually understand what the settings meant X( Thank you so much
>>15030 Yeah it's an mpv API change, I ran into the same issue with a different program, also had to downgrade to 0.37 https://github.com/jaseg/python-mpv/issues/273
(1.80 MB 1920x1080 Qt6pt8font.png)

>>15030 okay, tried messing around with the font size a bit. unfortunately, even knocking it down from the default (9, it seems) by 1 already looks too small in some way, and going further down looks worse as well. I think my biggest issue with it is that the padding on widgets seems just a lot bigger than on Qt5. Guess I'll stick with Qt5
>>15030 >To change the global font size, check 'install_dir/static/qss' and load the 'Fixed Font Size Example.qss' file into a text editor. Danke.
I had a good if simple week. The 'share' menu (off of thumbnails or the media viewer) is completely overhauled, and everything in it added to the shortcuts system. The release should be as normal tomorrow. >>14994 Hey, I'm sorry to say I checked this out today and both login scripts, the guest and the user/pass, worked ok for me. However, if you are logged in with your browser and are using Hydrus Companion to copy cookies across, I think you should hit up network->logins->manage logins and hit 'flip active' on it, making it no longer active (and thus it won't check its cookies and try to initiate logins and stuff), since Hydrus Companion is handling it all. It looks like a normal login in HF does not give you PHPSESSID, only the guest login, so if your login script is checking for that but you are a normal login, that could be what you are seeing here. You might also need to tell HC to copy your browser's User-Agent across too, if you are only doing cookies. Not sure if HF cares about User-Agent, but it is common to more difficult login-sync problems. If you do not want to use HC and just want stuff to 'work', I recommend completely deleting the two HF login scripts and hitting 'add defaults' on 'manage login scripts', to add the defaults back in. Wipe your cookies for the domain too, and start back from square one. It works here, so if it isn't on yours even reset to defaults, I imagine it is possibly a Cloudflare-style block or maaaybe some odd user setting? I'm not familiar with HF accounts, so I can't talk too cleverly.
Is there a way to export/import deletion records? I set up a second install and, so far, it's the only thing I have left to set up.
>>14905 >I believe they fixed it soon after I posted that. This downloader now downloads the actual version (without descriptions). It also may not have been a bug, as now it depends on account settings and Hydrus has access to user-specific data.
Ok, I set up a dummy hydrus and got all the repository files, merged them into the main one's files, and am going through the relatively painfully slow process of processing them. It seems to do just fine for 3-4, then drags to an absolute crawl; not sure if this is just a visual bug or whatever. To check if this was going slower because of the client size, I restarted with nothing, and it was about the same, however I paused all network traffic and repository synchronization. I unpaused synchronization and it didn't start up; I had to manually tell it to start processing. So what I found was that pausing all network traffic paused repository processing. My use of pausing all network traffic was just to stop anything from downloading when I didn't want it to. If it's meant to also pause repository processing, I suggest changing its name; if it's not meant to pause processing of anything already downloaded, there may be some code tangled up in there. I suspect it also won't let other processing happen automatically, but I don't know a good way to test that, or even all of the background crap that may go on in idle, to begin checking everything.
https://www.youtube.com/watch?v=ZMbBsR6Hc6o windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v572/Hydrus.Network.572.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v572/Hydrus.Network.572.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v572/Hydrus.Network.572.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v572/Hydrus.Network.572.-.Linux.-.Executable.tar.zst I had a good simple week mostly doing just one thing: overhauling the 'share' menu. Note: If you are updating from v570 or earlier and you use the Windows or Linux .zip or .tar.zst 'Extract' releases, you have to do a clean install one time to get v571 or later! (https://github.com/hydrusnetwork/hydrus/releases/tag/v571, https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs). If you are a Windows installer/macOS App/source user, you do not need to do a clean install to get over the v570->v571 bump; just update as normal. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html share menu Like the 'open' menu a couple of weeks ago, the 'share' submenu you see off of the thumbnail or media viewer right-click menu has been given a full pass. The layout is clearer, and exactly the same in all locations, and everything it does is now mappable in the shortcuts system. Also, you can now copy files' thumbnails as bitmaps from this menu. Should let you do saucenao style lookups on unusual filetypes that have a thumb but no normal image. The shortcuts here are a bit smarter, too--like with the 'open similar files in a new page', I collapsed the four 'copy xxx hashes' commands down to one command with a dropdown (which has options for blurhash/pixel_hash, if you need them); and the three 'copy xxx bitmap' commands are now one with a dropdown that also has the new 'copy thumbnail bitmap' option. 
Most of the shortcuts also have a choice between 'do the focused file' vs 'do all selected files', if that is important (existing shortcuts will default to the latter, which was the previous behaviour for stuff like 'copy sha256 hashes'). Note: I have removed the 'share on the local booru' command. The Local Booru (an optional way to share files with friends using hydrus) was a fun experiment, but I have decided to finally retire it. I never found enough time to dev it properly (I don't think I've touched it in at least five years), and there are many better ways to share files online, be that a third-party host you simply drag and drop to or a clever Client API solution. So, it is gone from the menu today, and I'll slowly dismantle it over the next few weeks until it is completely removed. couple other things There's a new checkbox under options->files and trash that adjusts the 'remove processed files after an archive/delete filter' to also remove the stuff you skipped. Also, thanks to a user, there's a new 'e621' themed stylesheet under options->style. Users who run from source who want to try this probably have an extra step to make sure this works--check the changelog/options panel help text. next week I've been thinking of adding a new tab to the autocomplete dropdown that will show children of the current context (e.g. you are looking at something with 'evangelion', in that tab you'll see 'asuka', 'rei', etc. for quick selection). I think I will play with that idea and do some more misc small jobs.
Edited last time by hydrus_dev on 04/24/2024 (Wed) 22:08:17.
What is the client_updates folder for? It has modified timestamps from 2017 so am I safe in assuming it's legacy cruft that could be removed?
It seems like Hydrus can only interleave downloads that are on different pages. And sometimes either the file page, or the image file, or both are not accessible either with or without a proxy. So it is useful to enter URLs for different sites into different url pages. It would help if "watch clipboard for urls" had a way to filter URLs, so all the errors for a particular domain stay on a separate page. For example, I would download image files when I get the right IP on the proxy or when server access is unblocked by local authorities, and then I would download the descriptions without the proxy.
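For what it's worth, nothing like this exists in the client today; but the routing idea being requested, bucketing watched-clipboard URLs by their domain so each site's errors stay on their own page, could be sketched in a few lines of Python. All the names here are hypothetical, this is just an illustration of the feature, not hydrus code:

```python
# A minimal sketch of per-domain URL routing, as the feature request above
# describes: group incoming clipboard URLs into one bucket per domain, so
# each would-be download page only sees one site's URLs (and errors).
from collections import defaultdict
from urllib.parse import urlparse

def route_urls_by_domain(urls):
    """Group URLs into per-domain buckets (one bucket per would-be page)."""
    pages = defaultdict(list)
    for url in urls:
        host = urlparse(url).hostname or "unknown"
        # strip a leading 'www.' so www.example.com and example.com share a page
        domain = host.removeprefix("www.")
        pages[domain].append(url)
    return dict(pages)

clipboard = [
    "https://www.example.com/post/123",
    "https://example.com/post/456",
    "https://danbooru.donmai.us/posts/789",
]
print(route_urls_by_domain(clipboard))
```

In the client this would presumably hang off the 'watch clipboard for urls' option as a domain filter or per-domain destination page, but that design is up to the dev.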
scoop update to v572 seems to be busted. I get this error Could not find 'Hydrus Network'! (error 16) At C:\Users\stallman\scoop\apps\scoop\current\lib\core.ps1:860 char:9 + throw "Could not find '$(fname $from)'! (error $($proc.ExitCo ... + ~~~~~~~~~~~~~ + CategoryInfo : OperationStopped: (Could not find ...rk'! (error 16):String) [], RuntimeException + FullyQualifiedErrorId : Could not find 'Hydrus Network'! (error 16)
(165.22 KB 499x376 woah crash.png)

>You can hit shift+enter to create an OR entry without opening the OR predicate window Woah.
Almost off-topic, sorry, but have you seen https://github.com/astral-sh/uv hydev? It is a faster, drop-in pip replacement. It's pretty cool; I've been using this for a few releases since I'm running the client from source and it's a lot faster than pip (and as reliable from my light testing). I'm guessing you play around with dependencies quite a bunch, so I think you should give it a try!
Hey guys, I had a script going for a site, and they changed their creator and title tags in the HTML. So, I've ended up with a couple thousand pics missing those tags now. I've fixed the problem in my parser, but how would I go about re-downloading tags for those pics, so I can get the missing title and creator tags added to them?
I have a strange issue. I encountered some files that hydrus wouldn't properly delete while the client was running. Essentially what happened was that from the client, if I told it to delete the file, it would still show the file as in the database. It would lack a delete reason and not show the trash icon. I could choose the delete option as many times as I wanted, and it would pretend to delete it, but refreshing the page with the hashes and search set to all known files would show it as still in the database and not deleted. I could even add it to one of the file domains if I wanted. However, Hydrus did physically delete the file attached to the record (it moved it to the recycle bin). If you tried to view the file, it would say the file was not found. I could manually restore the file from the recycle bin and then it would work. If I deleted the file again from hydrus, it would skip the recycle bin for some reason. If I imported it through hydrus, it would see the file had been deleted and not import unless I told it to delete without leaving a delete record (even if I told it to delete it with one, I could re-delete it and remove the delete record). After reimporting this way, it would again move the file to the recycle bin if I chose to delete it. I copied the files from a recent backup to another folder so I could try to pinpoint the cause. In the end I am not sure what actually fixed the problem. I started with 16 files, then when I tested a random file I noticed it had the same problem of hydrus putting the file in a state of being both deleted and not deleted at the same time. I was initially on version 569 when I noticed. I decided to do a backup of the client files folder and updated to 571 with a clean install. I then noticed that the 1 random file was finally marked as deleted, with a reason. Even more strange, I was able to get it to mark 13 out of the 16 original problematic files as deleted as usual.
But the 3 that didn't get marked as deleted still showed the behavior. Closing and reopening the client seemed to fix the 3 remaining files. I checked the database integrity, and that found 0 errors. I also tried some of the options relating to orphan records. These options seemed to find no issues and finished quickly.
>>15052 On a "url import" page, click "import options" and change "check URLs to determine "already in db/previously deleted"" to "do not check" Make sure the URLs are not in the page's "file log". Select the files without those tags and copy their file page URLs to download them again (do not delete the URLs).
Is there a way to reset your colors to default once they've been changed?
I'm dumb as a rock, how can I drag and drop twitter links to my download -> urls page? I usually do this with booru pages but it doesn't work with twitter. I tried looking into a twitter parser but I have no idea how to make it work
>>15033 Thanks for this! I'll roll in 1.0.6 next week and it should be fixed. I would have done it for v572, but it turns out he renamed the library from python-mpv to mpv on pypi, and the old name never got the fix, so I thought I was waiting! >>15034 Ah, shame. There's probably a way to set 'padding: 0;' and 'margin: 0;' on the core Widget in the same way the text overwrite works, but I am not sure how that would propagate through the rest of the sub-widgets. You might like to try it, and perhaps compare with the other .qss files in the folder to see how they do things. Let me know if you discover anything good, and if you figure out a 'this works' .qss, please send it in and I'll add it to the defaults however is appropriate. >>15039 I don't think there is! I'll have a think about this. It isn't a super simple database thing either, so I can't tell you the easy SQL to run. I'll have a think, probably open it up on the Client API and see how we could do it with sidecar import/export too. >>15042 Yeah don't worry if PTR processing suddenly slows to 1 row/s for a bit. There are some bumps, and whenever it saves to disk it has to pause a bit. Thanks for letting me know about pausing network traffic also pausing repo processing. I wonder what's going on there. When you switch the pause state of that, it does send a signal to the client, which many other things catch. I wonder if that is resetting some maintenance state or something and the repo processing is resetting itself in the same way. I'll look into it! Normally, though, repository processing is meant to happen in 'idle time' when you aren't using the program (or, mostly, your computer). The options for when idle time kicks in are under options->maintenance and processing. I recommend people not try to rush it and just let PTR syncing happen during normal background and shutdown work, and they'll be caught up in a few weeks.
This also gives their database a chance to grow slowly and organically, which keeps maintenance simple. Pushing it too hard can cause some un-optimised searches for a little while until the normal maintenance timers catch up to the sudden growth. If pausing network traffic also paused ptr processing, was this in idle time? If so, moving the mouse probably kicked you out of idle time, so that was probably what stopped ptr processing. Does that sound correct, or did this also happen after you forced processing to start under review services? >>15044 Yeah I think you can delete that. We don't use that any more. >>15046 Interesting idea! I want to add some options to the URL watcher, so I will keep this in mind. >>15048 Sorry for this trouble, and thank you for the report. I have not seen that error before, and it looks pretty core/low level. Were you updating the .zip release from v570 or earlier? There are special update instructions for the (<=v570)->(>=v571) step, check >>15009 --you have to do a 'clean install'. Is that it? If you did do a clean install, are you sure it worked correctly, that all the dlls were cleared out and then re-extracted? That error looks like, I don't know, like the PyInstaller bootloader isn't working correctly. No idea what scoop is, either--is that a package manager or something, that you run? How do you normally install/run hydrus--the Windows .zip extract release, or another way?
>>15051 That looks interesting, thanks, I will check it out. I am happy with pip, especially since it is so compatible and works out of the box, but this is something to keep in mind. >>15053 Damn, I am sorry for the trouble. This certainly sounds like my code messing up. I was going to suggest running database->db maintenance->clear/fix orphan file records, but it sounds like you did that already. I am not sure what is going on here, but if program restarts are fixing it, it sounds like the delete command probably _is_ going through, but the UI element isn't getting the update because, let's say, the panel that shows what is deleted is stuck on a different file or something. Or the UI element is not sending the delete command due to a similar desync issue. I am exploring a problem similar to this related to setting ratings in the archive/delete filter, where sometimes the top-right hover window just doesn't get updates and I don't know why it happens. I will investigate if the same thing could be happening to the file deletion state somehow. Please let me know if you discover anything else here, particularly anything that makes the problem more or less likely to happen. Thank you for the report. >>15056 Sorry, I don't have good 'reset to default' support. Default colours are pic related, the top blue is #d9f2ff, the thumb border grey is #dfe3e6, and the autocomplete background is #ebf8ff. The rest is basically black/white or rare enough it doesn't matter. >>15057 Unfortunately, since Elon took over, twitter shut like a clam and it is now very difficult to download from them. Basically they got DDoSed by all the guys trying to train text AI models, so he shut off all easy API access to twitter. We just don't have a good twitter downloader any more, and mirror services like Nitter have also been shutting down. As it happens, I may have a solution for next week, no promises.
>>15056 >>15059 My boomer brain yet again completely forgot about the existence of darkmode, ha ha ha. Here's everything, in 0-255 RGB. I'd love to figure out options 'reset to default' for everything one day, but there's a heap of cleanup I have to do first, I'm afraid.
>>15048 >>15058 oh I wasn't aware of that. I'll try to make a clean install next time I get to it :)
>>15059 That sucks... For now I'll just download the most important stuff manually. >spoiler I'm looking forward to it anyway, thanks!
>>15055 Thanks! I'll try this when I get a chance.
Hydrus (arch, installed from source) no longer starts and I get this error when running setup_venv.sh ERROR: Ignored the following versions that require a different python version: 6.0.0 Requires-Python >=3.6, <3.10; 6.0.0a1.dev1606911628 Requires-Python >=3.6, <3.10; 6.0.1 Requires-Python >=3.6, <3.10; 6.0.2 Requires-Python >=3.6, <3.10; 6.0.3 Requires-Python >=3.6, <3.10; 6.0.4 Requires-Python >=3.6, <3.10; 6.1.0 Requires-Python >=3.6, <3.10; 6.1.1 Requires-Python >=3.6, <3.10; 6.1.2 Requires-Python >=3.6, <3.10; 6.1.3 Requires-Python >=3.6, <3.10; 6.2.0 Requires-Python >=3.6, <3.11; 6.2.1 Requires-Python >=3.6, <3.11; 6.2.2 Requires-Python >=3.6, <3.11; 6.2.2.1 Requires-Python >=3.6, <3.11; 6.2.3 Requires-Python >=3.6, <3.11; 6.2.4 Requires-Python >=3.6, <3.11; 6.3.0 Requires-Python <3.11,>=3.6; 6.3.1 Requires-Python <3.11,>=3.6; 6.3.2 Requires-Python <3.11,>=3.6; 6.4.0 Requires-Python <3.11,>=3.6; 6.4.0.1 Requires-Python <3.12,>=3.7; 6.4.1 Requires-Python <3.12,>=3.7; 6.4.2 Requires-Python <3.12,>=3.7; 6.4.3 Requires-Python <3.12,>=3.7; 6.5.0 Requires-Python <3.12,>=3.7; 6.5.1 Requires-Python <3.12,>=3.7; 6.5.1.1 Requires-Python <3.12,>=3.7; 6.5.2 Requires-Python <3.12,>=3.7; 6.5.3 Requires-Python <3.12,>=3.7 ERROR: Could not find a version that satisfies the requirement PySide6==6.4.1 (from versions: 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.3.1, 6.7.0) ERROR: No matching distribution found for PySide6==6.4.
>>15069 Thank you for this report! Am I correct in saying you are choosing the '(a)dvanced' install choice, and then selecting the '(m)iddle qt' version? And are you on Python 3.12? I need to update my recommendation text there, since I think Python 3.12 (or maybe 3.13?) can't run that slightly older package. That version is also older since we moved up to 6.6.0 recently. I will fix this for next week, sorry for the trouble! Please run the setup_venv again and choose '(t)est' instead. That should work on your newer python. If it doesn't, try the '(w)rite your own', and then enter 6.6.0 explicitly, with 2.4.1 when it asks for the custom qtpy version.
>>15072 Yes, I'm on 3.12.3 Choosing (t)est worked and hydrus boots now. Thanks!
I forgot I had a small part of hydrus_files in a separate directory, and thought I had copied them there, so I imported the directory with "delete original files after successful import" checked. Fortunately I noticed Hydrus complaining about it while a snapshot was still there (maybe the message should be more prominent than the one about new downloads or a failure to delete a file because it is archived). Can Hydrus do something to prevent this?
Are there any hydrus companion alternatives for firefox? The old version that was still working for me started eating up 10+ gigs of RAM so I had to disable it.
>>15078 not him, but I'm having the same issue. >A nightly/developer or unbranded edition of Firefox is required The firefox build in my repo is branded, so this won't work.
I had a good week. I improved some quality of life, added an experimental new tagging tab, and have a simple tweet downloader working again. The release should be as normal tomorrow.
We've had working Twitter parsers for months in the Discord server...
>>15081 >Discord server
>>15058 I have hydrus ignore most things for idle and just have it consider 50% on a single core as not idle, but at the same time I'm not sure I've ever seen it consider something like importing a thread of webms, something that spikes the cpu to 100%, as not an idle state.
>>15080 I'm really looking forward to the new tweet downloader. I haven't saved images from twitter regularly since it broke.
>>15079 You can download the dev edition and delete the contents of the profile it makes, then copy the contents of your regular profile over to it. I just added hydrus companion with the method that anon posted but I'll have to wait a few hours to see if there's still memory issues.
https://www.youtube.com/watch?v=1glH_p16WB0 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v573/Hydrus.Network.573.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v573/Hydrus.Network.573.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v573/Hydrus.Network.573.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v573/Hydrus.Network.573.-.Linux.-.Executable.tar.zst I had a good week working on some quality of life and a tagging experiment. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html my experiment I've written a new tab, 'children', for the tag autocomplete dropdowns. It shows the tag children of anything in the list the autocomplete is managing. The idea here is you have 'evangelion' in your manage tags dialog but you have forgotten the names of the characters--there is now an easy-select list for you. Since this is a first draft, it is workflow-ugly. It works well when there are only one or two parent tags in the list, but any more than that (or something with hundreds of children like pokemon or azur lane) and it becomes spam. Please have a play with it, especially if you are an advanced user, and let me know what you think. I think I'll filter and sort it according to the top n most-counted tags, and there's probably some simple selection-preservation stuff to add, to better let you run through its list quick-hitting enter over and over (atm it resets the selection position on every change). other stuff With the help of a user, I've written a simple twitter tweet downloader. This isn't a full search, just something that can download single tweets so you can drag-and-drop tweet URLs onto the client or paste them into an 'urls downloader'. It can handle video and multi-media posts, and it fetches quote-tweet contents too. 
I've learned there is a different, more advanced twitter downloader written by another user--if you already use that one, my new tweet downloader will not be added today, since you already have the ability. I'll see if I can add some of the other downloader's tech to the defaults in the near future. The archive/delete and duplicate filters now yes/no confirm with you when you select to 'forget' your work. All the builds should now be compatible with very new (>=0.38.x) versions of mpv/libmpv. If you run from source, you might like to rebuild your venv this week. I continued to deconstruct the local booru (an old way of sharing files from the client) this week. It now no longer starts its server. If you used this, please note that this Client API project replaces it very well: https://github.com/floogulinc/hyshare next week I am pretty exhausted, I'm sorry to say, so I'll keep it simple and just do some more quality of life and little bug fixing work.
>>15085 So unfortunately it still starts using up a bunch of ram and slows firefox down after a few hours. Guess I'll have to leave it disabled.
>>15087 what I would do when I didn't have an 8 core pc and the processes mattered (my phenom 955 had a bug where, if windows went over 150 total processes, it would hard crash within a few hours) was to just disable the extension when it wasn't being actively used. granted, almost all my use of the extension was down to getting cookies; I use other extensions to bring urls to hydrus to parse out images.
so hdev, I haven't had a chance to really test out the new version of hydrus, I went from 566 to 572 and then to 573, but it feels like some of the crashing issues aren't problems anymore. however, the initial move to 572 saw hydrus crash for reasons I have no idea about. I was planning to just roll back, but decided on going to 573 to see if that fixed it. I don't know what the difference was, but this fixed the crashing from 572. once the repository is fully processed, I'm going to see if I can force crashes by switching tabs, and also see if I can kill hydrus from mpv loading. I would do it now, but I want that repository done and over with; it seems the newer versions also improved the processing performance there as well.
Have you considered adding some sort of "auto split" feature that takes cbz files and automatically exports the contents, then re-imports them with the metadata like tags and urls intact?
>>15076 Just to make sure I understand, you had moved some of the 'client_files' directory, your normal hydrus file storage, to a different location, and then attempted to import it into that same client with 'delete original files' on? I am sorry the program does not detect this better! I will make sure it highlights this and does not let you do the import. >>15081 Thanks, I did not know. I hope to integrate parts of the other one this week. >>15083 Interesting. That CPU test was always a little hacky, and I'd bet there are some odd calculation issues when it comes to virtual vs real cores and stuff. I think it basically just polls the current CPU usage every 90 seconds or something, so it isn't too clever. I think the default is 25% use, so maybe that would detect things more helpfully for you? Maybe I should just rewrite it to something smarter, too. >>15090 I am very glad if there are fewer crashes for you. This v573 had a new version of the python mpv library, although it wasn't a significant version change, mostly just adding support for mpv 0.38.x, but maaaaybe there was something in there that helped you. We also moved up to Python 3.11 in the built releases in the past few weeks, so I could see how that might change what crashes in a system that already wasn't happy with hydrus overall. Let me know how you get on in future, and if you still have trouble, and if we haven't talked about it before, you might want to look into running hydrus from source: https://hydrusnetwork.github.io/hydrus/running_from_source.html It is pretty easy to set up these days, and it reduces crashes in a number of tricky situations. >>15091 Yeah, that's the dream in the end. Also the reverse, of zipping a chapter of images into one contiguous blob while preserving useful metadata. I'd also like simple navigation without unzipping, so you can browse inside an archive of images like you would with any other CBZ reader program. 
Along with this, I want to add a bunch of 'virtual package of images' tech to the client so whether you have a bunch of images or an actual cbz, you will be able to treat it as a contiguous chapter in the client, as one virtual/real file, and also expand it to the internal single-image objects too. Users mostly shouldn't care about what the actual file format is--they just want to read a chapter and then tuck it away. Lots of thoughts on this, way too much work to do, but I see some ways forward.
Your Twitter downloader is only getting the resized images, not the original files
When forcing a subscription check, many subscriptions are labelled "dead". Do you think there could be an option to ignore them, or a button to dismiss the dead subscriptions?
>>15092 there is currently one issue that I noticed, and that's that whatever reddit parser I am using is throwing 403 errors since moving from 566 to 573, so I'm trying to find out if that's a me issue, a reddit issue, or if something broke. and yea, I was putting off trying to go to source till the repository was settled, though the need may not be there anymore; at the rate it's going, maybe 2 more days before I really test it.
>>15092 >>>15076 >Just to make sure I understand, you had moved some of the 'client_files' directory, your normal hydrus file storage, to a different location, and then attempted to import it into that same client with 'delete original files' on? I am sorry the program does not detect this better! I will make sure it highlights this and does not let you do the import. First, I added a new media files directory using Hydrus, and Hydrus moved only one subdirectory there. Irrelevant, but what made it confusing was that when the big main directory became read-only, I used overlayfs to put any changes onto the drive where the small subdirectory was, so they were side-by-side. Then I copied the big directory onto the different drive, synchronized it with the overlayfs, and mounted it at the original location of the main directory instead of the overlayfs. But I forgot that the small subdirectory, which sat in the same directory as the overlayfs upper (modifiable) filesystem, was still part of what hydrus was using. So I imported it, destructively. Also, how should I tell Hydrus that the files are now in a different directory, so I don't have to mount the directories here and there? If you are curious, the overlayfs documentation is in any of the "overlayfs.*" files in the linux-doc kernel documentation.
Is there any parser that adds the danbooru description/artist's commentary, like the e621 parser does?
Hello! I have a rather strange problem. So my database corrupted due to bad hdd sectors, but I restored it using the instructions in the db's folder and now everything works. But as I understand some data was lost. Is there a way in hydrus network to call up all videos and images without tags so that I can download them again?
>>15103 When you click the search field, choose "system:number of tags", and optionally "system:filetype"
(379.49 KB 1380x2062 hydrus_client_XU7Sg0rwug.png)

>>15095 ok I'm just going to leave this here as well, I asked this in the discord:
---------------------------
Ok, having some issues with reddit subreddit search. It appears all galleries are getting 403s. I can't figure out if this is a hydrus issue, a parser issue, a me issue, or a reddit issue, but it seems to coincide with me upgrading to 572/573 from 566, so I can't tell if something recently broke or has been broken for longer than now, but that should give a time frame (the image I submitted with this). The specific 3 urls highlighted:
https://www.reddit.com/gallery/1cguv1q
https://preview.redd.it/m0mmove3rmxc1.png?auto=webp&format=png&s=e335aace881af09464f275848a4c82663ab6d2c5&width=501
https://preview.redd.it/uls74we3rmxc1.png?auto=webp&format=png&s=f0765395491490029692bb3cd3e19b11067c8fa2&width=505
And if anyone wants to try this to see if it works on their end, I made a subscription with reddit subreddit search and used ClipStudio as the subreddit. A lot of the crap won't download, but I think those never worked; it's specifically the galleries I'm trying to figure out.
------------------------------
Looking at the user repository, none of the reddit parsers were touched in 2-3 years, and with the problem lining up with the update while the parsers still work, just not for galleries, I think something changed in hydrus itself to make them not work.
>>15104 Based on the results, there is no such data. Just some kind of miracle. Thanks for the advice. Now all that remains is to transfer the database and make a backup!
>>15106 Check the file service and tag service buttons
>>15086 >>15092 >>15093 Can confirm. The twitter downloader in v573 is not getting the original images, only the preview images. Otherwise it seems to work fine. Will this new Twitter downloader ever be able to bring back subscriptions?
(271.56 KB 960x731 1677763981808893.jpg)

>>14270 came here after a recommendation from /g/. I used Exiftool to tag my files. Your app looks very promising, Anon, I'll try it now and give feedback. I'm fed up with a lot of things going on 4leaf. Your software could be my entry to the rabbit hole.
>>15093 >>15109 Yeah the image urls need a `name=orig` query parameter added. These URL classes should handle that for you.
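For anyone curious what the fix amounts to, it is just a query-string tweak; a hedged stdlib sketch (the pbs.twimg.com URL shape here is the usual one, not lifted from hydrus's own URL Class code):

```python
# Sketch: force a twitter image URL to its full-size version by setting the
# `name` query parameter to 'orig' ('small'/'medium'/'large' are resizes).
# Hydrus's URL Classes do this normalisation internally; this is just the idea.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def force_orig(url: str) -> str:
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query))
    params['name'] = 'orig'  # overwrite any existing size hint
    return urlunsplit(parts._replace(query=urlencode(params)))

print(force_orig('https://pbs.twimg.com/media/abc123.jpg?format=jpg&name=large'))
```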
I seem to be getting a few of these errors where the thumbnail/image viewer render for specific random files breaks, and if I try posting the image somewhere like an imageboard it yells at me about corrupted metadata, but I can open the actual image itself well enough in my browser/image viewer. Any pointers on what might be going wrong, how I could fix it, or what precaution I should take regarding this problem? The traceback information seems to tell me this:

v569, win32, frozen
DamagedOrUnusualFileException
Could not load the image at "D:\Program Files\Hydrus\hydrus_files\fbe\beabc2eb0474b3f2174122ef6185b42f2c10c8ca7df71c6e2caa0197573f0407.png"--it was likely malformed!
File "hydrus\client\gui\widgets\ClientGUICommon.py", line 265, in EventButton
File "hydrus\client\gui\canvas\ClientGUICanvasHoverFrames.py", line 973, in _ShowFileEmbeddedMetadata
File "hydrus\client\gui\ClientGUIMediaActions.py", line 840, in ShowFileEmbeddedMetadata
File "hydrus\core\files\images\HydrusImageOpening.py", line 14, in RawOpenPILImage
I had a good week. I wrote a new database maintenance routine (which should fix the PTR '404' bug) and some small UI improvements and code cleanup. The release should be as normal tomorrow.
(49.89 KB 684x84 08-07:25:08.png)

(10.51 KB 897x23 08-07:25:38.png)

For some reason the e-hentai/exhentai downloader is borked after updating from 568 (I think) to 573. I even updated to the "new" one to try to fix it, no luck. Looking back at it, the older downloader also does some weird shit. I have no idea how this is supposed to work; anyone have tips for me to get this working again?
>>15115 I managed to fix it, but it took me like an hour and a half. It is really complicated for some reason. I remember that it involves matching the initial url from the post page, then turning it into a fake url that matches a made up url class for the first parser (that's wrong and won't work) then passing it to a second parser that turns the incorrect url into the correct one with a complex regex find-and-replace, and finally parsing that url. I don't know why it broke, but I also don't know why it's this complicated in the first place. I've written a few parsers before, and I've never seen one like this.
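The "turn the wrong url into the correct one with a regex find-and-replace" step is the only genuinely code-like part of that chain; a toy sketch of the idea (the URL shapes below are entirely made up for illustration, not the real e-hentai/exhentai ones):

```python
# Toy sketch of a regex URL rewrite stage like the one described above:
# match a hypothetical thumbnail URL and rewrite it into a hypothetical
# full-image URL with capture groups. Real parser chains do this inside
# hydrus's string converter UI rather than in python.
import re

def rewrite(thumb_url: str) -> str:
    # hypothetical: .../t/ab/cd/abcdef-250.jpg -> .../i/ab/cd/abcdef.jpg
    return re.sub(r'/t/(\w\w)/(\w\w)/(\w+)-\d+\.jpg$',
                  r'/i/\1/\2/\3.jpg', thumb_url)
```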
Any idea of why hydownloader skips 20 files on everything?
>>15116 If you managed to fix it, please post the parser.
>>15116 >took me like an hour and a half.
You're a lot faster than I am at parsers!
>>15092 Ok, repository done. After the repository finished I was left with around 600 open tabs from subscriptions I was opening in the meantime, so I have been trying to consolidate them, and hydrus is faster... until something decided to process without any kind of window or indication it was doing anything, causing everything to be a nightmarish crawl; I would get about 10-15 tabs dealt with in between. I have no idea what it was, but it seemed to go away when I set the cpu idle to 25%, so whatever was going on stopped. Now with that out of the way, I went on to consolidating tabs, and I hit the issue where it just crashes/hangs. I let it sit for about half an hour; the client was eating 10gb of ram, it then halved the ram use, and then halved it again. There is nothing in the log that tells me anything about what's going on, so I may end up trying to build from source soon and see if that stops anything. Going to reload the client now and see if mpv's problems are solved. I should note that when I closed hydrus, I gained 10gb of ram back instead of just 2.25gb; even though task manager said it gave the ram up, it apparently didn't. Now, on the mpv side of things, I just went through a 202 file page 10 times, so essentially opening videos 2020 times. USUALLY this results in a failure somewhere around pass 5-7 (something you did a while back helped and it would actually put an error in the logs), but hydrus is still going strong, so the move to the new python either delayed when the failure happens, or eliminated it. So 1 of the 2 crashes I can replicate is dealt with, and the one that's still in the program is better than it was before, though I have no idea why it got introduced in the first place.
>>15120 right after I posted that, I clicked the tab to deal with tabs and it hit the crash condition, so I'll revise "it's better": it may just be as bad as before, I simply had a few good runs.
https://www.youtube.com/watch?v=XBETVhHpcPk

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v574/Hydrus.Network.574.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v574/Hydrus.Network.574.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v574/Hydrus.Network.574.-.macOS.-.App.dmg
linux
tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v574/Hydrus.Network.574.-.Linux.-.Executable.tar.zst

Note: If you are updating from v570 or earlier and you use the Windows or Linux .zip or .tar.zst 'Extract' releases, you have to do a clean install one time to get v571 or later! (https://github.com/hydrusnetwork/hydrus/releases/tag/v571, https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs). If you are a Windows installer/macOS App/source user, you do not need to do a clean install to get over the v570->v571 bump; just update as normal.

I had a good week. The database has some new repair tech, and the new default twitter downloader now gets full-size images. The update this week may take a minute due to some database maintenance.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

For some time, some users who sync with the PTR have had sporadic '404' errors that interrupt downloading/processing. We never pinned down exactly what was causing or fixing this (a bunch of different maintenance routines seemed to fix it eventually) until this past week. Long story short is there was a SQLite issue that was crossing a wire on some program crashes, and a bunch of new repair code now detects and fixes this situation. Several rare 'this file appears to be this file' issues are fixed as a result. Your 'local hashes cache' will do some work on update, but the new code should run pretty fast, so even if you have millions of files, it shouldn't take too long to finish.
If you have had PTR syncing problems, please try unpausing after updating this week--while it may fail one more time, the new repair code should be able to fix it now. Let me know how you get on!

I screwed up the new default twitter downloader last week--it was not getting the full-size versions of images! This is fixed today, so if you spammed a bunch of tweet URLs last week, please do this to fix it: do a search for 'imported within the last 7 days/has a twitter tweet url/height=1200px' and then copy/paste the results' tweet URLs into a new urls downloader page. The full-size images should download. Once done, assuming the file count is the same on both pages, go back to your first search page and delete the 1200px tall files. Then repeat for 'system:width=1200px'! Sorry for the trouble!

If some of your files get an odd colour tint, is it always on files under 'system:file properties->has icc profile'? If so, try unchecking the new 'apply image ICC Profile colours adjustments' checkbox under options->media. Let me know what you see.

next week

I blinked and it is May already. I think I have four more releases before my vacation week, and it feels like I haven't done anything, so I'd like to get a larger job done. I want to try the first framework of an automatic duplicate pair resolver.
>I want to try the first framework of an automatic duplicate pair resolver. THANK YOU!!
(234.17 KB 1298x559 1493729549_anarth.gif)

>>15122 Hi devanon. I updated from source straight from v568 to v574 on Linux, after running "setup_venv.sh" of course. It went flawless. Thank you.
On May 8 (05-08), hydrus-573 started hanging, and then I noticed that it says the database is malformed. client.mappings.db is 55GB.

PRAGMA integrity_check;
* in database main *
Multiple uses for byte 2697 of page 10835171
Runtime error: database disk image is malformed (11)

A backup from 05-03 (likely saved by hydrus-572) has the same error, but Hydrus managed to work for a few days. I have backups from 03-16, 03-29, 04-11, 04-23, and 04-27. Cloning it creates a 24GB file. I am not sure why it says "Error: database or disk is full"; the file system is XFS, and I have deleted many small files, including all of the other copies of hydrus, so there is enough apparent free space.

SQLite version 3.45.1 2024-01-30 16:01:20
Enter ".help" for usage hints.
sqlite> .clone client.mappings.db.clone-20240510T0121.db
current_mappings_9... done
deleted_mappings_9... done
pending_mappings_9... done
petitioned_mappings_9... done
current_mappings_10... done
deleted_mappings_10... done
pending_mappings_10... done
petitioned_mappings_10... done
current_mappings_15... Warning: cannot step "current_mappings_15" backwards
done
deleted_mappings_15... done
pending_mappings_15... done
petitioned_mappings_15... done
sqlite_stat1... Error: object name reserved for internal use: sqlite_stat1
SQL: [CREATE TABLE sqlite_stat1(tbl,idx,stat)]
Error 1: no such table: sqlite_stat1 on [INSERT OR IGNORE INTO "sqlite_stat1" VALUES(?,?,?);]
done
current_mappings_17... done
deleted_mappings_17... done
pending_mappings_17... done
petitioned_mappings_17... done
current_mappings_31... done
deleted_mappings_31... done
pending_mappings_31... done
petitioned_mappings_31... done
current_mappings_32... done
deleted_mappings_32... done
pending_mappings_32... done
petitioned_mappings_32... done
current_mappings_33... done
deleted_mappings_33... done
pending_mappings_33... done
petitioned_mappings_33... done
current_mappings_34... done
deleted_mappings_34... done
pending_mappings_34... done
petitioned_mappings_34... done
current_mappings_38... done
deleted_mappings_38... done
pending_mappings_38... done
petitioned_mappings_38... done
current_mappings_39... done
deleted_mappings_39... done
pending_mappings_39... done
petitioned_mappings_39... done
current_mappings_40... done
deleted_mappings_40... done
pending_mappings_40... done
petitioned_mappings_40... done
current_mappings_43... done
deleted_mappings_43... done
pending_mappings_43... done
petitioned_mappings_43... done
current_mappings_50... done
deleted_mappings_50... done
pending_mappings_50... done
petitioned_mappings_50... done
current_mappings_9_hash_id_tag_id_index... done
deleted_mappings_9_hash_id_tag_id_index... done
pending_mappings_9_hash_id_tag_id_index... done
petitioned_mappings_9_hash_id_tag_id_index... done
current_mappings_10_hash_id_tag_id_index... done
deleted_mappings_10_hash_id_tag_id_index... done
pending_mappings_10_hash_id_tag_id_index... done
petitioned_mappings_10_hash_id_tag_id_index... done
current_mappings_15_hash_id_tag_id_index... Error: database or disk is full
SQL: [CREATE UNIQUE INDEX current_mappings_15_hash_id_tag_id_index ON current_mappings_15 (hash_id, tag_id)]
done
deleted_mappings_15_hash_id_tag_id_index... done
pending_mappings_15_hash_id_tag_id_index... done
petitioned_mappings_15_hash_id_tag_id_index... done
current_mappings_17_hash_id_tag_id_index... done
deleted_mappings_17_hash_id_tag_id_index... done
pending_mappings_17_hash_id_tag_id_index... done
petitioned_mappings_17_hash_id_tag_id_index... done
current_mappings_31_hash_id_tag_id_index... done
deleted_mappings_31_hash_id_tag_id_index... done
pending_mappings_31_hash_id_tag_id_index...
done
petitioned_mappings_31_hash_id_tag_id_index... done
current_mappings_32_hash_id_tag_id_index... done
deleted_mappings_32_hash_id_tag_id_index... done
pending_mappings_32_hash_id_tag_id_index... done
petitioned_mappings_32_hash_id_tag_id_index... done
current_mappings_33_hash_id_tag_id_index... done
deleted_mappings_33_hash_id_tag_id_index... done
pending_mappings_33_hash_id_tag_id_index... done
petitioned_mappings_33_hash_id_tag_id_index... done
current_mappings_34_hash_id_tag_id_index... done
deleted_mappings_34_hash_id_tag_id_index... done
pending_mappings_34_hash_id_tag_id_index... done
petitioned_mappings_34_hash_id_tag_id_index... done
current_mappings_38_hash_id_tag_id_index... done
deleted_mappings_38_hash_id_tag_id_index... done
pending_mappings_38_hash_id_tag_id_index... done
petitioned_mappings_38_hash_id_tag_id_index... done
current_mappings_39_hash_id_tag_id_index... done
deleted_mappings_39_hash_id_tag_id_index... done
pending_mappings_39_hash_id_tag_id_index... done
petitioned_mappings_39_hash_id_tag_id_index... done
current_mappings_40_hash_id_tag_id_index... done
deleted_mappings_40_hash_id_tag_id_index... done
pending_mappings_40_hash_id_tag_id_index... done
petitioned_mappings_40_hash_id_tag_id_index... done
current_mappings_43_hash_id_tag_id_index... done
deleted_mappings_43_hash_id_tag_id_index... done
pending_mappings_43_hash_id_tag_id_index... done
petitioned_mappings_43_hash_id_tag_id_index... done
current_mappings_50_hash_id_tag_id_index... done
deleted_mappings_50_hash_id_tag_id_index... done
pending_mappings_50_hash_id_tag_id_index... done
petitioned_mappings_50_hash_id_tag_id_index... done
sqlite> .

Integrity check of the clone passes, but Hydrus does not start and leaves a broken db.

SQLite version 3.45.1 2024-01-30 16:01:20
Enter ".help" for usage hints.
sqlite> PRAGMA quick_check;
Runtime error: database disk image is malformed (11)
sqlite> PRAGMA integrity_check;
Runtime error: database disk image is malformed (11)
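If you are doing this check a lot, the same PRAGMA can be run from Python's stdlib sqlite3 module instead of the sqlite3 shell; on a healthy database it returns the single row 'ok'. A minimal sketch:

```python
# Run SQLite's integrity check on a db file from Python.
# Returns 'ok' for a healthy database, or the first problem description
# (and raises sqlite3.DatabaseError on badly malformed files).
import sqlite3

def integrity_check(path: str) -> str:
    con = sqlite3.connect(path)
    try:
        return con.execute('PRAGMA integrity_check;').fetchone()[0]
    finally:
        con.close()
```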
This annoys me. If the preferred usage is "medium:english text", why doesn't it give me that as the top result? Instead I have to add the wrong tag "meta:english text", which then gets turned into "medium:english text" via siblings. Seems backwards. This behavior happens all the time with character tags, for example. Can't tag suggestions be a little smarter?
>>15129 While that annoys me too, the sibling only shows that it is preferred by the PTR moderators, and we may disagree on some other tag.
I'm trying to make a note parser, but the content is given as raw html. Is there some way to "clean" it and make it plain text with newlines and all that, so I can put that in as the parsed note? the raw html is basically unreadable, and that's not how it actually appears on the page. I was going to try to see if a subsidiary page parser would work, but the Hydrus docs specifically say not to use subsidiary page parsers, so I didn't.
>>15122 I couldn't find any info on this in the docs. If I have a ton of files on a btrfs file system, and the data directory for hydrus is on the same btrfs file system, will those files be copied with cp --reflink when I import them? I also wanted to ask about using hydrus over Tor. Is this workable or supported at all? Will hydrus retry failures gracefully? Is there some way I can automate the cloudflare captchas?
>>15132 The docs just say that subsidiary page parsers are hard to use for new users. They're not that complicated, it's just recursion. They can be used to swap between parsing json and html. Is that what you want? You're receiving a json response which includes raw html? I've used subsidiary page parsers to deal with that.
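If you ever want to do the html-to-plain-text cleanup outside hydrus, a rough stdlib sketch of the idea (this is NOT hydrus's actual 'string' conversion, and the set of block tags that get turned into newlines is just a guess at what matters for notes):

```python
# Flatten raw html into readable plain text: keep visible text, turn a few
# common block-level tags into newlines, drop everything else.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    BLOCK = {'p', 'br', 'div', 'li'}  # assumed set of "newline" tags

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK:
            self.parts.append('\n')

    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(raw: str) -> str:
    p = TextExtractor()
    p.feed(raw)
    return ''.join(p.parts).strip()
```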
>>15133 >I couldn't find any info on this in the docs. If I have a a ton of files on a btrfs file system and the data directory for hydrus on the same btrfs file system, when I import those files will they be copied with cp=reflink? Unfortunately, Hydrus does nothing about that. Make sure you NEVER start hydrus with the .db files on btrfs, especially not one with compression! >I also wanted to ask about using hydrus over Tor. Is this workable or supported at all? You can set a proxy globally, but not for different sites. >Will hydrus retry failures gracefully? I don't think it can do so automatically. I disable subscriptions and watchers in a menu before enabling Tor.
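For the reflink question specifically: since hydrus doesn't do it for you, you can still reflink-copy files into place yourself before a 'do not delete originals' import. A small demo, assuming GNU cp on Linux (`--reflink=auto` quietly falls back to a normal copy on filesystems without reflink support, so it is safe anywhere):

```shell
# Reflink-copy a file: on btrfs/XFS the clone shares extents with the
# original instead of duplicating data; elsewhere it becomes a plain copy.
tmp=$(mktemp -d)
echo 'sample data' > "$tmp/original"
cp --reflink=auto "$tmp/original" "$tmp/clone"
cmp -s "$tmp/original" "$tmp/clone" && echo identical
rm -rf "$tmp"
```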
>>15136 Where do you disable them, and is it different from pausing them. I've had some internet issues lately and would like to just turn them off until it improves without losing track of which ones I had paused before for other reasons.
>>15137 network -> pause
>>15138 Danke
>>14858 >The idea of perhaps one day using a headless browser as a web driver is not out of the question. I would very much like to see an attempt at solving captchas automatically. IMO a full-fat browser in a docker container would be best.
>>15093 >>15111 >>15109 Sorry for the trouble, and thanks for the fix! Check the v574 release post >>15122 for a routine to fix bad posts, just change '7 days' to '14 days' or whatever you need.

>>15094 Sure, I will add a quick choice dialog to the 'check now' button.

>>15095 >>15105 Sorry for the trouble. One thing I know I have screwed up in the recent URL rewrite is that the parameter alphabetisation is sometimes not working correctly. I don't know if reddit cares about that, but if it does, that could give a 403. You could double-check, if you know how, by looking at the related URL Classes here and seeing if they have 'alphabetise GET parameters when normalising' on. I am hoping to have this fixed this week. Let me know how things go here. If the discord guys figure out something I need to do, please make sure I don't miss it.

>>15099 Sorry again that the program allowed this. It should no longer let you import your own file storage.
>Also, how should I tell Hydrus that the files are now in a different directory, so I don't have to mount the directories here and there?
I'm not familiar with your exact situation, obviously, but a good trick if you have anything super complicated is just to:
- shut the program down
- move your stuff where you want it
- boot the program
The program will throw you a 'I couldn't find your files' dialog that lets you repair things by telling it the new locations, and you skip the need to do the internal migration. It isn't suitable for normal users, but it is fine for advanced users. Make a backup before you do it, just in case something goes wrong.

>>15102 Since danbooru is in the defaults, I'll see if I can figure this out myself.

>>15106 >>15103 If your problem was with client.master.db, then once you have a backup secured here, you might like to hit database->check and repair->repopulate truncated mappings tables. It might be that your cache (in client.caches.db) survived intact, but the actual storage tables took a cut in whatever clone/recovery you did.
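The 'alphabetise GET parameters when normalising' idea mentioned above is easy to picture in stdlib terms: sort a URL's query parameters so that equivalent URLs compare equal. A hedged sketch only (hydrus's real normalisation does more, e.g. discarding parameters it is told to ignore):

```python
# Normalise a URL by sorting its GET parameters alphabetically, so
# '?b=2&a=1' and '?a=1&b=2' produce the same string for dedup/comparison.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalise(url: str) -> str:
    parts = urlsplit(url)
    params = sorted(parse_qsl(parts.query))  # sort by (key, value)
    return urlunsplit(parts._replace(query=urlencode(params)))
```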
(7.22 KB 512x131 twitter_2023-12-19_2.png)

>>15109 >Will this new Twitter downloader ever be able to bring back subscriptions?
There is a hacky search still open on the twitter parser a user added to the hydrus discord, but you have to do cookie magic I think and it only gives like the latest 20 tweets, so you have to set up a subscription with a rapid search timer. Pic related if you want to try, although I don't know the details. I will not add this to the defaults because of the caveats--it just isn't appropriate for normal users. Also, I learned you need $5,000/month to get proper twitter search API access these days jej.

>>15110 Great, please work through the getting started guide and, once you feel confident about things, let me know what was confusing. Feedback from new users is always helpful.

>>15112 I actually changed how hydrus loads truncated files recently, so if you update to v573 or so at least, I think many of these will be fixed. If any are still broken, please send them to me so I can try on my end, either by posting them here or posting URLs or emailing me or whatever. Since you have several of these, please collect some together into a page, and when you update, please test them each in turn. I have notes that the 'load truncated images' mode was originally turned off because it could cause a complete hang, so since you have several examples, please let me know if they load without massive issues in the newer version.

>>15120 >>15121 Ok, thanks for the feedback. I will keep working on improving stability, and there are two specific things that may help you soon: 1) I plan to reduce all file loading jank in the media viewer in a significant way in the next month or two. I'll be adding a better 'loading' state, which you will see most in mpv, where if a file can't be loaded yet, the media viewer will show a pleasant 'loading' indicator instead of janking out with an UI hang. 2) I will put out a 'future build' this week or next, and the Windows version will have a much newer mpv dll. Might help your issues, so if you are Windows, please test it and let me know.

>>15125 Hell yeah!
>>15128 Damn, what a shame. "Error: database or disk is full" will almost always refer to the disk. It may refer to the disk the db file is on, but it can also refer to your temporary path's disk, usually your system disk. Some versions of Linux have funky ramdisk temp folders that have like a 768MB limit, which can cause this error. Do you think you might have something similar, or, more likely, just don't have ~24GB+ available on your system disk? I think you should probably just roll back to your backup, since it isn't so long ago. Check 'help my media files are broke.txt' in the db dir for info on how to resync it to your current client_files file storage.

>>15129 >>15131 Yeah it is dumb. It mostly works on count atm. If you exactly type one of those results, I think it'll snap to suggesting the sibling in the first or second result, but it doesn't do that sort of re-ordering until it is more confident of what you are typing. I don't like how spammy this stuff gets with siblings and parents, and I am still thinking about how exactly to improve the workflow. I might just add a setting somewhere that is like 'don't suggest "worse" siblings', idk.

>>15132 >>15134 In the html formula panel, does switching to 'string' for the content to fetch, pic related, help? If you grab the kind of 'paragraph' of what you want, and then hit 'string', my code is supposed to do its best at grabbing the visible text content. I think it does vaguely smart newlines too, but let me know if and how it fails and I'll have a think about how I can fix it. As >>15134 says, if you have html inside JSON, then yeah, try a subsidiary page parser. Grab the html content for the subsidiary parser, then switch that guy to an html parser and fetch the 'string', and, fingers crossed, it'll all work. PRO TIP: Make sure you have some good test data loaded in the dialog before you edit the subsidiary page parser.

>>15136 >>15140 Thanks, I will update the help to talk about this.
>>15141 Yeah, I like how jdownloader does that. We'll see how things go, since this sort of tech will be very complicated, but I'm more and more open to having hydrus ask other programs to do work while also making itself available to other programs via the Client API.
>>15143 >>15112 Unfortunately I haven't kept track of all the images that have displayed the issue in the past to be able to check them all, but one in specific that seemed to develop the problem over the last few days has seemingly been fixed, and I have also been able to post it in places without issue. If I happen to notice the problem persisting somehow even past this point, I'll be sure to mention it again and also bring the affected files. Thank you!
>>15142 I'll re-run the recent 403s on the subs and report back when a potential fix is implemented. And sadly I'm far too dumb to really find that out on my own.
>>15143 I'll hold off on trying to build from source for a while and see if anything you do helps. Realistically, the crashes are more just annoying than something I can't deal with... granted, when mpv takes the client out it's a bit more than annoying. I went through the log and I found this line:
v574, 2024-05-09 08:34:25: [MPV error] main: Too many events queued.
Now, just to note, I haven't had mpv kill the client since the client moved to the new python version, but that error is the one that would take the client out before; it didn't matter if I went through the files as fast as possible, or if I watched each through before going to the next. Saying that, before, the client would just die, then you did something and the client in task manager would suspend, dump all its ram, and then close. This is also when the error would come up. It's possible that the earlier error was fixed and a second error came in, or it's the same error just hitting later; I have no clue on my end.
>>15144 >>>15128 >Damn, what a shame. "Error: database or disk is full" will almost always refer to the disk. It may refer to the disk the db file is on, but it can also refer to your temporary path's disk, usually your system disk. Some versions of Linux have funky ramdisk temp folders that have like a 768MB limit, which can cause this error. Do you think you might have something similar, or, more likely, just don't have ~24GB+ available on your system disk? According to https://www.sqlite.org/tempfiles.html, sqlite3 would probably use /var/tmp or /tmp. My /var/tmp is on /, and I have noticed its free space fluctuate between around the current 17GB and 1GB free space in the recent days. I will try forcing sqlite3 to use the same filesystem then, thanks for the hint.
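Before trying a big clone/VACUUM it can be worth eyeballing how much room the temp location actually has, since SQLite can need roughly the db's size free there. A small sketch (SQLITE_TMPDIR is the env var SQLite consults first on unix, per the tempfiles doc linked above; the GB threshold is just an example):

```python
# Report free space in the directory SQLite is likely to use for temp files
# (SQLITE_TMPDIR if set, otherwise the system temp dir), so you can tell
# whether "database or disk is full" might really mean "temp dir is full".
import os
import shutil
import tempfile

def free_gb(path: str) -> float:
    return shutil.disk_usage(path).free / 1024**3

temp_dir = os.environ.get('SQLITE_TMPDIR') or tempfile.gettempdir()
print(f'temp dir {temp_dir}: {free_gb(temp_dir):.1f} GB free')
```

Setting SQLITE_TMPDIR to a directory on a roomy drive before launching the sqlite3 shell is the simple fix the post above is describing.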
>>15144 >I don't like how spammy this stuff gets with siblings and parents, I am still thinking about how to exactly improve the workflow. I might just add a setting somewhere that is like 'don't suggest "worse" siblings', idk. I guess I'd want the PTR siblings to be preferred on the PTR service, my own siblings on my tag services, and not care on the downloader services.
>>15144 >>>15136 >>>15140 >Thanks, I will update the help to talk about this. I thought it was already there, and I had read it, as well as some advice elsewhere about disabling compression, but I disabled compression in fstab only, forgetting that I was mounting the file system with a script that also enabled it. So the file system still broke. XFS survived a few months so far, but it doesn't have fast snapshot features.
>>15136 >>15140 >Make sure you NEVER start hydrus with the .db files on btrfs, especially not one with compression!
Okay, I'm kinda nervous now. I've been using Hydrus on Fedora for over a year, and my understanding is that Fedora uses btrfs with compression as its filesystem. Does that mean my db is secretly corrupted now or something? Everything looks fine, Hydrus hasn't mentioned any errors, and my filesystem doesn't seem broken. What am I supposed to do to prevent a disaster?
>>15136 >>15140 >>15144 I've been running on btrfs with compression for a while and haven't had any issues. I know DBs on COW filesystems don't have great performance, but this is a local application with one user, not some enterprise DB with 100k transactions a minute. How exactly did your fs break? Apparently I am not alone >>15151
>>15140 Do you have a source for that? None of the information I've seen mentions databases and btrfs interacting poorly (beyond CoW being a poor fit for databases since they change often), and I've been running hydrus for about a year and a half on a btrfs filesystem with zstd:14 compression with no problems.
>>15152 to follow up on this compsize reports I'm saving about 60gb from compression.
>>15151 >>15152 >>15153 Okay, I can't say it's only because I put hydrus on it, but btrfs has had a bad reputation for years, and I ignored small issues like some damaged files in one directory. Unexpected computer shutdowns are what former btrfs users say corrupt it, and hydrus creates a lot of activity that can co-occur with them. I created that btrfs on an SSD in 2018 with Debian. https://wiki.debian.org/Btrfs recommends not enabling compression. https://hydrusnetwork.github.io/hydrus/database_migration.html#launch_parameter says "In the best case (BTRFS), the database can suddenly get extremely slow when it hits a certain size; in the worst (NTFS), a >50GB database will encounter I/O errors and receive sporadic corruption!" I don't remember what kinds of compression I used; I think I chose the fastest one. My computer is buggy, sometimes CPU errors and unexpected reboots happen, and recently a bit flip made another btrfs read-only. The db damage discussed above may also have been caused by that. I had several other programs using sqlite on the file system. The non-hydrus dbs were probably not bigger than 5GB each, but one or two of them (RSS Guard) were active half of the time. I had been using hydrus with the PTR (tens of GBs) for a week or two in October, and I guess I may have moved it to that btrfs, or it was just because I started three active dbs at once (I suppose I mean hydrus and two RSS Guards) when there was maybe 8 GB free. An unexpected restart may have happened that day. After that three-db activity, the partition started showing "no free space" and files were becoming unreadable (and maybe invisible) until the next remount.
>>15151 >What am I supposed to do to prevent a disaster? I changed that SSD to XFS, and I keep only Hydrus db and thumbnails, one RSS Guard, Anki (that I backup all the time anyway), and a swap file on it. Copying with reflink is slow, although that could be simply because Hydrus' files are uniquely large (it's even slower on an HDD). I don't like it and these posts here could make me consider changing the SSD back to btrfs, if it wasn't for that bit flip.
>>15144 >I think you should probably just roll back to your backup, since it isn't so far long ago. Check 'help my media files are broke.txt' in the db dir for info on how to resync it to your current client_files file storage. 233 orphan files were moved. After "repopulate truncated mappings tables" (Rows recovered: 3,348,756), only two of them have URLs and none have previous tags. Some of the files are from March 9, the day after the backup. But I did many tag changes in the past week, and the March 3 db also had the damage. Should I try placing some older client.caches.db files into the db dir temporarily and repopulating?
>>15164 After exiting with some shutdown maintenance, client.mappings.db is 160 MB. PTR has 31,167 parent pairs. How to find out what's missing? In "In client.caches.db, there is a cache of your tags for the files currently in the client.", "the files currently in the client" is a little unclear to me. Does that mean all the files in hydrus_files/ (and not just the files in the recently open tabs)? So if I don't want to use the PTR anymore, I could just pause its updates and happily keep using the fast and safe 160MB client.mappings.db instead of a 55GB one?
>>15155 >>15159 >>15161 So you're using faulty hardware, with unstable power, and ran out of space. Your documentation source for this says NTFS, not BTRFS, might cause corruption with large databases. I have bad news for you: any sqlite database is going to get corrupted under these conditions and your current one is probably damaged right now but you just don't know about it because XFS has no way to detect corruption. Does your computer even pass memtest? This is 100% a skill issue on your part and going around saying using btrfs / compression can "ruin the filesystem" is just retarded FUD. P.S. If this is also you (and based on how many times these were deleted and reposted, I think it is): >>15128 >>15164 >>15165 >>15166 Then congratulations! You're running into the same problems as before which are due to your failing hardware and inability to manage disk space. They have nothing to do with either BTRFS or XFS except that with XFS the problems don't become apparent right away.
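Side note for anyone following along: sqlite can self-report page-level corruption regardless of the filesystem, via its built-in integrity check. A minimal sketch (the helper name is my own invention; run it against a copy of your client*.db files while the client is closed):

```python
import sqlite3

# Run sqlite's built-in corruption scan on a database file.
# Point this at a COPY of client.db etc. while the client is closed.
def integrity_check(db_path):
    con = sqlite3.connect(db_path)
    try:
        # "PRAGMA integrity_check" returns a single row ("ok",) when the
        # db is healthy, or a list of error descriptions when pages or
        # indices are damaged.
        rows = con.execute('PRAGMA integrity_check;').fetchall()
    finally:
        con.close()
    return [r[0] for r in rows]

# A healthy database reports exactly ["ok"].
print(integrity_check(':memory:'))
```

`PRAGMA quick_check` is a faster variant if the full scan takes too long on a multi-GB database.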
>>15167 >Does your computer even pass memtest? This Anon is right: there is a bad bit in one of my RAM modules now. It wasn't the hardware issue I was talking about, as memtest passed last year, but it could be related. Please ignore what I wrote about using Hydrus on btrfs in >>15136 and >>15140. >and inability to manage disk space That shouldn't cause damage like that, and the /var/tmp issue is sqlite3's fault for using a temporary directory few others use for files that large without a warning.
>>15159 > or it was just because I started three active dbs at once (I suppose I meant hydrus and two RSS Guards) when there was maybe 8 GB free. An unexpected restart may have happened that day. >After that three-db activity, the partition started showing "no free space" and files were becoming unreadable (and maybe invisible) from use until the next remount. >>15168 > That shouldn't cause damage like that, and the /var/tmp issue is sqlite3's fault for using a temporary directory few others use for files that large without a warning. My brother in Christ, I do not know how I can make it clear to you but *everything* is wrong with this. May god have mercy on your databases. I SKILL S U E
>>15169 >I SKILL SUE What did he mean by this?
When I export files by dragging and dropping, the filename usually comes out all jumbled - it's the file hash IIRC, and it's kinda ugly (and it stands out when posting on imageboards). I'm reading the manual trying to figure out a way to auto-add filenames to my files by parsing the last part of the URL and putting that in a custom namespaced tag "filename:". Do you guys have any pointers? I usually download files either by copying and pasting direct image URLs from 4ch/etc, twitter post URLs, and booru URLs/search watchers. I'll need a custom config for each of these domains, is that correct?
(141.97 KB 477x506 X61uhFi.png)

>>15171 >I SKILL SUE
>>15175 >that image embarrassing.
Hello Hydev I have some images that aren't thumbnailing properly for whatever reason. Funnily enough it's by the same guy who made this >>14873 borked image. Attached are the images which are mildly nsfw. As an aside, is there a way to tell which hash a url corresponds to? I was going to post links to the images but I have a bunch of URLs attached and I don't know which matches the hash. I think that might be useful as a feature but it might not be useful at all so IDK.
>>15172 file > options > gui you can set a pattern for filenames. if you want to create a custom "filename:" namespace for all your files that changes based on their source, that sounds like a job for the client API.
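To sketch the Client API route: the only real logic is turning a known source URL into a filename: tag; pushing it into hydrus would then go through the API's add-tags call. The function below is hypothetical glue, not anything hydrus ships:

```python
from urllib.parse import urlsplit, unquote

# Turn a source URL into a "filename:" tag by taking the last path
# segment and stripping the extension.
def filename_tag(url):
    name = unquote(urlsplit(url).path).rsplit('/', 1)[-1]
    stem = name.rsplit('.', 1)[0] if '.' in name else name
    return 'filename:' + stem

# e.g. filename_tag('https://i.4cdn.org/g/1715690000123.png')
#   -> 'filename:1715690000123'
# You would then look up each file's known urls and POST the tag via the
# Client API's add tags endpoint (see /add_tags/add_tags in the docs).
```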
The danbooru downloader doesn't use the entire datestring. It only gets the date and ignores the time even though it's available. Here is a content parser that gets the entire date and time. [30, 7, ["post time", 16, [27, 7, [[26, 3, [[2, [62, 3, [0, null, {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]], [2, [62, 3, [0, "time", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "datetime", [84, 1, [26, 3, [[2, [55, 1, [[[14, null]], "2018-01-09T15:48-05:00"]]]]]]]], 0]]
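For reference, the string that content parser matches against ("2018-01-09T15:48-05:00") is plain ISO 8601 with a UTC offset, so the decode step amounts to something like this (illustration only, not hydrus's actual code):

```python
from datetime import datetime, timezone

# Danbooru's <time> element carries an ISO 8601 datetime with a UTC
# offset; fromisoformat understands this form directly (Python 3.7+).
dt = datetime.fromisoformat('2018-01-09T15:48-05:00')

unix = dt.timestamp()               # absolute post time as a unix timestamp
utc = dt.astimezone(timezone.utc)   # 15:48 at -05:00 is 20:48 UTC
```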
(592.00 B 203x22 2024-05-14_110930.png)

(89.68 KB 2000x2037 stars-png-612.png)


>>15143 >>15110 Alright, just tried it quickly yesterday and today, first impressions : flaws: >no shortcut to manage tags, reachable in two clicks. It's the main use you'll have, no? Put a shortcut key. Ctrl + T ("t" for tag). >when managing the tags the window appears on the preview thumbnail on the left and hides it. You usually tag what you see. Make it appear on the thumbnails on the right, not the big thumbnail on the left which we want to see to name all the tags. We can't select the thumbnails on the right anyway once the window is open. >Ratings only allow you to make it favorite or not. A 5 stars notation would be way better. Sometimes you like an image and would like to give it 4 stars, but you don't consider it a favorite for example. I suggest 5 stars you can click on, like the Windows system for your music files. You click on the star, the stars on the left become yellow. A warm yellow orange like in picrel. Please allow the fractions, like 1.5 star, 2.5, 3.5 etc. Clicking is easier than entering numbers I think. >I have 75k images and when importing them, Hydrus creates a temporary file that goes on the C disk, but you don't usually have as much space as on your D disk. I have 50gb free on it and Hydrus doesn't use it... The result is that the import is blocked at 25k files (1/3 or so) and now my C disk is full and I can't download anything anymore... >Also what are all those fb folders in C:\Users\xxx\Hydrus\client_files\fb0 ? There is no other way than copying all the files? It means you have to double the weight of all your picture files in order to tag them with Hydrus? It's a huge problem, dude! And even if it's the only way you found then at least we should have the choice of the place to put it on the large D disk. Are they temporary? I hope so. If not it's catastrophic. >The UI is overcomplicated overall. 
You have to click many times, like go into one of the many boxes, type the tag > right-click > and then search in different menus for "opens a new page" in order to see the pictures that contain that tag. No way to directly see the favorites. We should have a star we click on and bam the favorites appear. We should have a permanent box where we can type a tag and bam the images containing that tag appear, or even better a permanent list of our tags on the left we could click on. Crossing the tags would be great too, I don't know if it's already possible, but how would I know, the interface is messy anyway, there is too much information, too many menus to dig through just to pull off a simple request. >When scrolling I would personally subtract one line. What I mean is that if a thumbnail is at the bottom of the interface, and you scroll down one unit, the picture is no longer visible and it's confusing. Imo when scrolling, the bottom line should become the top line, but not go offscreen. The biggest issue is copying all your files to the C disk without asking. It's a no-no in my book. qualities : >you can see all the files as thumbnails on the right. Basic and logical, but Exiftool didn't think of it... (but Exiftool doesn't recreate your entire media library on your C disk without asking) >The thumbnails don't lag when scrolling. You can move the scroll up and down, even with 25k files, they all load in 1 sec. It seems you can download pictures with the tags, which sounds useful, but I didn't use that function yet. I'll try in the next days. Can we download the tags for a picture we already own? Sounds complicated, it would have to use something like saucenao, but only for Danbooru. What happens if we move files? Will they still appear in the tag searches etc? My criticism can sound harsh but you did a great job and I think you're the only one proposing such a software to easily tag your pictures. Is it a recent project? 
I feel a lot of potential in it if the issues I addressed are fixed.
Also >right-click>manage>ratings : the window appears in the top left corner and is tiny. Appearing in the center would be better imo. Is there a way to sort files which don't have a tag? To differentiate files you tagged from those you didn't, instead of checking them one by one.
>>15181 Most of this is stuff hydrus already does or has. Please read the docs. >no shortcut to manage tags, reachable in two clicks. It's the main use you'll have, no? Put a shortcut key. Ctrl + T ("t" for tag). F3 by default. Configurable in the "shortcuts" settings in the file menu >when managing the tags the window appears on the preview thumbnail on the left and hides it. You usually tag what you see. Make it appear on the thumbnails on the right, not the big thumbnail on the left which we want to see to name all the tags. We can't select the thumbnails on the right anyway once the window is open. You can configure how windows are positioned and sized in the settings >Ratings only allow you to make it favorite or not. A 5 stars notation would be way better. Sometimes you like an image and would like to give it 4 stars, but you don't consider it a favorite for example. I suggest 5 stars you can click on, like the Windows system for your music files. You click on the star, the stars on the left become yellow. A warm yellow orange like in picrel. Please allow the fractions, like 1.5 star, 2.5, 3.5 etc. Clicking is easier than entering numbers I think. You can create and delete ratings services in services>manage services. You can create a "numerical rating service" with 5 stars exactly how you want. You will then be able to click them in the media viewer and the manage ratings dialog. >I have 75k images and when importing them, Hydrus creates a temporary file that goes on the C disk, but you don't usually have as much space as on your D disk. I have 50gb free on it and Hydrus doesn't use it... The result is that the import is blocked at 25k files (1/3 or so) and now my C disk is full and I can't download anything anymore... you can set the temp directory with a launch flag. See the docs. 
>You have to click many times like go in one of the many cases, type the tag > right-click > and then search in different menus "opens a new page" in order to see the pictures that contain that tag. What? You just type the tag and hit enter to add it to the search. >>15182 >right-click>manage>ratings : the window appears in the top left corner and is tiny. Appearing in the center would be better imo. Again, you can configure how these appear in the settings. >Is there a way to sort files which don't have a tag? Yes, there is a system predicate for number of tags and you can also sort by number of tags.
(109.03 KB 1280x720 Karen7.jpg)

>>15181 >Alright, just tried it quickly yesterday and today, first impressions : >flaws: This is hilarious. KEK
In 'migrate tags', when you export to a 'Hydrus Tag Archive', you have to choose a file name. Maybe it's silly to want to export everything, but if anybody does it, here is a feature suggestion. The variables affecting the export are: - content - source - filter - status - files - tags - hash type 'content', 'source', 'hash type', 'status filter' are very simple. 'files filter' can also be simple. 'tags filter' is optional. I think all the simple ones can fit in the filename, so a pattern editor like in 'export files' > 'filenames' would make it easier to choose. Related to 'export files', the default 'export path' should probably not be straight into the hydrus_db directory with all those important files.
>>15181 Some of these were answered, but I'll add: >Also what are all those fb folders in C:\Users\xxx\Hydrus\client_files\fb0 ? Those are the folders where your files are stored. The "f" is for file and the other two characters are the start of the image hash. Then there are folders that start with "t", which are for thumbnails. This is probably for organization/optimization purposes. >There is no other way than copying all the files? It means you have to double the weight of all your picture files in order to tag them with Hydrus? It's a huge problem, dude! Hydrus isn't a program that tags images in an existing folder structure, it's a program that's supposed to replace your folder structure. You're supposed to ditch the old way of organizing files for the new Hydrus way. Why that is is explained in the docs, in the FAQ I think. >And even if it's the only way you found then at least we should have the choice of the place to put it on the large D disk. You can. Simply move your db folder to where you want and start the program with the "-d="D:\path_to_the_folder"" flag, or add it to the shortcut. There is also a way to migrate files from within the program, I think it's in database > move media files... >Are they temporary? I hope so. If not it's catastrophic. Nope, that's by design as explained in the first point. >No way to directly see the favorites. We should have a star we click on and bam the favorites appear. There's a system:rating predicate that lets you filter favorites. Though being able to see which files are favorited on the thumbnails would be nice, which was requested before. >We should have a permanent box where we can type a tag and bam the images containing that tag appear, Not sure what you mean here, that's exactly how it works? >or even better a permanent list of our tags on the left we could click on. This was also requested and the dev said he wants to add some kind of tag wiki eventually, I think. 
>Crossing the tags would be great too Crossing the tags? >When scrolling I would personally subtract one line. What I mean is that if a thumbnail is at the bottom of the interface, and you scroll down one unit, the picture is no longer visible and it's confusing. Imo when scrolling, the bottom line should become the top line, but not go offscreen. There's a setting in options > thumbnails that changes how much the screen moves when you scroll. Not sure if you can replicate the exact behavior you want, but you can at least somewhat emulate it. Personally I use 0.1 as a value, it makes me scroll more, but at least the scrolling is "smoother" and I don't get lost easily.
>>15186 >Nope, that's by design as explained in the first point. second point*
>>15186 > Simply move your db folder to where you want and start the program with the "-d="D:\path_to_the_folder"" flag, or add it to the shortcut. There is also a way to migrate files from within the program, I think it's in database > move media files... Just to add to this. In the case that the drive you installed Hydrus to is an SSD (doesn't have to be your C drive) and the larger drive is an HDD you should not move the db folder containing the .db files or thumbnails to the HDD but just move the actual media files using the "database>move media files" option in hydrus.
my eBay downloader
I had a great week. I improved quality of life and fixed some bugs. The release should be as normal tomorrow.
>>15172 Yeah, just duplicate the "downloadable/pursuable url" content parser and change it from url to tag and add a namespace, then go to edit formula > string processing and add a string converter with regex substitution that gets rid of everything before the last "/". You'll have to do this for every downloader you use though. Also if a downloader gets multiple files, you'll have to use subsidiary page parsers to split your parser into multiple mini parsers for each file or every file will have the same filename.
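The regex substitution described above is just "delete everything up to and including the last slash". In Python terms (the same pattern should paste straight into the String Converter's regex substitution box, though I haven't verified that exact dialog here):

```python
import re

# Greedy ^.*/ eats everything up to and including the final "/",
# leaving only the last path segment of the URL.
def last_segment(url):
    return re.sub(r'^.*/', '', url)

# last_segment('https://files.catbox.moe/abc123.webm') -> 'abc123.webm'
```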
>>15191 >>15172 Oh, I just noticed the direct image url part. I'm not sure if that's possible, since that doesn't use any parser. You could try making one, but I don't know if that would work. You'd need to make a new url class and then match a parser to it where you use the context variable formula type to get its own url which you use as both the filename and url to download. But that might just create a loop.
>>15184 well, software must be intuitive and noob-friendly to be efficient too imo. That's why Blender is destroying Maya these days. Same perfs, 10 times easier to master and understand, you easily find what you're looking for by yourself even without documentation. Also fuck you. >>15183 > Configurable in the "shortcuts" settings in the file menu >You can configure how windows are positioned and sized in the settings >you can set the temp directory with a launch flag. See the docs. Alright, nice. Done. >You can create a "numerical rating service" with 5 stars exactly how you want Ok, done. Lacks the .5 but ok. >What? You just type the tag and hit enter to add it to the search. Ok. I find it messy when you don't know, though. >Yes, there is a system predicate for number of tags and you can also sort by number of tags. nice. Thanks for your answer. >>15186 >Hydrus isn't a program that tags images in an existing folder structure, it's a program that's supposed to replace your folder structure. You're supposed to ditch the old way of organizing files for the new Hydrus way. Wow. It's a big deal. You know, when someone recommended it to me, he just said "yeah you can tag your pictures like on Danbooru." Also this information should be stated in, like, the first line of the introduction on the homepage, and not in the FAQ, because it remodels your entire conception of how you use your computer every day. Today everybody thinks of files in terms of folders filled with other folders and files, like a tree with the roots being the C and D disks in "computer". If you tell them "nah, with Hydrus you just move THE ENTIRETY of your files into one big folder in order to tag them" then at least it should be in the very first line of the presentation because it's a big deal. Also I get that a lot of things I mentioned are answered in the docs, but don't forget that people don't start using a software by reading 20 pages of documentation and FAQ. 
You first try to use the software, and when you fail at being able to do something, then you try the documentation and FAQ. So yeah, when you open up Hydrus, you just think "hey, I'm gonna tag those images", you see import and click on it, and then suddenly everything is moved to an obscure folder without asking and put in disorder. Am I the only one to find this super hyper questionable? Also I saw that the "delete it when successful" checkbox comes ticked? wtf You spend hundreds of hours over years sorting things into folders only for everything to get moved into disorder? That's fucked up. Other than that, great tool, yeah.
>>15186 >Not sure what you mean here, that's exactly how it works? ok, I understand. As I couldn't import the whole folder because of what I mentioned above about the C disk etc, the import isn't finished and so the left menus don't propose tags to filter by. Seriously, am I the only one whose picture folder is heavier than the free space on my C disk? I can't be the only one. I'm sure some anons must have some 1tb picture folders. One strong point of Hydrus is the number of supported file extensions. .epub, .docx and .pdf would be nice too, but it's honestly very nice that you can tag 99% of your pictures, videos, gifs and audio. Exif tags can only apply to jpg and mp4, which means you can't even tag PNGs and webms, which are super common. >Crossing the tags? Yeah, like refining the search with many tags. I found out, you can. >There's a setting in options > thumbnails that changes how much the screen moves when you scroll. Ok nice, I'll do it, thanks. >>15188 Alright, thanks. The software is overall very adjustable, which is great, but I keep thinking it could be more noob-friendly from the start.
>>15193 >everything is moved to an obscure folder without asking and put in disorder Only if you told Hydrus to delete the originals when importing. If so, that's your problem and your fault. Restore from your backups! .... you do have backups right? Right??? Surely > Am I the only one to find this super hyper questionable? Probably? This is super standard and basic database behavior. It's a database it doesn't matter to a user at all what directory the files are stored in. All that matters is that it can find them for the user. That's the whole point of a database is to consolidate. >You spend hundred of hours for years to class things in folders only so everything get moved in disorder? That's fucked up. Again, restore your backups (if any lol), and re-import things. There's a ton of options for auto-tagging imports based on file and folder names. The only reason all that work would have gone to waste was because you went "herp derp, push button!". >You first try to use the software, and when you fail at being able to do something, then you try the documentation and FAQ. Which is exactly why you're having problems. Life pro tip: Always RTFM. Read the fucking manual. Not just with software, with everything. The manual is there to teach you so you don't fuck up. It's not there to unfuck up your shit. Nothing wrong with asking for help, but all your problems were easily preventable with a few minutes of reading.
>>15194 You seem to still have some misunderstandings. The import page that opens when you import files will always look like that. It won't change into one where you can do searches; you need to open a search page for that. >epub, .docx and .pdf would be nice too All of these are supported already. There is a page on the docs site listing every supported format.
>>15181 LOL. LMAO, even.
https://www.youtube.com/watch?v=Auc5wHXPQaw windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v575/Hydrus.Network.575.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v575/Hydrus.Network.575.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v575/Hydrus.Network.575.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v575/Hydrus.Network.575.-.Linux.-.Executable.tar.zst I had a great week. I made a bunch of small improvements, and I am gearing up to start duplicate auto-resolution. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights The new tag autocomplete 'children' tab now sorts by count and clips to the top n (default 40) results. You can change the n under options->tags. This takes a bit of extra CPU to figure out, so let me know what performance you see in IRL situations. When you ask subscriptions to 'check now', if there are a mix of DEAD and ALIVE subs, it now asks you which ones you want to check. Should be easier to resuscitate DEAD subs en masse now. More twitter URLs are supported by default. You should now be able to drag and drop a vxtwitter or fxtwitter embed URL on the client and it'll figure it all out. I fixed the (non) alphabetisation of GET URL parameters in URL Class settings, which I broke by accident a few weeks ago with the new URL encoding tech. If you have a super advanced downloader that relies on parameters being in a certain order, please try again and let me know if you still have any issues. duplicate auto-resolution This week I also put time into planning out the duplicate auto-resolution system, which will optionally and highly-user-configurably resolve the easiest duplicate pairs without human input. I am happy with my plan, and I have sketched out a skeleton of code. 
The whole problem looks simpler than I expected, which is good, but as always there will be a lot to do. I will keep chipping away at this in the coming weeks and will roll out the features in waves for advanced users to try out. The first version will resolve jpeg/png pixel duplicate pairs. next week A bit more auto-duplicate resolution work, and I'd like to see if I can reduce some of the juddery latency when you try to look at media, particularly videos, while the client is busy doing imports in the background.
>>15198 when it comes to duplicate auto resolve, how far is it going? I assume pixel perfect and png versions of jpegs are going to be auto resolved, but how will it handle, let's say, something like an image set vs noise? I have seen small changes, let's say panties being there or not, show up in the... I just opened up a new duplicate tab and went 0 distance and got an image where there is an entire arm that's now in the way, and in the same image set I have one that's just slightly more noise. Is there a way to tell those apart, like a concentrated area of high difference vs an entire image of low difference/noise? hell, to that extent, would it be possible to have it set images to potential states waiting for human input? a while back I had an image set where there were something like 130 images total; realistically I wanted the best of the same images and only some of the variants. This set took a stupidly long time to parse because of manually marking them related alternates, rather than dealing with the duplicates. so would it be possible to have auto resolve set images to a non-binding/non-permanent file relation that could be filtered out from manual checking? at the very least in my case, I want to kinda veg out to a podcast while doing a/b of duplicates, not needing to constantly look entire images over to see if there is something different enough to warrant keeping both. that way I could go through the potential alternates later on once the more egregious crap is dealt with.
>>15198 Ok, just did a test, the reddit searches are now parsing correctly. the good: it works. the bad: I can't just retry ignored, I have to go in and manually tell it what to reparse. I do have a potential fix/way to make this easier that shouldn't be too hard, at least going forward: would it be possible in the logs to have a column, I think in-between the # and source, that just puts the version of hydrus in there? that way, if something goes wrong with a version, it's a clear mark as to how far back we need to redo things.
The >on normal checks, get at most this many newer files option for subscriptions is misleading. It actually means at most this many new urls. So if you have a pixiv subscription and you leave it at the default 100 "files" setting, it's very easy to suddenly download 1000+ files from under 100 urls like I did today, since dozens of files can be posted per pixiv url. Would it be feasible to change this so it actually checks the file download count, and not just the url count? Ideally, in my opinion, if it hit the limit in the middle of a url, it would get the rest of the files from that url before stopping.
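To make the requested behavior concrete: the check loop would count files rather than post urls, and finish whatever post it is in when it crosses the limit. A rough sketch of that logic (pure illustration, not hydrus code):

```python
# posts is a list of lists: each post URL yields some number of file URLs.
# file_limit caps the total files, but a post that has been started is
# always finished, so the result can overshoot the limit slightly.
def files_to_get(posts, file_limit):
    got = []
    for post_files in posts:
        if len(got) >= file_limit:
            break                   # limit already reached, stop cleanly
        got.extend(post_files)      # finish the whole post we started
    return got

# With a limit of 3: the first post (2 files) is under the limit, so the
# second post is started and finished (5 files total); the third is skipped.
result = files_to_get([['a', 'b'], ['c', 'd', 'e'], ['f']], 3)
```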
>>15198 >The first version will resolve jpeg/png pixel duplicate pairs. Cool! This is what I've been waiting for. Anything 100% pixel duplicate.
>>15198 Forgive me if you've talked about this already, but for the duplicate && auto duplicate filter, would it be possible to get some exclude rules based on filetype? As a practical example I've had pixel perfect dupes where one's a PNG and the other's a PSD. In this case I'd want to keep both so removing either would be bad. Although copying tags across both would still be useful.
>>15202 I agree that the wording is confusing, but I actually prefer the current (in practice) behavior of it being post urls. If you (Hydev) are going to change it, could you leave the current behavior as an option?
Just found out about Hydrus. I am not sure if it covers my usecase. I want a way to manage my files by tags. The text files I handle with git and upload them. But git is not great for binary files. Basically I would dump all image, video, audio, document (PDF, epub, docx, odt) files and manually tag them, so I can find them quickly instead of navigating nested folder structure. Would Hydrus be good for that usecase?
>>15206 From the sound of it, that's exactly the point of Hydrus. It's to use tags for organizing files so you don't have to use folders anymore. It also has a downloader system, but it's optional, so you can ignore it if all you want is tagging local files that you already have.
>>15206 >Basically I would dump all image, video, audio, document (PDF, epub, docx, odt) files and manually tag them, so I can find them quickly instead of navigating nested folder structure. So long as you can remember your tags and these aren't files your regularly modify, go for it.
Any PTR jannies here? What's the consensus on tags like danbooru id:123456, gelbooru id:2468, or misskey user id:qwerty12345? I just ran a massive import from hydownloader with new settings and it got all of these but I've never seen these on the PTR before. Hydownloader gets these for basically every site now but I'm not sure how useful they are.
Elon just changed Twitter's domain to X, which made the old links I archived unrecognizable by Hydrus Companion. Is there any way to change all old links from Twitter to X en masse?
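If you end up scripting it yourself (e.g. pulling each file's known urls over the Client API and re-associating rewritten ones), the transformation itself is trivial; a hypothetical sketch of the host swap:

```python
from urllib.parse import urlsplit, urlunsplit

# Rewrite twitter.com hosts to x.com, leaving the rest of the URL intact.
def to_x(url):
    parts = urlsplit(url)
    if parts.netloc.lower() in ('twitter.com', 'www.twitter.com', 'mobile.twitter.com'):
        parts = parts._replace(netloc='x.com')
    return urlunsplit(parts)

# to_x('https://twitter.com/user/status/123')
#   -> 'https://x.com/user/status/123'
```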
>>15211 >Elon just changed twitter.com to x.com Great, that probably just broke my 200 twitter subscriptions
Could we get an option to temporarily disable sidecar exports? In case you want them off temporarily but don't want to erase them.
>>15206 >Would Hydrus be good for that usecase? Yes, but with caveats. Hydrus is only as good as the tags you type, which means it excels at laser-focused searches. That said, it is very bad at the feeling of "discovery", or the fuzzy searching you experience when browsing folders, so it is very likely that some files not so well tagged, or whose exact tag you forgot a year later, will be lost forever among a sea of tags. So keeping your original folder structure and files intact in parallel may be advisable, even more so when you consider that dealing carelessly with the database has catastrophic consequences. That way you will be able to recreate the database from scratch any time and with any version of Hydrus, especially if you keep the original files together with their sidecar tag files.
>>15213 It's an annoying workaround, but you could export the sidecar settings to clipboard, delete them, export, then import back from clipboard.
Welp, I appear to have messed up my Sankaku settings, now only hydownloader works for grabbing stuff. I tried following the CAPI-V2 Auth settings in the Presets and Scripts repo but I must be missing something because it doesn't work. Does anyone have a working Sankaku setup and would you be willing to explain how you got it to work?
>>15219 I have done this with another site. You make the parser pick up the folders as sub-galleries along with all the files. Let it run and you'll end up with all the files and the error "This url class is a gallery" or something. Then you go and either remove the url class or change it, and retry all the file links.
I think I found a bug. It seems like the search bar (ctrl+p for me) doesn't find pages of pages. As in, if a page of pages has a certain name, it won't show up, but a file page or downloader page with the same name will show up.
>>15146 >v574, 2024-05-09 08:34:25: [MPV error] main: Too many events queued. Ah, yeah, this may explain things. MPV creates its own event loop on a new thread, and the python mpv library is basically passing my Qt events and the MPV signals back and forth. I can see how a halted thread for whatever reason (usually my shit code, but it can be for all sorts of reasons like the GPU sperging out) could cause a clusterfuck here. I will keep working on making this stuff work smoother, and I hope this week to have nicer mpv interactions. Please keep me updated on any changes on your end. >>15147 Note for hydrus itself, you can also run it with --temp_dir to do the same thing within the program: https://hydrusnetwork.github.io/hydrus/launch_arguments.html#--temp_dir_temp_dir >>15148 Very good point, I wonder if I can tease the logic apart to make this happen. >>15164 >>15166 >But I did many tag changes in the past week, and the March 3 db also had the damage. Should I try placing some older client.cache.db files into the db dir temporarily and repopulating? In general, it is pretty dangerous to swap files about. Since these are close together, it is probably ok, BUT: make sure you make a backup beforehand, and do not let yourself get confused about which file is which when you do the swaperoo. You would want to swap in the caches db that has extra tags, run the 'repopulate truncated mappings tables' job, then exit and swap back to your real caches db. Do not allow any file imports during this special boot. >How to find out what's missing? If your PTR-syncing mappings cache is only 160MB, then I am afraid you have lost almost all your processed PTR stuff (it should be like 60GB I think). Hit up services->review services, and on the PTR tab hit 'reset processing->wipe all database data and reprocess'. It'll be cleaner to just start over than try to fix the stub of weirdness that has survived. EDIT: I read your next bit. 
If you don't want to use the PTR any more, you can just delete it under services->manage services. If you just want to keep syncing with its parents, then you can configure that under the 'review services' panel. >Does that mean all the files in hydrus_files/ (and not just the files in the recently open tabs)? Yeah, it means all the files you have in file storage, everything on disk, or in 'all local files'/'all my files', depending on which you can see and which are in most cases basically the same thing.
>>15178 Thanks. Both of these images render ok for me, so I wonder if they have been recently fixed. Try right-clicking on them and hitting manage->maintenance->regenerate thumbnail. Do they fix themselves? Your question about URLs and hashes is tricky. Hydrus usually has a direct mapping of hash to URL, but as I'm sure you know, merged mappings or bad sample image parses or just cloudflare fuckery can mess with it. I will always want the raw files, if possible, and sometimes it even helps to have them zipped, since if they are busted they might not even upload, and on some platforms (e.g. Discord), cloudflare will optimise images sent via DMs, so it is nice to have the byte-perfect copies. As it is, I get the same hashes as your filenames, so these were byte-perfectly transferred. If you ever need to send me a big file, catbox is a good answer, or for a really big file I like to use (and recommend) croc for a direct transfer: https://github.com/schollz/croc (only thing that leaks is your IP to the other user) >>15180 Thank you, I will integrate this into the defaults! >>15181 >>15193 Thank you for your feedback. Making UI more intuitive and clean has always been difficult for me, so knowing which things are least helpful is useful. Some of your points, like F3 to open manage tags by default, were not answered in the getting started guide, or, if they were, could at least have been explained better. Can you walk me through how you found that guide? Was it good, unclear, difficult to understand, and/or annoying? It is ok to say you abandoned it or whatever. It sounds like I should be more upfront about the file storage system at the very least. I would normally expect someone to be confident in the program after, say, two weeks of use at least. My ideal is that they work through the getting started guide in phases, doing a bit here and a bit there and slowly getting more and more knowledge as they poke around the different systems. I wonder if I can make that more clear too. 
There are lots of little weird under-documented features like the new ratings stuff, or the FAQ questions about file storage--do you think I could be more upfront about that? >>15185 Interesting idea! Maybe I can init that default filename more cleverly, although I'd generally say that HTAs are normally quite a rare thing to interact with--have you recently had reason to make many of these, enough that you need a filename scheme to keep up with them? Can you talk more about that? >>15199 The dream is that this system works through a new object called the 'Metadata Conditional'. You can probably search through these thread archives and see me talk more about it. Basically this core lego brick of logic will take a file and test it for like 'filesize is > 100KB' and output True/False. The duplicate auto-resolver will have a little extra tech to allow for pair comparisons of 'file A filesize > 4 x file B filesize', but it'll all work on the same basic system. The first waves of the auto-resolver will work on this level, of expanding the MetadataConditional to talk about more types of metadata. It will probably reflect the existing comparison statements you see in the duplicate filter, the stuff like 'this file has an icc profile, the other does not' and 'resolution is much bigger for A'. You'll be able to set multiple rule tests like this and say 'if the files are exact match distant jpegs and file A is at least twice as big as B, then keep A and delete B, use default merge options'. I can't promise this system will do much clever stuff, but that's the point in a way--this thing is supposed to clear the easiest problems so you can use your human brain on the difficult ones. Stuff like 'these are costume/WIP variants' or doing other clever file-region comparisons is probably too complicated to automate with simple maths, but we might be able to come up with a 'this file is 27.2 jpeg artifacty' coefficient that we could plug into the above logic. 
In any case, if the majority of the drudgery is cleared for you and instead of 500 mostly boring decisions you have 40 mostly interesting, I'm hoping the whole system will just be nicer and more feasible to work with. I don't really know what the hell I am doing though (I have 700k potential dupes at exact match distance on my client, I have absolutely no idea how to think about that in human terms), so mostly I just want to get this framework working with some easy cases and we'll see what happens to the overall shape of the problem.
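As a rough illustration of that lego brick, here is a hypothetical Python sketch. The names and structure here are mine, not hydrus's actual code: a single-file conditional that takes a file and outputs True/False, plus a pair comparison of the 'file A filesize > 4 x file B filesize' kind that the auto-resolver would layer on top:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for a file's metadata row.
@dataclass
class FileInfo:
    filesize: int  # bytes
    width: int
    height: int

# A 'Metadata Conditional': takes one file, outputs True/False.
@dataclass
class MetadataConditional:
    test: Callable[[FileInfo], bool]

    def matches(self, f: FileInfo) -> bool:
        return self.test(f)

# 'filesize is > 100KB'
bigger_than_100kb = MetadataConditional(lambda f: f.filesize > 100 * 1024)

# The auto-resolver's extra tech: a pair comparison,
# 'file A filesize > 4 x file B filesize'.
def a_much_bigger_than_b(a: FileInfo, b: FileInfo) -> bool:
    return a.filesize > 4 * b.filesize

a = FileInfo(filesize=500_000, width=1920, height=1080)
b = FileInfo(filesize=100_000, width=960, height=540)
print(bigger_than_100kb.matches(a), a_much_bigger_than_b(a, b))  # True True
```

A rule like 'if exact match distant jpegs and A is at least twice as big as B, keep A' would then just be a few of these predicates ANDed together.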
>>15201 I don't have proper multi-column list 'show/hide' tech yet, which I would want before I added a very specific column like that, and I'm afraid I don't really record which version of hydrus any particular action happened on. I'm not totally sure I understand what you need to do here to get it to 'reparse', but can you sort by a 'modified time' or similar, and then cut off everything that happened before you updated to version x? >>15202 >>15205 Ah, thanks. At the very least, I'll rename/tooltip this to be less confusing. Funnily enough, the 'file limit' you see on a normal downloader does work like this. However, can I ask why you would want to limit this? Normally, this value is meant as a backstop to catch a gap in a subscription sync (e.g. when a user uploads 60 new files for a search in one day) versus when a site goes crazy and renames all their URLs, making the subscription think the whole search is a gap. If you move a download from a download page to a subscription, this stuff usually doesn't come into play much, although I think I can see how pixiv, which has a weird way of fetching gallery pages behind the scenes, might cause weirdness here in the search. Anyway, if you can walk me through this more, that would be helpful. I'm not sure I've ever changed these values in my own client, for instance, so I'm clearly missing something from the workflow you are going for. >>15204 Yep, it will be highly customisable, including for filetype. This first version will be jpeg/png pixel dupes, hardcoded, and then I'll expose ever-more-complicated logical lego-bricks so you can make similar variants for your own situation. >>15210 I don't know how the jannies feel about them specifically, but I believe the user ids tend to be viewed as more useful than the post ids (they are sometimes siblinged to the actual pretty 'creator:' tag or whatever). 
I don't personally like post ids in tags--tags are for searching, not describing--and doubly so because hydrus already has robust known URL storage, but other people have other preferences and unusual API workflows going on. The janitors have not blanket-banned these namespaces, so I guess someone likes them! >>15211 >>15212 No way yet, but I would like this. We are really lacking en masse URL management. Your best bet right now is the Client API, but we are also fudging some URL Class stuff our end to try to make sure things still work no matter if you give a twitter.com, x.com, or vx/fxtwitter.com URL. If you don't have a custom twitter downloader already and aren't on 575, I recommend updating, since by chance I happened to add some of this URL Class stuff just this past week. >>15213 Interesting, sure I think I can. I'll stick it in file import options or something. Speaking of, I really want to get on top of a favourites system for the import options, so they are easier to set up. I hate so much of this workflow. >>15219 >>15221 Yep, sorry, I designed the system for boorus and gallery sites. Maybe one day we'll have regex multi-path components for URL Classes or something, but the URL Class dialog, as you've seen, is already a clusterfuck of settings. The whole thing needs an overhaul. >>15220 I do not, and I know very little about the downloader, but I'll say that every time I look at what the guys who are trying to download from sank have to do to get the right cookie or token or whatever, I die a little inside. I've just written sankaku out of my life pretty much, and perhaps I miss a file here or there but it has made my life a lot technically simpler. I've already got enough to be getting on with. >>15222 Interesting, thank you for this report. I'll see what I can do.
>>15225 essentially what happened with reddit was it would see 1 link, see it has multiple images, and then add those images into the log underneath it. hydrus parsed those links wrong, or added crap to them that broke them. if I went in there and reparsed everything going back to when this started, it would still try to reparse those exact 403 links as well, and that really gums the system up. so what I ended up doing was going through each one, deleting the 403 ones, and setting its parent to reparse. the use would be: if a bug/problem like this comes up in the future, knowing which version of hydrus did it would make it easier to go in and reset everything to retry, without doing it by time. it's not something that's needed, more of a nice-to-have if it could be implemented easily.
>>15223 >>>15164 >>>15166 >then exit and swap back to your real caches db. Do not allow any file imports during this special boot. Thanks. I hope I did that. I haven't tried it with older cache files. >>How to find out what's missing? >If your PTR-syncing mappings cache is only 160MB, then I am afraid you have lots almost all your processed PTR stuff (it should be like 60GB I think). Hit up services->review services, and on the PTR tab hit 'reset processing->wipe all database data and reprocess'. It'll be cleaner to just start over than try to fix the stub of weirdness that has survived. >EDIT: I read your next bit. If you don't want to use the PTR any more, you can just delete it under services->manage services. If you just want to keep syncing with its parents, then you can configure that under the 'review services' panel. I still have 140000 pending tags there. I thought I'd let it be and add tags there in case I want to use the PTR again.
>>15224
>>>15185
>Interesting idea! Maybe I can init that default filename more cleverly, although I'd generally say that HTAs are normally quite a rare thing to interact with--have you recently had reason to make many of these, enough that you need a filename scheme to keep up with them? Can you talk more about that?
._. It seems I was too optimistic with my old memories. There is little use for it. Recently I merged two tag services and forgot to migrate siblings/parents. I could probably get them from an old backup along with the PTR. I had saved the current/deleted mappings, siblings and parents in December, and I thought it was a significant backup at the time. But it was not much. There is a lot of important non-tag data, and backing it up other than by copying the entire database is impossible or really complex. It's a mistake to think that exporting only the useful files with sidecars is enough to get free from hydrus. Sidecars can only be used for tags, notes, URLs and timestamps, and you have to maintain a file format. Importing file and directory names as tags is lossy! Maybe that should be highlighted in the GUI. The database contains, from most important to least important (only what I could think of):
mostly added by myself:
- mappings for the current files, per tag service (I have 11, could merge into 4-5 if I don't care about ever migrating them to the PTR)
- ratings
- my siblings and parents (current)
- my siblings and parents (deleted)
- what are those, petitions?
- notes
- file services
mostly automatic, necessary to track undertagged files:
- URLs
- import time
- subscriptions
mostly important if you have other backups or subscriptions:
- mappings for deleted files
- deletion reasons
- hydrus settings for sorting and related tags
nice to have:
- siblings and parents from the PTR
too big to have many snapshots everywhere:
- unused mappings from the PTR.
>>15229
>Importing file and directory names as tags is lossy! Maybe that should be highlighted in the GUI.
I mean, even if you are careful to specify the levels, there may be more levels, case differences, and whitespace.
>The database contains, from most important to least important (only what I could think of):
- downloaders
- network limits
- tag presentation
>>15211 Hydrus Companion has url regex replacement rules you can set in the settings. This was already posted in the Discord a while ago but here:
```
/x\.com
/twitter.com
twitter\.com
x.com
```
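If you want the same normalisation outside Hydrus Companion, the rules above boil down to a pair of regex substitutions. Here is a minimal Python sketch (my own helper names, not Companion's actual implementation; only the first matching rule is applied so the two rules don't undo each other on the same URL):

```python
import re

# (pattern, replacement) pairs mirroring the Companion rules above:
# path-style x.com -> twitter.com, and twitter.com hostnames -> x.com.
RULES = [
    (r'/x\.com', '/twitter.com'),
    (r'twitter\.com', 'x.com'),
]

def apply_first_matching_rule(url: str) -> str:
    # Stop at the first rule that matches, so a rewritten URL
    # is not immediately rewritten back by the other rule.
    for pattern, replacement in RULES:
        if re.search(pattern, url):
            return re.sub(pattern, replacement, url)
    return url

print(apply_first_matching_rule('https://x.com/user/status/123'))
# https://twitter.com/user/status/123
```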
>>15229 > - mappings for the current files, each tag service (I have 11, could merge into 4-5 if I don't care about ever migrating them to the PTR) Can you explain how you ended up with 11 tag services, to give me an idea what I could do in future? Each for different boorus like the Hydrus site suggests? I have too little experience with this so excuse me, my hydrus has yet to be filled. The hydrus site says: "You might like to add more for different sets of siblings/parents, tags you don't want to see but still search by, parsing tags into different services based on reliability of the source or the source itself. You could for example parse all tags from Pixiv into one service, Danbooru tags into another, Deviantart etc. and so on as you chose." What does "tags you don't want to see but still search by" mean? Isn't the "manage tag display and search..." option already doing that? Blacklisting a tag there works to not display a tag and still search for it, but how would that example work with different tag services? Let's say we have an SFW and an NSFW tag service. If you activate the SFW tag service and search for a tag from the NSFW service, it doesn't let me apply the tag in the search bar. So that's probably why I don't understand the logic behind it. Does it even make sense to separate SFW and NSFW by tag service instead of just file services? Also, I guess it is by design that you can't activate two different tag services without having to activate the "all known tags" service, otherwise this would already be there? For example, you can't activate SFW tags + NSFW tags together; only one will ever work, or "all known tags". Would it even make sense to allow it? Any ideas and explanations are welcome. Bug: I want to report a bug that is somewhat rare. I activated the "close the main window to system tray" option. 
I found out, which might be coincidental, that when I close the window via the "X" on the top right, the program might freeze when trying to open it again from the system tray after I was away from the computer. It will open, but I cannot click on anything in the main window at all; I can only minimize it to the task bar again and maximize it back and so on. Inside the program nothing is clickable, without any visual mouse highlighting either, so I have to kill it in task manager. It only happens rarely. I never turn off my computer and only go into "lock" mode on Windows, not standby or anything. If I close the program through "file" -> "minimise to system tray", it has never frozen once yet. That's why I keep doing that, even though the option "close the main window to system tray" is still activated. I think it might be that option causing it, so maybe Hydev can investigate.
Is there any danger in running only SELECT statements on the .db files?
>>15233 >Bug: Want to report a bug Another bug: The new "how many tags to show in the children tab" option goes from the default of 40 to 20 after you check the "show all" option, apply, and then open the options again. If you put in 38 or so, then check the box, apply, and open the options again, it won't go to 19 or so but stays at 20. I wanted to see if it halves the number for some reason, but it doesn't.
hi, i'm currently using the watchers to download 4chan threads, and i have the 'filename' tag enabled. i would like hydrus to add all filename tags except the 4chan timestamp epoch filenames (ones that look like 1700000). is there a way to do this?
I had a great week. I cleaned some jank code, fixed some bugs, improved some downloader UI, and massively reduced file load lag when there are many imports going on. The release should be as normal tomorrow.
>>15224 Yep, regenerating the thumbnails worked, I never even thought to try it lol.
>>15238 >except the 4chan timestamp epoch filenames (ones that look like 1700000). "1700000" could be the image downloaded from https://derpibooru.org/images/1700000 The Unixtime ones are longer and are generated by userscripts. The 4chan ones are saved only in the URLs, not tags. If you still want to omit the names consisting of many digits, do this: network > downloader components > manage parsers > the parser > subsidiary page parsers > posts > content parsers > filename > edit formula > no string processing > add > String Converter > add conversion type: regex substitution regex pattern: ^[0-9]{13,}$ regex replacement: It should replace any filenames consisting of only 13 or more digits with nothing, so the tag will not be added.
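If you want to sanity-check the regex before wiring it into the String Converter, the same substitution is easy to test in plain Python (a throwaway sketch, not part of hydrus):

```python
import re

# Mirrors the String Converter rule above: a filename that is nothing but
# 13 or more digits (a unix-millisecond-style name) becomes empty,
# so no tag gets added for it.
PATTERN = re.compile(r'^[0-9]{13,}$')

def filter_filename_tag(filename: str) -> str:
    return PATTERN.sub('', filename)

print(filter_filename_tag('1715200000000'))     # '' -> tag dropped
print(filter_filename_tag('cute cat picture'))  # unchanged
print(filter_filename_tag('1700000'))           # only 7 digits, kept
```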
Does the regex for the content parser not support just capturing matched results? I only need to get a part of a string but couldn't figure out how to do so. I'm currently using the roundabout way of deleting everything else instead.
https://www.youtube.com/watch?v=gWmECLnMKGk windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v576/Hydrus.Network.576.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v576/Hydrus.Network.576.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v576/Hydrus.Network.576.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v576/Hydrus.Network.576.-.Linux.-.Executable.tar.zst I had a great week. The program should be less laggy when busy. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html improved file access latency I reworked the client file manager's locks to be finer-grained and more sophisticated. When you have several importers working in the background, and particularly importers handling large files, the client will delay access to your files and thumbnails far less, and generally not at all. I have been thinking about the specific change I intended to make here for a while, but this stuff can be tricky and I wanted to be careful about how I did it. All tests so far are proving good, but let me know if you run into any trouble. Secondly, when you do have file access lag, the mpv player now waits more politely. It won't hang the UI any more; it'll just sit on a black screen until the video file is ready. You can roll off to another media and come back, close the media viewer entirely, whatever you want. misc Managing 'import options' in gallery downloaders and thread watchers and their sub-downloaders should be a touch easier this week. I have replaced the static 'set options to downloaders' buttons with a dynamic button that appears only when the current selection of downloaders actually differs from what the main page has set. 
That pages and their downloaders have unsynced options has always been a complicated concept that I have had difficulty highlighting and explaining, so I hope this improves things a little just by osmosis. I also fixed a bunch of holes and bugs in the system. Thanks to a user, we now have support for .doc, .ppt, and .xls Microsoft Office formats. A whole bunch of old stuff should now work, and it should get word counts in several cases. This requires a new library, so users who run from source will want to rebuild their venvs to get the tech. If you noticed that colour picker controls would take several seconds to open and close, it shouldn't happen any more! It was a weird hack I injected to get around a Qt bug, and it seems to be fixed in the current version we use. However, if you find you suddenly cannot select any colours in the gradient square of the colour picker, that it feels like your mouse is drag-clicking the whole time, then I guess the bug wasn't fixed for you, so let me know! future build I am making another future build this week. This is a special build with libraries that I would like advanced users to test out so I know they are safe to fold into the normal release. More info in the post here: https://github.com/hydrusnetwork/hydrus/releases/tag/v576-future-1 next week I have two more weeks of work before my Summer vacation. I'll try not to start anything too big and just do cleanup and little fixes.
Does anyone know if there's any decent tools to scan and strip steganography bullshit from your image collection? Seems if anyone downloads any images from 4chan (maybe others) there's a small risk of embedded data being present from people using embedding tools.
>>15233 >Can you explain how you ended up with 11 tag services, to give me an idea what i could do in future. Each for different boorus like the Hydrus site suggests? I have too little experience with this so excuse me, my hydrus has yet to be filled. The hydrus site says: > "You might like to add more for different sets of siblings/parents, That's why I had separate services for porn and private tags. > parsing tags into different services based on reliability of the source or the source itself. You could for example parse all tags from Pixiv into one service, Danbooru tags into another, Deviantart etc. and so on as you chose." That's why I have services for different boorus and 4chan. The first tags I added to the PTR, I moved to a service that is now called "MLP publishable tags", but not all are publishable, because I didn't know what to add. There is still a "my tags" service, because a file is kind of public, but only a hundred people might care.
>>15245 Also two services for AI tags. A service for subscriptions, which for some reason contains tags from 4chan and alamy.com downloaders. A temp service. Downloaders can add tags to multiple services at the same time. > What does "tags you don't want to see but still search by" mean? I can't guess.
>>15226 Thanks for explaining. I'm not sure I can easily add this right now since the multi-file discovery/parsing happens after the file limit is applied, but I think future versions of the downloader system will have more elegant handling of common error states like 403 and 404. It'd be super nice to have better handling of multi-file posts overall. I basically had to hack in the current handling to deal with Pixiv, but it'd be nice if hydrus could better remember 'this file is the third file of this post' and we could check the known url [ url, 2 ] instead of just having [ url ], and then multi-file posts might have super fast checking. Maybe URL Classes could define themselves as 'will never change' (e.g. a multi-file tweet) vs 'can be edited' (some weird gallery/pool 'post' url), which would also help the logic here. Anyway, let me know how you continue to get on here. >>15228 >I still have 140000 pending tags there. Ah, I see. This is tricky, because you can't pend tags until you are 99.8% synced these days. You can keep those pending tags if you like, and pend them in some years when you go PTR again, or you might like to use tags->tag migration to send them to a new 'local tags service', which would be a simpler container. If the tags are mostly just from booru parses over the past couple of years, I'd say you can probably just discard them, since other hydrus users will have covered most of them--your client just can't see that, since it isn't synced. >>15229 Yeah, I wish I could keep up import/export capability with the different database metadata better. The database just has a ton of stuff these days, and it is so tempting to add some new feature without catching up on the 'overhead/admin' side of things in the following weeks. My best hope for this stuff, I think, is sidecars. I can add a new datatype to that fairly easily. I should prioritise ratings. >>15233 >Bug: Want to report a bug, that is somewhat rare. 
I activated the "close the main window to system tray" option. I found out, which might be coincidentally, that when i close the window by the "X" on top right, the program might freeze after trying to open it again from the system tray after i was away from the computer. It will open, but i cannot click on anything in the main window anymore at all, i can only minimize it to the task bar again and maximize back and so on. But inside the program nothing is clickable, also without any visual mouse highlighting. So i have to kill it in task manager. It does only happen rarely, i never turn off my computer and only go into "lock" mode on windows and not standby or anything. Sorry for this. I don't know what causes it. Some computers get it, I cannot reliably reproduce it. It seems to be some Qt event-processing clusterfuck due to some shitty UI code I wrote probably ten years ago, so I'm mostly hoping I just rewrite this code one day and accidentally fix it. AFAIK It isn't reserved just to the 'close to tray', but any long-term minimise to tray action can cause it, so it is very interesting you do not see it at all when you just minimise to tray. I will investigate this difference. If you learn anything else on what seems to trigger this bug, please let me know. >>15237 Shit, thanks. I was originally going to set 20 as the default and doubled it to 40 later in development. I must have forgotten to update the actual options widget. I don't have 'remember what the user set before they set None' tech yet, so I try to simply assign whatever the normal client default is in these cases.
>>15235 No, you are fine. You can do them while the client is running, too, I think. Always good to make a backup anyway. If you want to poke around, I'd probably recommend getting SQLite Browser. It is a good UI program. Make sure you shut the client down first if you use it, though. Happy to answer any questions if you want to talk about schema. >>15241 >network > downloader components > manage parsers > the parser > subsidiary page parsers > posts > content parsers > filename > edit formula > no string processing > add > String Converter > add conversion type: regex substitution who designed this shit UI >>15242 Yeah, some of this is limited and hacky, originally from legacy systems that worked a different way. I'm in a very long-term process of updating more ancient and even shittier regex across the program, mostly the stuff in filename tagging options, to what you are looking at in the parsing system. Once everything is on the same page, I'll be more able to update it to handle \1, \2 and named matches explicitly. >>15244 I've used optipng before, but in general I'm not fond of optimisation tech for hydrus work because stuff like the PTR and your downloaders rely on file content (and thus hashes) not changing between users. If you were to re-work everything that comes into your client, then you would see a LOT of new duplicates, which tends to be a larger hassle than just dealing with some +5% filesizes here and there. There's similar arguments around re-encoding to more efficient formats, but, that said, the mass-dupe-resolution problem is something we'll have to deal with seriously if and when Jpeg XL (a great new image format) finally goes mainstream, or if AI upscaling tech becomes ubiquitous (another reason I am currently working on auto dupe-resolution). I don't think I've seen an example of attack-steganography used in the wild, although I imagine it has happened. Have you read anything about this recently? 
Actually, now I think of it, I think I remember someone injecting IDs into some faux-viral content as part of some advertising campaign, many years ago. They wanted to track how the files disseminated across the internet, who was downloading and then re-uploading them. These days, cloudflare and the other big CDNs routinely strip metadata or 'optimise' for transit, so I imagine that any modern malicious attempts to track file transfer histories are probably overwhelmed by corporate interests to save 3% on their bandwidth bill. I seem to remember reading that Facebook now strip all EXIF etc.. data from all uploaded images simply because they had a million dox headaches from their userbase uploading stuff straight from their phones, but I may be remembering that wrong. You could probably do it if you injected steganographic content into the actual image data though (think like a magic-eye image, but something a machine sees rather than a human), and made it error-tolerant to survive crops and resizes and re-encodes, but all I know about that is experiments in the early internet days. I imagine it never panned out since you don't hear much about it any more. If you are a bad guy with the resources to make that attack, it is easier just to track people with cookies and access logs and so on. Btw, if you are looking for a way to strip your files before uploading to imageboards etc.., you can just do a search for 'strip exif open source' and you'll get a bunch of good results.
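On the earlier question about running SELECTs on the db files: if you script it rather than use SQLite Browser, you can open the database in read-only mode via a URI so accidental writes are impossible. A small sketch (the db filename is an example; point it at your own db directory, safest while the client is shut down):

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    # mode=ro opens the file read-only, so stray writes are impossible.
    con = sqlite3.connect(f'file:{db_path}?mode=ro', uri=True)
    try:
        # sqlite_master exists in every SQLite database, so this
        # works no matter which of the client db files you point it at.
        return [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
    finally:
        con.close()

# e.g. list_tables('client.master.db') from inside your db directory
```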
(10.09 KB 512x193 Fanbox.png)

So I have a guy on fanbox who posts a ridiculous shitton of variations of his pics, packed in dozens of zip files (that are thankfully recognized as cbz) and psd files, which are all huge and take long to download. It's a fulltime job to stay up to date with this guy's posts. I absolutely need to parse the filenames, otherwise this mess is impossible to navigate. I managed to modify the downloader from the cuddlebear github, and add filename parsing to the "fanbox.cc post api parser" --> subsidiary page "article files" -> content parsers. Thanks to this I can now send fanbox posts to hydrus, and it downloads them with filename tags for downloadable files. The problem is, this doesn't work when using "fanbox.cc artist lookup" for some reason. The downloaded files have no tags at all. Even the "title" tag and fanbox tags (listed below posts) are missing. Is it not using the "fanbox.cc post api parser" for downloading the individual posts? Or is the "fanbox.cc creator api parser" using its own post handling? It has a subsidiary page parser named "posts", but this one in turn is missing the subsidiary page parsers named "article files", "article images", "post files" and "post images", that the "fanbox.cc post api parser" has. I also haven't figured out what the difference between "article" and "post" is. In the "fanbox.cc post api parser", I tried adding the filename content parser to "post files" too, but it made no difference. Not sure when that's even used. Any help to get this working for the artist lookup would be highly appreciated. My edited downloader is attached.
This is a long shot, but I feel like I'm wasting a ton of bandwidth with subscriptions for artists that I really only want to save the cream of the crop from. Would it be possible to make a sub only grab thumbnails so I can pick out only what catches my eye and then have it download the full images from that selection?
Since I updated to 573 I'm getting this shit lately from gelbooru, never happened before. Are they doing stuff against scraping?
>>15250 At that point, why not just look at the page in your browser?
>>15252 That's what I normally did before, but Hydrus makes things far more convenient than keeping a long list of artist to check once a month, which I already have for those I can't easily subscribe to with Hydrus, or for which the "cream of the crop" is excessively small, but still worth the occasional look.
>>15251 Reminds me of countless ignored / dummy items on tbib whose links I can delete from the subscription logs, but they are promptly re-downloaded again, never leaving that queue. On gelbooru, however, this has never happened to me.
>>15248 For the steganography it's mainly to make sure there's not any degen shit I'm unaware of making its way into my collection if I ever download something from 4chan etc, nothing too sophisticated past the PNG Extra Embedder kinda stuff that I've seen shitposted about when someone's being blatant with it.
>>15255 >PNG Extra Embedder That hasn't worked since... May 29, 2023 apparently. https://git.coom.tech/fuckjannies/lolipiss >4chan finally patched PNG and JPEG steganography by recompressing everything uploaded. IT'S OVER
Found a weird little bug: if you copy a selection of urls from the url manager, the last url will become the first in the list.
>>15245 >That's why I have services for different boorus and 4chan. Thanks for your answers mate. >>15247 >If you learn anything else on what seems to trigger this bug, please let me know. I will observe it. @Hydev, could you spend the time and answer the following questions please: 1. Can you explain what >>15233 >"tags you don't want to see but still search by" on the hydrus site, when it comes to having several tag services, means? Trying to understand the idea behind it and see if it is useful for me. I guess not the same as the "manage tag display and search..." option? Following user also couldn't tell me: >>15246 2. Can you answer >>15233 >Also i guess it is by design, that you cant activate two different tag services without having to activate the "all known tags" service, otherwise this would already be there? For example you cant activate SFW tags + NSFW tags together (or whatever you named ur tag services), only always one will work or "all known tags". Would it even make sense to allow it like with file domains? See it as feature request in case it would be possible.
I had an ok week. I fixed some bugs and cleaned up some UI. There's also support for 'open with' and 'file properties' in Windows. The release should be as normal tomorrow.
Is there an easy way to replace nonfunctional ugoiras with mp4s or whatever hydownloader saves them as? I guess I could tag every one that I've exported and just delete them when they've been replaced? Unfortunately I have literally thousands of these.
Is a downloader for Amazon product page possible?
>>15262 Inside Hydrus? Nope. Though I remember the dev talking about wanting to add support for ugoira archives in the future. >>15263 Yeah, should be.
The gelbooru downloader doesn't seem to get source urls if there are more than one, instead I got "https://gelbooru.com/%7C" Here's the file: https://gelbooru.com/index.php?id=10118626&page=post&s=view Looking at the parser, it looks like they changed the url delimiter from " ", to " | " (unless the user can put whatever they want there, but I don't know how that works).
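For anyone patching the parser by hand in the meantime, here is a quick sanity check of the apparent new pipe-separated source format (the exact " | " delimiter is my guess from the post above, not confirmed):

```python
# Gelbooru's source field now appears to join multiple URLs with " | "
# rather than a plain space (assumption based on the post above).
raw = "https://twitter.com/example/status/1 | https://www.pixiv.net/artworks/2"

# splitting on the pipe and stripping whitespace recovers each URL cleanly
urls = [u.strip() for u in raw.split("|") if u.strip()]
print(urls)
```

A String Splitter set to " | " inside the parser's source-url content parser should do the same thing.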
https://www.youtube.com/watch?v=9QY0OJ8RHYE windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v577/Hydrus.Network.577.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v577/Hydrus.Network.577.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v577/Hydrus.Network.577.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v577/Hydrus.Network.577.-.Linux.-.Executable.tar.zst I had an ok week. There's a mix of small improvements, and some neat OS integration. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights Thanks to a user, we have some cool new OS integration this week. For Windows, you can now go open->in another program on a thumbnail to get the Windows open with dialog, and open->properties to get the normal Windows file properties window. Also, the various places that can view a file in your OS file explorer now work better, and are better about selecting the file(s). Users who run from source will want to rebuild their venvs this week to get this new stuff. The single 'edit time' panels, like the sub-panels for modified and archived time in the 'manage times' dialog, now accept any date string for the paste button. So, you can now paste something from your web browser or just 'yesterday' or something, and it should all work. This uses the same date parsing tech we have had success with elsewhere. I broke the 'media' help page into two and added some new checkboxes to set whether to duplicate the text of the different hover windows in the background of the media viewer. If you want a clean background, it is now easy, under the new 'media viewer' page. I fixed some more URL encoding stuff, this time related to brackets in parameter names, and buffed the 'clear orphan tables' database routine. 
If I have been working with you on either of these, please try your thing again and let me know how it goes. next week This is the last work week before my vacation, so I'll just do some simple cleanup work again.
>>15266 clean media viewer background stuff is great, thx
I recently hopped on the Stable Diffusion train and I'm wondering if there's any way to do some stuff with it along with Hydrus, mostly this: 1 - Import SD-created images directly to Hydrus while parsing and adding metadata as tags 2 - Using a Stable Diffusion auto tagger to tag images in batches (I think it's called DeepDanbooru or something) Right now I'm already importing all my gens to hydrus and deleting whatever is left in my SD folder, but I can't get any relevant info from them other than filenames, folder paths and creation dates. Any starting point for this? Topic 1 is the most relevant for me atm.
Is there a way to pull data (tags, links) from a booru for a previously deleted image and transfer them to a duplicate that exists without undeleting? For example I get an image from an artist's twitter and pixiv, but the twitter version is better so I delete the other using the duplicate filter, but then later the pixiv image shows up on a booru and I want to pull tags from it to the version I kept, but trying to download it, it will try to pair it with the deleted image and throws an error. Would be nice if the system automatically detected there is a better duplicate and transferred the data there. >>15268 >1 - Import SD-created images directly to Hydrus while parsing and adding metadata as tags https://github.com/space-nuko/sd-webui-utilities/blob/master/import_to_hydrus.py It's a bit old, but it still works, though there are some cases where it imports badly (if the image didn't have a negative) and It crashes for some images generated using comfyui. >2 - Using Stable Diffusion auto tagger to tag images in batches (I think it's called DeepDanbooru or something) Use this inside webui https://github.com/picobyte/stable-diffusion-webui-wd14-tagger You can have it run on a folder and it generates txt files with tags for every image, then you import those (both images and txt files) into hydrus and load the txt files as sidecars. This doesn't generate rating tags though (it can detect them in the ui, but they aren't saved into the txt files). There's also https://github.com/Garbevoir/wd-e621-hydrus-tagger which is the same thing but standalone and it works with hydrus, so you don't have to export your images, tag them and import back, which is slow. But I think you need to install the cuda toolkit as standalone or gpu mode won't work.
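If you end up scripting the import yourself, the A1111 'parameters' blob is just text, so a small parser gets you tag-ready strings. A rough stdlib-only sketch (the parse_sd_parameters name and the exact field layout are my assumptions; in practice the raw string comes from something like PIL's Image.open(path).info.get("parameters")):

```python
import re

def parse_sd_parameters(raw: str) -> dict:
    """Split an A1111-style 'parameters' string into prompt tags,
    negative prompt tags, and generation settings. (Sketch only --
    comfyui and other UIs embed metadata differently.)"""
    lines = raw.strip().split("\n")
    # the last line is usually "Steps: 20, Sampler: ..., Seed: ..."
    settings_line = lines[-1] if ":" in lines[-1] else ""
    body = "\n".join(lines[:-1] if settings_line else lines)
    if "Negative prompt:" in body:
        prompt, negative = body.split("Negative prompt:", 1)
    else:
        prompt, negative = body, ""
    settings = {
        k.strip().lower(): v.strip()
        for k, v in re.findall(r"([\w ]+):\s*([^,]+)", settings_line)
    }
    return {
        "prompt": [t.strip() for t in prompt.split(",") if t.strip()],
        "negative": [t.strip() for t in negative.split(",") if t.strip()],
        "settings": settings,
    }

sample = (
    "masterpiece, 1girl, solo\n"
    "Negative prompt: lowres, bad anatomy\n"
    "Steps: 20, Sampler: Euler a, Seed: 12345"
)
print(parse_sd_parameters(sample))
```

From there you could push the strings into hydrus via the Client API or write them out as sidecar txt files, same as the wd14 workflow above.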
I am wondering if any of the countless maintenance/repair/check/regenerate options should be used by the normal user from time to time because stuff actually goes wrong within the databases/caches sporadically/regularly, or are they just there for emergencies after hardware failures or unexpected user errors, for example deleting stuff in the windows file explorer from the hydrus directory. So if I don't have any hardware failures and use hydrus like "it is supposed to be used", I should not need those options at all, correct?
>>15266 >I broke the 'media' help page into two and added some new checkboxes to set whether to duplicate the text of the different hover windows in the background of the media viewer. If you want a clean background, it is now easy, under the new 'media viewer' page. I wanted that for some time, thank you! It is pretty good and a huge improvement as it is now, but if I can add a suggestion: I suggested adding an eye icon to the top hover menu in the media viewer in the past, for faster and easier access to those 5 checkbox options. I hope you didn't strike it from the wishlist yet <3. In my mind, you would press the eye icon and it would show you drop-down menus exactly like when you click on the file/tag domain buttons on the left in the hydrus search pane, with little checkmarks that you could activate/deactivate on the fly without going to the options menu. And as a second suggestion: right now you only make some of them invisible. Maybe you could add a "deactivate" box for at least the tag pane on the left. The reason is mainly that if you, for example, zoom into a picture and drag it with the mouse and the cursor appears over the now invisible tag window, it will pop up again, which covers the picture with the tag pane unless you move the mouse cursor away again. So the left side is kind of a hot zone that you want to avoid, which might get a bit annoying if you do that a lot. Since it is new I am not sure if it will annoy people, so we will see. The tag pane is quite big, so on this one it could happen somewhat often. So you might add a second checkbox to deactivate those fully (at least the tag pane) in the future, so everyone can set it up as they please. Thank you!
>>15269 Thank you for the resources! I can't try them out until sunday but just by reading through the links it seems like exactly what I needed.
>>15249 I don't know anything about fanbox so I cannot talk too cleverly, but your downloader here looks good to me. You are using subsidiary page parsers and stuff all correct. You might like to play around with help->debug->report modes->network report mode. Turn it on and run a request here, and you'll see a bunch of spammy info about which URLs and parsers are being loaded. Maybe there's a weird connection somehow? If that all seems fine, check the file log. You should be able to right-click on the import jobs and see which tags and stuff they parsed, so maybe you can compare the ones that work to the ones that don't and see the critical difference? Also, although it sounds dumb, this has caught people before, is there any chance the gallery download tab you are testing in here is an old tab and has custom tag import options or something? Right-clicking on the file log items should reveal this, since they'll have tags there, once processed, no matter what the tag import options are. >>15250 >>15252 >>15253 Sorry, I don't have time to make a new downloader workflow and interface to handle this. I like the idea, and several users have asked for similar (iirc, I think imgbrd-grabber works like this?), but I shouldn't try to promise it since it would be too much work for now. Your best answer is to work in your browser, or, if the site you are using has metatags like 'rating>3' or whatever, you might be able to shape your existing subs to filter out the most common 'bad stuff'. A similar thing, although it sounds stupid, would be to figure out what of their stuff you like or don't like and either add tags to the query to only get the good stuff (e.g. make the query 'artistname fishnets'), or play around with custom tag import options or something to blacklist what you don't. >>15251 This is on subscriptions, right? Can you go into your gelbooru subscription here and zoom in to the actual query and then the 'file log'? 
Scroll down to find the files that failed and you should be able to see the actual error messages--are you getting 500, or 429, or connection errors? >>15255 >>15256 Ah, thanks. Yeah, that's an interesting problem. Sounds like it is more a thing for legacy files. You might be able to root them out yourself, or verify you don't have any, by doing some searches like 'width <= 640, height <= 480, filesize > 4MB' or something? Or maybe try some clever sorts by 'file->approximate bitrate'. Probably a job for an external script that used the Client API though.
>>15250 In theory, you could make a parser that only gets thumbnails and links, then you copy the links of stuff you want and paste that into a normal downloader. You'd have to swap the parser manually every time though.
>>15257 Thanks, how interesting! I'll check it out. >>15258 "tags you don't want to see but still search by" could mean stuff like 'page:3' or 'chapter:7' or 'medium:ai generated' (and indeed many/all medium tags). Those tags can clutter your taglist when you are browsing around, but you still use them for other purposes including searching. But you are asking about separate services, so maybe this would include, say, a bunch of stuff you imported from your disk that came with tags from an old tagging system. You might want to add the 'old:fxt4-23' or 'imported from D:\processing\sonic x dbz\12' weird tags you used in the old system that mean something important to you (but perhaps are pending a conversion to something else), but you don't want to clutter your tag sidebar with them. You could import all those tags to 'my tags', but then they'd be mixed in with more normal stuff, so instead I'd say create a new local tag service called 'my old weird tags' and then you could make a blanket rule for that tag service in manage tag display and search to just say like 'hide everything'. This would be easier than trying to mix it all in 'my tags' and having many rules that targeted every tag type you wanted to hide. >Also i guess it is by design, that you cant activate two different tag services without having to activate the "all known tags" service, otherwise this would already be there? For example you cant activate SFW tags + NSFW tags together (or whatever you named ur tag services), only always one will work or "all known tags". Would it even make sense to allow it like with file domains? This isn't so much by design, it is just how things used to be for both file and tag services, but when I updated to the 'multiple local file services' tech, I added a LOT of code behind the scenes to allow people to choose an arbitrary union of file services. 
The tag search code is still on the old system of 'one service', with a special service that is the pre-computation of all of them unioned. I plan to extend tag search to allow an arbitrary service union, using exactly the same UI as for file services, but it'll take a lot of complicated db work to pull off. >>15262 >>15264 Yeah one day we'll support Ugoiras natively. We have a plan, we need to think a bit more and adjust the downloader and do a couple more things, but we see the way. My current suggestion is to hang on to them for now, and one day they'll just work nice. Fingers crossed in some years from now we'll have pleasant native conversion tech, too, so you'll be able to convert to webm or apng or whatever. >>15265 Thank you, I will check this out! >>15269 >Is there a way to pull data (tags, links) from a booru for a previously deleted image and transfer them to a duplicate that exists without undeleting? For example I get an image from an artist's twitter and pixiv, but the twitter version is better so I delete the other using the duplicate filter, but then later the pixiv image shows up on a booru and I want to pull tags from it to the version I kept, but trying to download it, it will try to pair it with the deleted image and throws an error. Would be nice if the system automatically detected there is a better duplicate and transferred the data there. Not really. I have a long term plan to add 'retroactive' duplicate merge, so I can say 'some years in the future this will be available as a one-click job', but for now it would be something you'd have to figure out with a script using the Client API.
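As a stopgap for the Client API route, a script along these lines can push tags onto the surviving file. Everything here is a sketch: the default port 45869, the placeholder access key, and the legacy 'service_names_to_tags' request shape are assumptions, so check the /api docs for your client version before relying on it:

```python
import json
import urllib.request

# assumptions: default Client API port and a placeholder access key --
# a real key comes from services->review services in the client
API_BASE = "http://127.0.0.1:45869"
ACCESS_KEY = "YOUR_ACCESS_KEY_HERE"

def build_add_tags_body(file_hash, tags, service="my tags"):
    # legacy 'service_names_to_tags' request shape (assumption -- newer
    # client versions may want service keys instead of names)
    return {"hash": file_hash, "service_names_to_tags": {service: tags}}

def add_tags(file_hash, tags):
    # POST the tag additions to the running client
    body = json.dumps(build_add_tags_body(file_hash, tags)).encode("utf-8")
    req = urllib.request.Request(
        API_BASE + "/add_tags/add_tags",
        data=body,
        headers={
            "Hydrus-Client-API-Access-Key": ACCESS_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# dry run: just show the request body we would send
print(build_add_tags_body("ff00", ["creator:someone"]))
```

The awkward part remains mapping the deleted file's booru tags to the kept duplicate's hash, which you would have to work out per pair.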
>>15270 Yeah, you can leave them alone. Sometimes there will be a miscount in autocomplete tag counts or a sibling lookup problem because of a logical error I made in the code, but most of those jobs are for debugging purposes or fixing database problems due to hard drive fault or stitching a weird backup situation together. When the error is something I did in code, I usually run the related maintenance job in the update code that week, so you don't have to do anything. If you promise to make a backup first, you are free to run any of the 'regenerate' ones for fun though. They should all be nullipotent to a healthy database. Try regenerate->local tags cache as a safe one that should be pretty quick. It isn't too exciting, but you'll learn the process. >>15271 >eye icon Thanks for the reminder. I will do this, and it'll be the first stub that we can hang all sorts of view and 'always on top' kind of options off. Also, adding full 'disable the actual hover window' settings is a good idea. I agree the tag hover can be a pain. I'm finally at the point where I'm happy with how the hover positioning code works, so I'm more open to adding customisation stuff. We might want to allow different widths and stuff, for instance.
>>15277 >Yeah, you can leave them alone. Noice! >>15277 >Thanks for the reminder. I will do this, and it'll be the first stub that we can hang all sorts of view and 'always on top' kind of options off. Also, adding full 'disable the actual hover window' settings is a good idea. I agree the tag hover can be a pain. I'm finally at the point where I'm happy with how the hover positioning code works, so I'm more open to adding customisation stuff. We might want to allow different widths and stuff, for instance. That's super cool. One of the biggest features missing for me right now would be the following: It would be nice if we could, for example, set the tag pane/window as "always on top" as you said, then change the width of the pane by dragging it, just like you can do with the search pane in the thumbnail view, to make very long tags completely visible. That's probably also what you were thinking. I don't know if the tech is there, but if you could also set the media in the media viewer to resize/rescale according to the widening of the tag pane (optional! maybe add another checkbox/checkmark), it would be completely awesome. Let's say you widen the tag pane to 50% of your monitor; that would mean the other 50% on the right side is the "new" available space of your monitor and the media scales according to that 50% space. And if you make the tag pane thinner again, the media would rescale on the fly to the new space available. I'm sure you know what I mean. Not sure though if the media should really be centered in the "new" area, or be attached leftmost in the new area next to the tag pane (so it kinda looks like on boorus) and then scale from there. Imagine a 50x1000 picture; that would have a lot of blank space between the media and the tag pane. Maybe also make that optional, like set the center to left/mid/right of the new area? It would really feel and look like different boorus, only better (which of course it is already ;). 
Having your tag pane always visible and adjustable (visibility/always on top/width) on the fly, while also having the media displayed fully and adjustable on the fly, would be a HUGE win. Without that rescaling thing, the tag pane would hide a lot of the media because it is "on top" of the media. Of course you could argue that the media wouldn't be centered anymore (boorus aren't either), but that's why the "resize/rescale according to tag pane/window size" option would be important and should be decoupled from "always on top", so everyone can set up the media viewer how they please within seconds. Not sure yet how the other hover windows should be affected by that resize, or if they should be affected at all. That would be on you to figure out in case you consider making this happen. I really would love that. Sorry for going overboard with my ideas :D. Maybe you planned some of it already anyway. Thank you also for taking the time to answer all the questions here. You are awesome!
For some reason any subscription I make for rule34.xxx is immediately dead and fails to find any files, getting 403 errors. I can load the site just fine in my browser and made subs for it as recently as yesterday. Did they just change something that's going to fuck up all my subscriptions to the site?
>>15274 >is there any chance the gallery download tab you are testing in here is an old tab and has custom tag import options or something? Thanks! Exactly that was the problem, lol. Works perfectly now, indeed uses the correct post parser, and adds all the tags I want. And I already have the next question. I know how to use regex, so I would like to use it to extract some google drive urls from patreon posts. I get as far as extracting the post's text (raw html) and sending it to the string processor. My regex works fine, but where do I put it? If I put it in "String Match", the whole string is matched, not just my regex capture group. "String Converter" only has "regex substitution", which removes the only part from the string that I want to keep. I need the inverse of that option, but can't find it.
I went to drag and drop a file to post it on 8moe, and as I did so, Hydrus vanished. It was instant, no freeze, no asking me if I wanted to wait for the program to respond, Hydrus just ceased to exist. Assuming it was just an unusual crash, I tried to open it again with the shortcut, only to be told the file the shortcut pointed to no longer existed. Hydrus_client.exe is just gone. "Hydrus Network\hydrus_client.exe" is where the file should be, right? Should I just run Hydrus.Network.577.-.Windows.-.Installer again? Everything else including my client files seems to still be there? Do I need to go to a backup? My last full backup was only a few days ago.
>>15283 Apparently Windows Defender antivirus thinks it's a trojan >Detected: Trojan:Win32/Wacatac.B!ml >Affected items: >Hydrus Network\hydrus client.lnk >hydrus_client.exe It seems this is a common false positive by Windows Defender. Could I have downloaded something else that then hid itself in my Hydrus files? Or should I just restore the files and assume it's entirely a false positive?
>>15284 It is a false positive. Additional Windows integration was added in this release using direct calls to Win32 and a new library. The client exe has been submitted to Microsoft directly and you can do that yourself too.
>>15285 Ah. I had an inkling it could have had something to do with >>15266 >Thanks to a user, we have some cool new OS integration this week. For Windows, you can now go open->in another program on a thumbnail to get the Windows open with dialog, and open->properties to get the normal Windows file properties window. but dropped the idea when I saw the likely false positive trojan notification.
I had a good week making some simple improvements before my break. There is also full support for animated webps! The release should be as normal tomorrow.
There is a matching field in "manage url classes", but there is no way to switch to the matching line or search for it. Even after deleting all of the mastodon and nitter classes, there are a few screens to scroll.
>>15144 >if you have html inside JSON, then yeah try a subsidiary page parser Okay, so I finally got around to trying it, but I have an issue. The content parser version of grabbing the text, applies the resulting note to every file that the parser grabs. When I tried changing it to a subsidiary parser, I am able to clean the html like you said, but now the note is counted as a separate file result, so it's not being added to any parsed files and basically the note is just being discarded. I want the subsidiary parser note to be added to every file just like the content parser version does, but I don't know how, because subsidiary parsers count as their own separate result, so I'm stuck.
https://www.youtube.com/watch?v=ieKs9G1YBl4 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v578/Hydrus.Network.578.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v578/Hydrus.Network.578.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v578/Hydrus.Network.578.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v578/Hydrus.Network.578.-.Linux.-.Executable.tar.zst I had a good week mostly doing some simple work. In a bonus, animated webps are now fully supported. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights Animated webp decoding is not widely supported, but we discovered a method this week and I plugged it into my old (slightly janky) native animation viewer. I have made it work just like for gif and (a)png, where, within hydrus, images and animations will count as different filetypes. On update, all your existing webps will be queued for a rescan. If they are actually animated, they will become 'animated webp' and get num_frames and a duration, and they'll play animated in the media viewer. Let me know if you run into any trouble with it! I added an 'eye' icon menu button to the top hover window in the media viewer. It has those five new checkboxes for 'draw stuff in the background' I added last week, and I expect to add 'always on top' and similar options to it in future. The 'known urls' media menu has a couple changes. It is now just called 'urls'; it gets a 'manage' command, moved from the 'manage' menu; and you can now open a URL 'in a new page' (i.e. opening a new search page with 'system:known url=blah'), so if you need to find all the files that have a particular URL, this is now just one click. next week I am now on vacation for a week. As normal I'm just going to disappear to shitpost fake-E3 and otherwise try diligently to achieve nothing of any importance. 
I'll be back Saturday 15th, with v579 on Wednesday 19th. Thanks everyone, see you later!
If I have Hydrus installed on a drive other than my C: drive, and I'm moving all my stuff over to a new computer with the same OS, can I just transfer the contents of the drive altogether without reinstalling? The drive is internal on a craptop, so I can't just hook it up to the new computer, nor do I want to, since it's an HDD and I'm switching to an SSD.
>>15292 bye bye have a good vacation!
>>15293 Yep, everything is portable: https://hydrusnetwork.github.io/hydrus/database_migration.html If you use the Windows installer, you can install to the new computer after the copy, just select the (new) directory you made as the install destination, and it'll figure the Windows uninstaller shit out. If you use the Windows extract release, then everything is 100% completely portable. Make a backup before you do anything!
>>15295 >Make a backup before you do anything! Of courshe. I make a Windows Disk Image Backup on an external drive every month automatically, and will do it manually before I begin transferring shit over.
>>15292 Enjoy your well deserved break Hydev
Does Hydrus have the option support different profiles yet? I want to use 2 different databases with separate tags and settings. I really don't want to have to run 2 different instances of Hydrus.
>>15292 >I added an 'eye' icon menu button to the top hover window in the media viewer. It has those five new checkboxes for 'draw stuff in the background' I added last week, and I expect to add 'always on top' and similar options to it in future. GOAT! Have a nice vacation! >>15300 Not that I can think of. For separate tags you obviously can add other tag services (which probably isn't what you want), but settings are set globally. Saving/exporting them for another hydrus folder/instance isn't possible yet either. Seems to be a not-so-easy problem to solve according to Hydev afaik. How does one run 2 different instances at all? Rename the .exe from the second Hydrus folder?
>>15292 Have a good break!
I have this twitter hashtag that I browse daily and select some posts to download. Can I add that to a watcher and just delete whatever I don't like? Just pasting the hashtag link (with "&src=hashtag_click&f=live") to a watcher doesn't do anything.
>>15301 You launch hydrus with the -d path_to_new_db_folder argument, and you can add that argument to a shortcut. This uses only one install, you can have any number of separate db folders you want this way. Shame you can't copy or compare settings between instances easily though.
Is there any way to turn off the cbz support? I'd rather zip files be shown as zip files right now since hydrus doesn't support viewing cbz yet. Now it gets a bit confusing when some zips have thumbnails while others do not. Not a big deal but still.
(242.07 KB 577x379 vncviewer_zmXAEs143l.png)

So i've been dipping my toes into home servers, nothing major, but after an unfortunate day where my power got cut twice in the span of 3 hours (and i foolishly had no UPS), hydrus appears to have some major difficulties. On initial startup I can still use it, and my downloaders are still mostly working, but any PTR processing results in this message (pic attached) eventually being printed in the terminal repeatedly, followed by massive slowdowns if I do any action beyond just basic file browsing. Any ideas on what to do? Also, is there any way to get a more verbose terminal/log output, or is this as detailed as it gets?
>>15279 Seems like r34 subs started working again, and when they did, the sub I had made that I wanted to only get 1 file at the beginning, as I had already manually checked what I wanted, instead got 100 files as the default regular check limit.
If there is real example text (not the placeholder "example text") in "edit path component" for "any characters", and no text for "fixed characters", the example text should probably be copied over when switching from "any" to "fixed".
How do i fix the MPV is unavailable error on kubuntu 24.04? apt install libmpv1 returns "Unable to locate package libmpv1"
>>15301 >How does one run 2 different instances at all? Rename the .exe from the second Hydrus folder? For me it works like 1 version of Hydrus on an external HDD and 1 version installed on my PC. Both run separately with zero issues. I think it helps that I use the portable version and not the .exe
I have a problem with gelbooru where trying to do a favorites gallery search fails. It looks like you need to be logged in to do the search, but even when I send the cookies to hydrus, it just redirects to the account home page.
Every day when I shut down my pc Hydrus starts a maintenance, is that normal?
When downloading stuff, is it possible to make a filename parser that gets everything between the file extension and the previous slash? Example: https://pbs.twimg.com/media/GPs1Xp1bkAAGqKB.jpg?name=orig becomes GPs1Xp1bkAAGqKB Check the .jpg and the first slash after it (from right to left?) and get only what is inbetween
I noticed the gallery downloader page default import options behaving differently than before. I have a persistent gallery dl tab where I paste md5 hashes to pull tags from boorus for pics I download from elsewhere into various file domains. It used to keep the files' domains unchanged, but now it also puts them into "my files", which is the default. It's also possible I had it set up differently before, because I recently closed the tab and made a new one, but I can't find any option related to this. Any ideas?
>>15321 (me) I'm attempting to do this to my files. What I'm trying right now is exporting every file that has a twitter link and adding a .txt sidecar after running it through this regex: ([^\/]+)(?=\.[^\/]+(\?|$)) and it seems to work (pic 2, regex test site). But whenever I try to export the sidecar, it doesn't work (settings in pic 1) What am I doing wrong?
>>15323 I think you're trying to process the raw sidecar json inside sources, you should move the regex to string processing outside the sources.
>>15324 Thanks for the heads up! Sadly it still doesn't work, even though I'm grabbing the correct URL and the regex works fine anywhere but in the processing steps window (pic 2). When I'm editing the processing step, it looks like it is working properly (pic 3)
>>15325 That's just matching I think, that means it will still take the whole string if your regex matches it. You should add String Converter instead of String Match, then add a Regex Substitution and add something like this to the regex pattern: ^https://pbs.twimg.com/media/([^\.]+)\..+$ and then this to regex replacement: \1 That should take whatever is between "https://pbs.twimg.com/media/" (which should be always the same) and the period.
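If it helps, the pattern suggested above can be sanity-checked outside hydrus with Python's re module before wiring it into the String Converter (the substitution semantics should be the same, since hydrus is Python under the hood):

```python
import re

# The pattern and replacement suggested above: capture everything between
# "https://pbs.twimg.com/media/" and the first period, replace with group 1.
pattern = r"^https://pbs.twimg.com/media/([^\.]+)\..+$"

url = "https://pbs.twimg.com/media/GPs1Xp1bkAAGqKB.jpg?name=orig"
print(re.sub(pattern, r"\1", url))  # GPs1Xp1bkAAGqKB
```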
>>15326 Wow, it worked! Thank you so much!!
>>15327 No problem. Also maybe check if all your twitter links have the same format. I remember a link like https://pbs.twimg.com/media/GPs1Xp1bkAAGqKB.jpg?name=orig can also be https://pbs.twimg.com/media/GPs1Xp1bkAAGqKB?format=jpg&name=orig which should be exactly the same quality. But if you've only used the recent twitter downloader, you should be fine.
>>15328 Nice tip, I'll double-check them. Recently I redownloaded a bunch of twitter files so it should be alright, but it's worth looking into!
>>15322 Ok, so apparently it does it for the danbooru downloader (which I didn't use much until now), but gelbooru is still fine. How come? I don't think there is any setting in parsers and other components that would force a file service.
(174.71 KB 1366x768 2024-06-12_03-57.png)

>>15301 >How does one run 2 different instances at all? Rename the .exe from the second Hydrus folder? >>15304 >You launch hydrus with the -d path_to_new_db_folder argument, Here a screenshot with an example. A pic is worth a thousand words.
How many tag siblings and tag parents are in the PTR? Are they small enough that you can export just those and upload them somewhere? I won't have 100GB available for a few days, so if someone's already got the PTR in action, I'd appreciate it if you could let me know. --- I'm asking because I'm making an alternate front-end to some boorus which effectively don't have aliases (e.g. rule34.xxx). At first I looked at the r34 aliases, but so many are trash (e.g. aliases that work, quote, "9 out of 10" times, troll posts, implications in the wrong order listed as aliases) that I decided to check danbooru, which is much better and far, far easier to process, but it still has glaring mistakes (e.g. https://danbooru.donmai.us/forum_topics/26068 where over 20 users saw absolutely no problem making `goth_girl` alias to `gothic_fashion`, so searching `goth_girl` matches goth boys; e.g. `dog_walking` aliased to `pet_walking`, so it returns children on leashes). I also see tags which would have many different manifestations on rule34.xxx end up with no aliases or implications pointing at them whatsoever. I'm not sure if this is because it caters to a more disciplined userbase who actually adhere to basic tagging guidelines, or if the userbase is so different the subject didn't get explored.
Is it possible to treat a veto as an error? There's a site I use where, upon there being a temporary error such as a 509 or 502, instead of returning the error directly, it loads the page just fine, but then in the place of the image is another image stating the error. It's annoying, particularly in the case of 509 errors. I'd like hydrus to treat that url as a "limited bandwidth" error, so it pauses the downloader for a little bit before trying again and continuing if it works.
has anyone else noticed that saucenao seems to never return posts made from at least december 2023 onward? I use saucenao a lot with hydrus, so this is pretty frustrating. I can't find any info on what's going on, and the status for the pixiv index says that everything's fine.
>>15338 I have no problem finding recent pics on saucenao. https://danbooru.donmai.us/posts/7661375
>>15338 >>15339 Though, you seem to be right in a way, it only ever returns gelbooru and danbooru.
>>15291 >now the note is counted as a separate file result I don't have this issue. Is your subsidiary page parser creating multiple results? You should only be creating one result. You could also post your parser so we could look at it.
(60.29 KB 956x623 The Boning.png)

So I fear the boning may have come for me. I noticed the error report on the left upon waking up this morning, and after closing [which may have been a mistake, I am not sure, I am an idiot], now I am unable to actually re-open it at all due to the errors on the right, which have slightly different wording before and after a reboot. I require assistance on how to proceed from here.
>>15339 right, that's what I meant. Gelbooru works fine. Pixiv doesn't
(7.19 KB 512x154 kemono post api parser.png)

>>15342 okay here it is. It's a modified version of the kemono parser (but there are some other downloaders I want to do the same thing with, if I can figure this out). It has 2 description parsers: the "content parser" version, which is properly attached to the file results but has the raw html, and the "subsidiary parser" version I tried to make, which cleans up the html but is counted as its own separate file result, so it doesn't get added to the actual file results and is instead just discarded. The subsidiary parser version doesn't do anything but take the html description from the json, then use a html content parser to clean up the html. I understand that it's because, by being a subsidiary parser, Hydrus treats it as distinct from the other subsidiary parsers that get the files, instead of adding the result of this parser to those files like the content parser version does, but that's why I'm stuck. If this is the way that you're supposed to clean html before adding the text as a note, then I'm not sure what I'm doing wrong. I don't know how to get Hydrus to treat this the same way it treats the content parser version. This is my first time ever making a subsidiary parser though, so I could be making a simple mistake.
>>15345 Couldn't find a way to link the files with the description, so I tried the stupidest idea. I merged all the subsidiary parsers into one using a compound formula, where each json is encapsulated in some made up html tags, then I sent those into html subsidiary parsers, which then unpack those into json parsers again... Not sure if I got everything, so check it out just in case.
>>15264 >>>15263 >Yeah, should be. It returns irrelevant JavaScript. There is some captcha cookie.
>>15345 >other subsidiary parsers Oh, I see. There's other subsidiary parsers in there. I can't think of a way to solve that. >>15346 Cool idea. It may look silly, but it works. It looks good to me. Just one minor thing: there's an extra unnecessary subsidiary post parser. Under the "files and description" subsidiary, and then under the "files" subsidiary, there's another unnecessary "files" subsidiary that does nothing (it has 0 parse rules). You can move its content parsers up one level.
(4.01 KB 512x126 kemono post api parser.png)

>>15348 >Just one minor thing: there's an extra unnecessary subsidiary post parser. Under the "files and description" subsidiary, and then under the "files" subsidiary, there's another unnecessary "files" subsidiary that does nothing (it has 0 parse rules). You can move its content parsers up one level. You seem to be right. I added the extra subsidiary, because I thought I needed to convert back to json again, since the first level uses a html formula, or because the results still weren't split and it would throw me a not a json error, but I guess I fixed that at one point and the extra step wasn't needed anymore. Anyway, I just tried to move the content parsers up a level and it still works exactly the same.
>>15278 >then change the width of the pane by dragging it >Im sure you know how i mean it. Yeah exactly. I always want more user customisability. Lots of programs offer these 'modular' UIs now, but when I think of this tech, I think of Eclipse, the IDE engine. This was back in the days when Java UI was new, but the way it all snaps together and is so customisable and extendable is great compared to what we have in hydrus. At the moment, the hydrus UI is dealing with multiple legacy issues, from my early code all being non-flexible hardcoded garbage to a jump from the wx engine to Qt some years ago that I still haven't cleared up. I want this, but I cannot promise anything I am happy with any time soon. I'll keep chipping away at my bad old code though (often in the 'boring cleanup' changelog sections every week), and slowly add new features. Let me know how it works for you as I roll this stuff out. >>15279 >>15314 Sorry for the trouble. If you search '403' in this and previous threads, you'll see some conversation about CloudFlare, which tends to be the explanation for this stuff. >>15282 I don't have nice tools yet for proper regex operations, but I think you can blag your way through to the underlying python regex engine in that regex sub rule with '\1', like pic related (forgive the weird test string, I just pasted something from my IDE, but you can see the idea). Use parentheses to capture a group and then use \1 as your 'replace'. I want this to have better UI and previews and stuff in future, especially as I copy the StringProcessor system to the filename parsing/tagging system (which has an even older, shitter regex tool!) >>15283 >>15284 >>15285 >>15286 Sorry for the trouble! I wonder if it was that new Windows properties/open with thing that triggered this, although I'll say that drag and drop has nothing to do with that, so I'm not sure. We've been hit with several of these over the years, and unfortunately there isn't a lot I can do about them. 
More here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#anti_virus Running from source is always a good thing to try, if you are worried, although if you trust that my code is good, then if there really was a virus in the exe, then that would mean github Actions had injected it, and the whole internet and world, I imagine, would be worrying about more than a little image database program.
Just remembered to say: what a pile of shit E3 was this year! We joke every time that it is terrible, and I enjoyed myself hanging out with other Anons calling it shit, but man what a poor showing. >>15290 Great idea, thanks. >>15291 >>15342 >>15345 >>15346 >>15348 >>15350 Sorry for how awkward this is. I originally made subsidiary parsers to fix one weird problem (multi-file thread watchers on the first flexible 8chan parser), in a sort of stealth debug mode, and now, years later, we find ourselves using it for all sorts of stuff, all without proper documentation. I will make a job to write some better help for how this works, with a diagram (I can see it in my head, even if I can't explain it well) of how the data propagates, and improve the UI. I also have a long time job to catch up on soon to make HTML/JSON switches possible without a subsidiary parser. >>15303 Not really, unfortunately. Since Elon took over, the twitter 'search' APIs have locked down almost completely (they did this because they were getting hammered by LLM trainers scraping), so unless you have $5,000 a month you cannot programmatically search hashtags or user accounts (beyond some hacks to get the 10/20 latest tweets). We have decent support for single tweet downloading though, so I recommend either drag and dropping the tweet URLs onto hydrus as you browse, or bundling them newline-separated in a sticky note and pasting them in one go into a new 'urls' downloader. >>15306 Soon™, with luck. I know another guy who doesn't like the CBZ stuff either and I'm just going to add some options, probably a global one and/or something in file import options, that just disables the non-zip filetypes. >>15313 Interesting! I am sorry for the trouble, and I am not totally sure what is going on here. When that transaction shit occurs, you would usually get a slew of unpleasant error messages. 
It sounds like your database files have some malformation and the error being raised here (presumably something to do with transaction/savepoint) is either too quiet for SQLite to raise a full error on, or my code happens to silence it in an awkward location and then is handling the recovery badly. In either case, I think the source of the problem here is that one or more of your db files has a bit of damage from the power cut. The hard drive was probably writing when the power went out and a block got replaced with 00000 or something. So, I think your next step is the 'install_dir/db/help my db is broke.txt', which has a bunch of background reading and some maintenance jobs to check how your db is. Try checking your integrity, and maybe try cloning everything (although if you sync to the PTR this will take a long time and some db space). If you still have the problem after a full clone, that suggests the problem is not in the db file (could even be something like a joggy hard drive cable that is causing ongoing I/O issues, but we'll see). Let me know how you get on!
>>15351 >>15282 Forgot my pic related.
>>15315 Thanks, great idea. >>15318 I do not know, but if you use Hydrus Companion, see if you can sync your 'User-Agent' as well as your cookies. Some Cloudflare stuff cares about that too to verify a login. >>15320 Can be, it depends on the client. Hit up options->maintenance and processing to change when and where hydrus does its 'idle' maintenance work. If this work you are presented with only takes like two seconds, then I think you can disable the yes/no box and just have it do it without bothering you. >>15322 >>15330 I changed up recently the visibility of whether a downloader page's individual queries have the same import options as the main page, which is not exactly your problem here but it may be related. You have default import options. A downloader page may use the defaults or may have something specific set. The queries in a downloader page may also use the defaults or have something specific set. Now, for your issue, the 'file import options' does set where your files are supposed to import to (under 'import destinations'). Check what your defaults are under options->importing. I suspect your different pages here have had custom file import options either beforehand or now, and this differed with your default file import options. Sorry for how un-user-friendly the display is here, I am working on it! >>15336 About 330,000 sibling pairs and 30,000 parent pairs. You can import/export them en masse with the tags->migrate tags dialog (fast, advanced), or import/export via the manage siblings/parents dialogs directly (slow, easy). Although it is still a little wasteful since you'll be downloading like 10GB of update files and bloating your client.master.db, I recommend syncing with the PTR and in the review services page, you can just hit a pause button on the 'mappings'. Your client will sync just the siblings and parents. 
If you only want to grab them one time and really don't need to sync, I'd say just grab this pre-processed release (stick it on a slow HDD with lots of free space if your SSD is short the 100GB) and then boot it one time and export to whatever format you need: https://breadthread.duckdns.org/ >>15337 I am sorry to say I do not have a nice solution here. I can't handle vetos differently since it would break other things, nor do I want to wang it with hacky options to change its behaviour. What we really need is a new sort of content parser that says 'hey if you see this, raise 403' etc., so we can grab these custom server errors better. We also need better 'try again later' tech for various bandwidth connection errors. So, I would like to help your situation, but it will take longer because I want to do it properly.
>>15343 Oh dear, this is not good! That 'disk I/O error' can be very bad. I have seen that error occur for things that were not so serious, but it can also mean your actual hard drive is having trouble. The later 'unable to open database file' one is similarly very serious, and if you are getting one and then the other seemingly at random, that suggests your drive is actually having problems right now. Either your database file has had serious damage or your hard drive is currently serving bad data. Stop writing anything important to this drive and read install_dir/db/help my db is broke.txt. These errors are serious enough, I am sorry to say, that there may not be a recovery. If you have a backup, this is the time to roll back. Let me know how you get on. If there is a very broken/truncated client.db file at the end of this process, or a client.mappings.db broken but everything else good, I can help you stitch it all back together. Your most important priority now, though, is making sure your hard drive is healthy and securing any backups you have.
>>15354 >Now, for your issue, the 'file import options' does set where your files are supposed to import to (under 'import destinations'). Check what your defaults are under options->importing. I suspect your different pages here have had custom file import options either beforehand or now, and this differed with your default file import options. My default destination set in options is to "my files", which makes sense that the danbooru downloader puts anything it download in there, but why doesn't the gelbooru downloader do that? It simply downloads stuff and doesn't put it to any new file service. So for example if I paste a hash of a file in file service "A", it will redownload the file and keep it in A, when I paste a hash of a file in "B", it will redownload and still keep it only in B despite the defaults being set to "my files". This is what I want btw. The gallery downloader tab is a freshly created one where the only thing I change is bundling queries into one when pasting, the import options for both the whole tab and for individual galleries are set to all default (pic). Then I duplicate the tab and only change the downloader to danbooru. Maybe it's because I set gelbooru as a default downloader and that somehow makes it behave differently?
>>15354 >>15356 I tried to replicate this in a clean db, so you can try it yourself. Here are the steps: >go to options and enable booru friendly hashes in files and trash, then set default download source to gelbooru tag search in downloading >create 2 new local file domains: danbooru and gelbooru >create a new url downloader tab and change the import options import destination to gelbooru and download this: https://twitter.com/yihan1949/status/1724432880422572407 >then change the import destination to danbooru and download this: https://twitter.com/yihan1949/status/1679113635510259717 You should now have two pics where the first is only in the gelbooru file domain and the second only in the danbooru file domain. >create a new gallery downloader tab and keep it as it is, copy the md5 hash of the first pic and paste it in >create another new gallery downloader tab and change the gallery type to danbooru tag search, then copy the md5 hash of the second pic and paste it in Now first pic will still only be in the gelbooru file domain while the second pic will be in both danbooru and my files.
>>15352 >We have decent support for single tweet downloading though We do! It's great, honestly. Thank you. For now I'm using a Firefox extension to grab the tabs I select and then I can just ctrl+V directly into the downloader, it's not bad at all.
>>15358 >For now I'm using a Firefox extension to grab the tabs I select please share
>>15355 So I wanted to bring a partial update on how things are going right now. For starters, what I failed to mention is that I run a decentralized Hydrus install, with the media files and database/client split over multiple different drives. I was able to, a day or so before getting your response, access the client again by using the --db_journal_mode truncate compatibility option, as per Hydrus' documentation. I have spent the evening following the directions of the db is broke file; so far, chkdsk reported a seemingly[?] small number of drive errors which it was able to correct, and a second, subsequent chkdsk reported no further failures. I have, in the meantime, been running sqlite integrity checks on the database files, with client.db, client.caches, and client.master all returning simple "ok" results, which I assume makes for a good sign. I currently have a client.mappings scan in progress, which has been taking many hours, but the text referred to it taking this long as normal, and it seems to run, according to task manager, at a fairly average speed of 14MB/s or so on an SSD [the other db checks ran at something between 15MB/s and 20MB/s]. The only further information I have to add is that this is actually the second client.mappings integrity check I've run, the first one seemingly interrupted halfway through the night because fucking windows this &%($#&@ OS decided that it wanted to take THIS exact night to reboot and update without telling me about it. And that the original error, here >>15343 , happened to come up after the first time in I don't know how many years where the "suspend" option was used to interrupt the OS session while Hydrus was an open application. I do not know if either of these facts offer any relevance, but hopefully it provides a more complete picture of the situation. 
I will return with a further report once the mappings check is concluded, whenever that may be, but I'd appreciate input on whether the partial results so far paint an encouraging picture of the situation, or the opposite.
>>15355 >>15364 Status update, client.mappings.db has also returned a simple "ok" from the integrity check
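For anyone following along, the integrity checks being discussed here can be scripted instead of run by hand. A minimal sketch, assuming the db files live in a folder called `db` (adjust the path to your install_dir/db); the read-only open is a deliberate precaution on a possibly damaged database:

```python
import sqlite3
from pathlib import Path

def integrity_check(db_path: str) -> str:
    # Runs SQLite's built-in integrity check; returns "ok" on a healthy file.
    # Open read-only so we never write to a possibly damaged database.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute("PRAGMA integrity_check;").fetchone()[0]
    finally:
        conn.close()

# Hypothetical folder name -- point this at your install_dir/db.
for name in ("client.db", "client.caches.db", "client.master.db", "client.mappings.db"):
    p = Path("db") / name
    if p.exists():
        print(name, integrity_check(str(p)))
```

Close the client first; the check on client.mappings.db will still take hours on a full PTR sync, same as running it manually.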
(21.34 KB 451x332 2k8amf.jpg)

>>15365 Today's is just a scare, tomorrow will be an apocalypse. It is time to back everything up. Just in case.
Just returned here to possibly update Hydrus. The AI nigger talk had me worried. Even the way some retards have written the word is retarded. Then again, see the discussion about sites and content above. The very mention of shit34 speaks volumes. So does the talk regarding cancerous scraping. I saw that support for a cancerous image format has been added recently. I just extracted the ZIP file. 700+ MB, almost 3500 files. This was expected. It's all quite tiresome. Thank you for past versions.
>>15366 Yeah, I do have a backup that's a few weeks old, but I'm hoping that I won't have to rely on it. The instructions text file isn't 100% clear to me on how to proceed in case all database integrity checks return ok results, so I'm just holding off on using the system further, hopefully until I can get some word on whether I should try accessing the client as usual [sans compatibility setting] or if I should take some other further precaution before then.
Hello, I am relatively new to Hydrus and a dunce for computer related stuff. After doing a clean install, the program keeps crashing while viewing animated files (mp4, gif, webm). After doing some research I assume it is MPV related, as when I select the integrated viewer I can watch the files just fine (minus the sound). Updating to v578 made the crashes more consistent, as the program now keeps crashing while viewing the 3rd animated file. It does not matter if I watch file A-B-C or A-B-A. I am on Windows 10 and used the installer for v571 and v578. I used to have an old version (best guess around v500-v520) but did not use Hydrus much until recently and did not have any problems with it. Can I downgrade to an older version, or is there a workaround?
>>14270 Is there support for parent/child post relationships (not tag relationships, to be clear)? I couldn't find anything on the UI that indicated the case, and the booru I'm mirroring has a system to identify child/parents of any one post. At the moment I'm just creating a namespace for it, but is there a better way that I'm just too stupid to see?
>>15373 Hydrus doesn't have a parent/child file relationship system, but there might be a solution depending on what you're looking for. What exactly do you want parent/child posts for? What does it mean to you that file A is the "parent" of file B? Is it that file A is a base image and file B is a variation (like a different hair color or clothing etc.), or is it that file A comes before file B in a sequence (like pages of comics), or something else?
>>15373 >>15374 It kind of does, using the duplicate system, but it's not very user friendly and not used for anything other than the duplicate filter at the moment; you can still utilize it, though. You can kind of emulate a parent/child relationship by marking the files as duplicates, with one of them being the best one (the parent), without deleting any of them. You do this by selecting your files, then right clicking the one you want to be the parent in the selection and going to manage > file relationships > set relationship > set this file as better than the X other selected. Then you can view the files in the group by right clicking one of the files, going to the same menu, and selecting view X duplicates or show the best quality file in the group or whatever other thing it offers you; this should open a new tab with all the files in the group. You can also filter which files have a duplicate group set up using the file relationships system predicate, but it will only show the files that match the whole filter.
Is there any way to parse URLs from the filenames? I have some old files with IDs in their names; it would be convenient if I could get the URLs directly from them without having to re-download everything.
>>15376 Either through the api somehow, or you can temporarily export your files into a folder along with sidecars containing the filenames, then change them into urls using find/replace across multiple files (with notepad++ or a script or something) and import them back as urls. Or do you mean you have the files already outside hydrus, named with the ids? You'd have to somehow make the txt sidecars with the urls inside using a script, or import into hydrus with the filenames as tags, then export again with the tags as sidecars and basically do what I said earlier.
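For the "files already outside hydrus" case, the sidecar-generating script is short. A sketch, assuming the filename (minus extension) is the post id; `URL_TEMPLATE` is hypothetical, swap in your booru's real post URL format, and check the sidecar filename pattern against what your hydrus import sidecar settings expect (`<name>.<ext>.txt` here):

```python
from pathlib import Path

# Hypothetical URL template -- replace with your booru's post URL format.
URL_TEMPLATE = "https://example.booru/post/{file_id}"

def write_url_sidecars(folder: str) -> int:
    # For every media file whose name (sans extension) is a post id, write
    # "<name>.<ext>.txt" next to it containing the reconstructed URL.
    # sorted() snapshots the listing so new .txt files aren't re-visited.
    count = 0
    for f in sorted(Path(folder).iterdir()):
        if f.is_file() and not f.name.endswith(".txt"):
            sidecar = f.with_name(f.name + ".txt")
            sidecar.write_text(URL_TEMPLATE.format(file_id=f.stem) + "\n")
            count += 1
    return count
```

Then import the folder with a sidecar route that sends the txt contents to "urls", and hydrus will associate them as known urls on import.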
(93.38 KB 843x617 2024-06-18_03-24.png)

(78.67 KB 893x645 2024-06-18_03-26.png)

>>15370 >is there a workaround Hell yeah! Install from source. It works like a charm every time. https://github.com/hydrusnetwork/hydrus/releases/tag/v578 https://hydrusnetwork.github.io/hydrus/running_from_source.html
>>15377 > import into hydrus with the filenames as tags, then export again with the tags as sidecars and basically do what I said earlier. I'm already doing this, but it feels pretty inconvenient so I wonder if there's a better way to do it. Thanks anyway.
I had a great week back into things. I improved UI quality of life with some better list workflow and regex editing, cut down on import folder inefficiency, and fixed an annoying problem in the known URL checking logic. The release should be as normal tomorrow.
>>15370 I do suffer from this, too. Not exactly the 3rd file, but some animated file, in preview mode. Occasionally, just highlighting the file makes hydrus freeze until a forced shutdown happens. Something broke my db a year ago, but I could fix that and over the different patches, the db is fine again. This problem, too, somehow became somewhat worse than in the past. Crashes feel like they happen more frequently. My session weight is abysmal, by the way...
>>15380 Only through scripting using the api, there's a hydrus python library for it. You could ask chat gpt to make you the script if you don't know how. I tried something similar before and it knows it.
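If you'd rather not pull in the third-party library, the Client API can be hit directly with the stdlib. A sketch under assumptions: the endpoint name (`/add_urls/associate_url`), the `hash`/`url_to_add` fields, and the access-key header are my reading of the Client API docs, so double-check them there; the port is the default, and the access key placeholder is hypothetical (get yours from services -> review services):

```python
import json
import urllib.request

API = "http://127.0.0.1:45869"          # default Client API address
ACCESS_KEY = "YOUR_ACCESS_KEY_HERE"     # hypothetical -- paste your real key

def associate_url_payload(sha256_hash: str, url: str) -> dict:
    # Request body for /add_urls/associate_url (field names per my reading
    # of the Client API docs -- verify against the current documentation).
    return {"hash": sha256_hash, "url_to_add": url}

def associate_url(sha256_hash: str, url: str) -> None:
    # POST the payload; urlopen raises on a non-2xx response.
    req = urllib.request.Request(
        API + "/add_urls/associate_url",
        data=json.dumps(associate_url_payload(sha256_hash, url)).encode(),
        headers={
            "Hydrus-Client-API-Access-Key": ACCESS_KEY,
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    # Hypothetical example values: a file hash and its reconstructed URL.
    associate_url("0123abcd" * 8, "https://example.booru/post/12345")
```

You'd loop this over your id-to-hash mapping; the Client API also needs the "add urls" permission enabled on the key.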
>>15373 >>15374 >>15375 Child/parent relationships on boorus are more similar to 'alternates' in Hydrus, I guess, which you can read about here: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_examples_alternates I suggest reading the whole page, including duplicates. I assume alternates are much less known in hydrus compared to duplicates, because they don't really get mentioned, like in the options or when opening a new special page etc. Also, you only get the option to set alternates when you have selected more than one file at the same time. So when you select two or more thumbnails, you do the following: right click -> manage -> file relationships -> set relationship -> set all selected as alternates Once an alternate group is created on those files, you can select one of them and go to the 'file relationship' menu again, which will contain a new submenu 'view X alternates', which in turn will open a new page with all the alternates. The difference with boorus, I think, is that all the alternates are equal in hydrus. There is not one parent and several children (if I remember these booru systems correctly). But neither is better or worse imo. It depends if there is a clear parent at all or not. Also, I wouldn't use the duplicate system for that kind of stuff personally, like >>15375 mentioned. Duplicates are more suited for finding better/worse quality versions of the same media. Read the link above, it has good examples. Alternates can also be searched with the file relationship system predicate. But this is a good starting point to discuss the idea of having more right-click groups of that sort, which I will ask Hydev about in a later post.
https://www.youtube.com/watch?v=qzTwBQniLSc windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v579/Hydrus.Network.579.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v579/Hydrus.Network.579.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v579/Hydrus.Network.579.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v579/Hydrus.Network.579.-.Linux.-.Executable.tar.zst I had a great week mostly working on UI quality of life. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights There are several places where you can enter regex (clever text search rules) in the program. I have written a nicer text input widget and spammed it everywhere. It has several improvements: it colours green/red depending on whether the current text will compile; its menu button collects better tutorial links; and the one in the String Converter regex replace now shows how to do (unnamed) or <named> group replace. Many multi-column lists across the program have new scroll-to tech. If you import a new parser now, or edit one and change its name, then the list will scroll to the new item once done. Selection preservation through various updates is also improved. The 'manage URL Classes' dialog test input will also select and scroll to the matching URL Class. There's plenty more to do, but I've ironed out a bunch of ugly behaviour. I've improved some 'hey I think we already saw this URL before' logic, fixing an awkward false positive result. If you have had incorrect 'already in db' on slightly different versions of a file on a booru, and you traced it to a bad source url, they should work in future. Import Folders now work much more efficiently. Due to some stupid old code, they were wasting a bunch of CPU, particularly when the import folder was very large. 
If you have big import folders that are normally laggy to load, let me know how your background latency feels this week--it might be that I fixed some weird jitter/hangs here. I folded in the library updates from the 'future build' test we ran a few weeks ago. There were no problems with the test, so Linux and Windows users get a slightly newer version of Qt, and Windows users get new sqlite and mpv. Let me know if you have any boot problems! I also fixed a weird recent problem with the 'requirements.txt' that users who run from source rely on to set up their environment. There's a new version of numpy out that breaks some things, so if you tried to set up a new venv in the past few days and could not boot, please try again with this. next week Some more slightly-larger-than-small jobs. I have some weird sibling problems to fix, PTR janitor workflow to figure out, and awful sidecar UI to work on. I'd also like to get back to the duplicate auto-resolution system, but I think I'll be out of time.
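The regex widget behaviour described in the highlights (green/red compile check, unnamed and named group replace) boils down to standard Python `re` semantics; a minimal sketch (the function name is illustrative, not Hydrus's actual code):

```python
import re

def regex_is_valid(pattern: str) -> bool:
    """Mirrors the widget's green/red state: does the pattern compile?"""
    try:
        re.compile(pattern)
        return True
    except re.error:
        return False

# unnamed group replace: \1, \2 refer to capture groups by position
swapped = re.sub(r'(\d+)x(\d+)', r'\2x\1', '1920x1080')   # '1080x1920'

# named group replace: \g<name> refers to a (?P<name>...) group
named = re.sub(r'(?P<w>\d+)x(?P<h>\d+)', r'\g<h>x\g<w>', '1920x1080')
```

An unbalanced pattern like `(` fails to compile, which is exactly the case the red state catches before the String Converter ever runs the replace.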
Hello Hydev, I want to report that it really does seem that minimizing Hydrus to the system tray through the upper right X button is causing the client to freeze. I can leave the client open all the time, lock the screen, go to sleep etc., and I am always able to maximize the client again after minimizing it through the 'file -> minimize to system tray' menu. It has never frozen that way. Now I tested it again: after having the client open for some days, I minimized it through the upper right X button and predicted that it might freeze when I did. And that's exactly what happened. I had to close it through Task Manager. I don't know if that helps you find a fix, but to anyone who experiences that crash: try minimizing through 'file -> minimize to system tray'. I have some more bug reports/requests if you don't mind. Text Error: database -> regenerate -> the hover tooltips of 'tag siblings lookup cache' and 'tag parents lookup cache' both say 'delete and recreate the tag siblings cache'. The latter should say 'parents cache', I assume. Bug/Feature Request 1: 'Right-click -> manage -> file relationships -> view X alternates/duplicates' only appears in the thumbnail view, not the media viewer. I think this is not intended. Let's say you have sorted/collected the thumbnails with 'collect by X' and get one thumbnail with a collection of 1000 pics. You open it and browse through them with the media viewer, then you see a pic whose alternates you want to see. You have to right-click -> 'open' -> 'in a new page' first (since closing the media viewer brings you back to the collection thumb and not the specific pic), and from the new page you can look up the alternates/duplicates by going through the right-click menu process again. So it would be faster to also have the possibility of opening the alternates/duplicates from the media viewer.
BUT I kinda feel that the 'view X alternates/duplicates' menus should be in the 'open' menu instead of the 'manage' menu, since you don't really manage them with that option, you open them. The 'set file relationship' option should stay in 'manage' imo. Bug/Feature Request 2: When I 'sort by tags: number of tags' (button above the search bar), it currently only sorts correctly for the 'all known tags' tag service, no matter if I change the tag service to 'my tags'. 'My tags' sorts the thumbs the same as 'all known tags', but since the files have a different number of tags there, they are sorted wrong. I assume that in the background the system only considers 'all known tags'. Now I thought that the little experimental 'tags' sort button (only visible in advanced mode), where you can change the tag service for sorting, might be responsible for making the sorting correct. But this also doesn't work. Hydev, you once said this button is experimental, so I guess that's why it isn't working currently? >>14626 >That thing is pretty experimental, I think it is hidden behind 'advanced mode', right? It changes which tag service the collections math works on. I did it with a user who has a very complicated namespace collection system, but we didn't go beyond this. I'll fix the checkbox thing, thanks. Just leave it on 'all known tags', which is the default, unless you want to go crazy. Maybe you can make it work in future if it isn't too much work? It would be nice to sort by the number of my own tags. Or maybe I'm doing something wrong, idk? I know you can filter for a low/high number of tags with 'system:number of tags', but that's filtering, not sorting, so not the same. >>14810 >WARNING FROM HYDEV: Although hydrus has 'page' tech, and I even did this incremental tagger thing last week, I don't really like how hydrus handles paged content.
I've never been good at handling it, hydrus is optimised strongly for single media, so: please do play around with this tech on a couple of comics, but don't jump in with both feet and import a hundred comics and commit to hydrus 100%. As a new user particularly, you might find it a complete fucking mess, and I broadly recommend just staying with ComicRack or whatever, which will have nice bookmarks and all the stuff you want for actually consuming a comic. Just as a thought experiment, since you don't like how it is handled, I was thinking about whether it would make sense to be able to open a "group/collection" from, let's say, a thumbnail that belongs to a comic or series, from the right-click menu, just like alternates. Let's say you browse through random content and see a pic that should belong to a "group" like a comic: right click -> open -> group -> (here you can choose groups/collections of this file). The groups 'alternates/duplicates' should exist by default if there are alternates/duplicates. Other groups could be created, like just 'Comic' or 'Comic - The Death of Superman'. I'm not sure whether the names of the groups should be unique and exist only once in the whole client, or whether there could be a thousand groups named 'Comic', or what is technically required because of the database tech. You could group seasons of TV shows for example, name them 'Season 1/2/3' etc., making tons of 'favorite' groups without having to create rating services or rely on tags. Of course you would have to see a thumbnail first to be able to open the respective group, or you could create a new system predicate and allow searching for group names additionally. But tbh, I am also not sure if this is even required, or if I will be very happy with how Hydrus handles it at the moment once my client actually gets enough content. I can't speak from experience currently, so sorry if that sounds very inefficient to people with much more experience with Hydrus.
For comics it should theoretically be fine, since you could tag all the pages with, for example, a 'title:The Death of Superman' tag, but with paged content that doesn't have an official title, I am not so sure. You would have to come up with your own title tag to make the search efficient. I always wanted to know how people handle 2 or 3 files of paged content that doesn't have a name, like little comics. Do you guys give them a random title tag, or set the relationship as alternates (even though they aren't technically alternates)? Maybe this 'group' idea is also against what you (Hydev) wanted to avoid in the first place. Maybe it would resemble having named folders in Windows too much. Maybe it would blow up the database more than I would think, or maybe barely? But since there is already an 'alternates' group of sorts, why not allow creating more groups and allow us to name them (unique or not)? :P Would love to hear your thoughts. Have a nice week and thanks for the just-posted update!
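On the 'sort by number of tags' request a couple of posts up: conceptually, sorting that respects a chosen tag service could look like the sketch below (a toy data model I made up for illustration; Hydrus's internals are different):

```python
def sort_by_tag_count(files, service='my tags', reverse=True):
    """Sort file records by how many tags they have on one tag service.
    'all known tags' is treated as the union of every service's tags."""
    def count(f):
        if service == 'all known tags':
            return len(set().union(*f['tags'].values()))
        return len(f['tags'].get(service, set()))
    return sorted(files, key=count, reverse=reverse)

files = [
    {'name': 'a.png', 'tags': {'my tags': {'samus'},
                               'PTR': {'samus', 'metroid', '1girl'}}},
    {'name': 'b.png', 'tags': {'my tags': {'link', 'zelda'},
                               'PTR': {'link'}}},
]
# 'all known tags' ranks a.png first (3 distinct tags vs 2);
# 'my tags' ranks b.png first (2 tags vs 1) -- the distinction the bug report is about.
```

The reported bug is essentially that the sort always uses the 'all known tags' count even when another service is selected.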
>>15385 >The 'manage URL Classes' dialog test input will also select and scroll to the matching URL Class. Yay. >If you have had incorrect 'already in db' on slightly different versions of a file on a booru, and you traced it to a bad source url, they should work in future. I don't really like it being added as a source url. What if the booru file is a smaller version, a gallery, or the source url is to another file on a booru?
>>15387 >to another file on a booru? I mean a file that this file is derived from, not a copy.
>>15313 >>15352 15313 here. I realize there's someone else in this thread with seemingly a very similar issue to mine, so I want to make it clear I'm not that guy. I didn't state this before because I completely forgot about it like a moron, but before I noticed the massive slowdown and the transaction errors, but AFTER the multiple power outages, I moved Hydrus to a new SSD. I got a new M.2 SSD for Hydrus, and I initially tried to just copy/paste my main Hydrus folder over. However, I had an "input/output error" with the client.mappings.db database. I thought maybe the file was just too large to copy reliably (it was sitting around 56 GB), so I opted to try cloning the disk instead with BalenaEtcher. This worked fine, and Hydrus seemed to boot normally off the new SSD. Fast forward to me trying to sync to the PTR, noticing it slowing down immensely, noticing the repeated transaction errors in my terminal, and then making my first post here. I read "help my db is broke.txt" and ran integrity checks on all the databases, and they all returned ok. Remembering my previous issue with client.mappings.db, I decided to just clone that one anyway to see if that would fix the problem. Hydrus then just returned a "database disk image is malformed" error. I ultimately ended up using ddrescue, which gave me a slightly smaller file. I tried copying this back into my db folder, but it also gave me a malformed disk image error. And in fact, now my original client.mappings.db file causes the malformed disk image error as well, so I guess now it's truly fucked. I do have a couple of backups: one from before I moved Hydrus to my home server about 6 months ago, and a newer one from before I cloned Hydrus to the new SSD. However, this newer one was made after the multiple power outages, and I can confirm it still has the "input/output error" when trying to copy it. If I point the Hydrus launcher at the old drive though, it does actually boot (if extremely slowly).
At this point I'm not sure what to do. All I have are a bunch of questions and immense concern. Is it possible that when I cloned the drive, I cloned some sort of bad sector from the original HDD that was affecting the file? Can I use an old client.mappings.db in place of my corrupted one? What sort of data loss would that entail? Should I try dualbooting into Windows to run chkdsk on the original HDD? Alternatively, should I try a Linux method of some kind? Any advice would be appreciated. The only reason I don't just start over with the original Hydrus install from the HDD is that I had imported roughly 10k files between the clone to the SSD and now. It's not a major loss in the grand scheme of things, but I'd rather that be an absolute last resort option.
>>15370 I remember a while back there was an old bug that caused crashes when trying to select animated files. It was a simple fix, something relating to mpv, but I can't remember the full details. This was way back in the early v500s like you mentioned (if not maybe the late v400s). The dev quickly fixed it in the next update though. My guess is something probably didn't click or go through when you did a clean install. Sorry I'm not much help, it's just that your issue sounds very similar to an old issue I had a while back.
hi chief! how are you doing? could anyone tell me if there's been anything major after 542? that's the one i'm currently using
>>15364 >>15365 >>15369 Further status update just to round out this sad comedy: the all-signs-point-to-healthy SSD just catastrophically died [have I mentioned that I am an idiot?]. I was able to get the data there moved out, and will be shuffling things around to other drives in the next few days, but that's not important. I have only the one important question: once I am done shuffling this data around, would it be okay for me to run integrity checks on the database files I salvaged off the dead drive this morning -again- to check if they're functional? And if the integrity checks pass okay, can I consider them capital-H Healthy and overwrite my backups with them as usual, or should I abandon this ship completely and see if I can restore the weeks-old backup I have on my separate external instead? Apologies for my cluelessness and the constant back and forth it has caused.
>>15393 >the all-signs-point-to-healthy SSD just catastrophically died F
>>15378 Gave it a try and it did not work for v578 because of some numpy issue, but it worked for v579 (mentioned in the v579 changelog). Sadly I still have the issue with mpv (tried out "new" and "test" in the advanced setup). After provoking some crashes, the following line seems to always be present in the most recent client log after a crash: "[...]\hydrus-579\venv\Lib\site-packages\mpv.py:911: RuntimeWarning: Unhandled exception on python-mpv event loop: a bytes-like object is required, not 'str'" >>15382 In the preview I can switch between a few (~5) files before it crashes, so I disabled the preview. In the viewer I'm still stuck crashing after the 3rd file though. >>15390 Funnily enough I never had a problem with the old version(s)
(516.49 KB 872x720 Y9q3GfW.png)

>>15397 >Funnily enough i never had a problem with the old version(s) Actually, it is to be expected. It sounds incredible, but it's true. The library maintainers (DLL files on Windows), in their questionable quest to introduce new features, rewrite those libraries all the time; then incompatibilities and crashes pop up, and software developers like our devanon have to rush to try to fix THEIR mess. Note: software developers depend on those third-party libraries (like "mpv") to write their software and are not responsible for their bugs.
is there some way to achieve this yet: https://github.com/hydrusnetwork/hydrus/issues/634 so that images with a tag are hidden by default until that tag is explicitly searched for?
>>15400 I don't think so, but as a workaround I can think of, you could create a new file service and MOVE (not copy) all files with the tag/tags you want to hide into the new one(s). Then, inside the file service button (the one where you choose your file service -> trash, all my files, etc.), use 'multiple/deleted locations' and uncheck the boxes for the services you want to hide, while keeping the ones you want to show checked. But maybe there are smarter ways I can't think of, and that's probably not how you would want to do it.
>>14340 This needs the "name" and "trip", too. "name" can contain things like "/r/" and "/d/" for requests and deliveries. I just added "4chan name" and "4chan trip" locally, but there is a 4chan-style parser, which needs different tags.
>>15370 >>15397 So I tried some more things and noticed that I completely avoid crashing when I deliberately arrange a page alternating between static pictures and animations (5 of each). Slideshows at 5s, 1s, and 'very fast', as well as manually scrolling while in the viewer, did not result in a crash; same for the preview. Seems like a possible short-term workaround would be to somehow force-load a picture after each animated file, if the next one is another animated file. >>15398 Would be nice if they could maintain some compatibility with older versions, some opt-out thingy or some kind of legacy release every now and then. Must be stressful as a developer to keep up with all the changes.
>>15386 >I always wanted to know how people handle 2 or 3 files of paged content that doesn't have a name, like little comics. You guys give them a random title tag or set the relationship as alternates (even though they arent technically alternates)? Personally, I always set them as alternates, and if they're more than perhaps three or four pages I give them a simple title. Hydev has said in the past that the "alternates" part of the duplicates system is kind of a holding area, since it encompasses a lot of different things: these little groups of paged content, messy versus clean alts, WIPs versus full pieces, etc. At some point, "alternates" might be expanded on and have more specific names for different kinds of relationships. >grouping files In my opinion, the best solution is tech within hydrus to easily turn a series of images into a single zip/cbr that can be viewed without external programs. It doesn't make sense to tag each and every page of a 25-page doujin separately. It doesn't even make sense to tag both pages of a 2-page twitter comic separately. These images should live and die together and be treated as a single piece of media.
>>15404 Sorry if you already mentioned it, but did you ever try checking on/off the checkboxed in the 'media playback' options (was called only 'media' not long ago i think in older version)? On top and bottom there are several checkboxes that you might reverse what's already there. Maybe one of those is causing your crashes. I would change one then see if it is still crashing. But restart hydrus after each change to be absolutely sure.
Are there other tag repositories besides the PTR? Especially for /pol/ related content.
(208.10 KB 419x424 thxfjcgzdrggnf.png)

>>15409 >Especially for /pol/ related content. That is very tempting... to be swatted by zogbots.
>>15356 >>15357 >So for example if I paste a hash of a file in file service "A", it will redownload the file and keep it in A, when I paste a hash of a file in "B", it will redownload and still keep it only in B despite the defaults being set to "my files". This is what I want btw. Aha, thank you, I think I see now. At the moment, the import pipeline is not sophisticated enough to handle what you want here, I think. Basically, the 'already in db' state stops all further import, no matter what it says in 'file import options'. So, in your case: (file import options set to import to A) - File is new to database -> 'new' -> added to A -> 'successful', now in A - File already in A -> 'already in db' -> no changes made -> it stays in A - File already in B -> 'already in db' -> no changes made -> it stays in B I will have a think and see if I can get the 'already in db' state to apply to missing destinations in the file import options. >>15364 >>15365 >>15369 >>15393 Sorry to hear about your issues. I know how it feels. >would it be okay for me to run integrity checks on the database files I've salvaged off the dead drive this morning -again- to check if they're functional, and if the integrity checks pass okay, can I consider them capital H Healthy and overwrite my backups with them up as usual If an integrity check turns up 'ok' when reading off a drive you trust, and preferably if you can boot into that client and do some searches and everything looks fine, then yes I think that is a trustworthy backup. Since you are in a delicate period right now, I think I'd say keep an extra copy of your original backup, not the media but just the four .db files, in a safe place for a few months while you get yourself sorted here. Good luck and let me know if you need any more help. I'm amazed that TRUNCATE was able to move things forward here. Perhaps the very serious errors you were getting were from failing I/O rather than actually damaged files. 
If you have good working .db files now, that's fantastic. Also check this out when the immediate pain in the ass of this has passed: https://hydrusnetwork.github.io/hydrus/after_disaster.html
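The 'already in db' short-circuit described above, plus the proposed fix (applying the file import options destination to files that already exist), can be modelled as a small decision function (a toy sketch with made-up names, not the real pipeline):

```python
def import_file(file_hash, current_domains, destination,
                apply_dest_to_existing=False):
    """Toy model of the import pipeline's 'already in db' handling.
    current_domains: set of file services the hash is already in.
    Today: any existing copy short-circuits the import and the
    destination is ignored. Proposed: optionally add the missing
    destination anyway."""
    if not current_domains:
        return 'new', current_domains | {destination}
    if apply_dest_to_existing and destination not in current_domains:
        return 'already in db (destination added)', current_domains | {destination}
    return 'already in db', set(current_domains)
```

With the flag off, a file already in B stays only in B even when the options say "import to A"; with the flag on, it ends up in both, which is the behaviour the anon is asking for.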
>>15367 Yeah the macOS App is now >1GB, it is totally ridiculous. I think we have three copies of ffmpeg because of different library imports. I don't want the program to be so bloated, but it is python and we use an absolute ton of libraries for all the different tech, so PyInstaller just wraps it all up and bundles it in. Good luck using older hydrus, and I know some hydrus users have been working on hydrus-like projects that are intentionally slimmer or command-line only, so maybe some other solutions can come your way in future. >>15370 >>15382 >>15397 >>15404 Thank you for this report, and I'm sorry for the trouble. I hate it when the program crashes. It is usually my fault, because my code is shit in many places and I do something UI-related more dangerously than I should, but tracking down the exact issue is often very tricky. That said, mpv tends to be pretty stable on Win 11 these days, so I think I can suggest that this problem is more on the side of the dll and your OS. Running from source definitely helps here, and I think you might like to try replacing the mpv dll in the base install directory with something else. I get them from here: https://sourceforge.net/projects/mpv-player-windows/files/libmpv/ So you might like to try one of the older ones, from 2022 or so, and see if that helps your stability. You could also try downloading an older Hydrus and pulling the mpv dll out from there. Note: - mpv dll name has changed over the past couple years. We now use 'libmpv-2.dll' or 'mpv-2.dll'. You would be deleting that and adding mpv-1 or libmpv-1 or libmpv-2 or mpv-2 or whatever you have from the older archive. Hydrus should be able to load it, no matter the name, as long as there is only one to load. - If you do this with the installer/extract, that's fine, but remember every time you update the new install/extract is going to replace your custom mpv dll with the normal one from the release. 
Either remember you need to switch it out every time, or run from source, where the mpv dll is all your responsibility. All that said--and I would be super interested if you discover a dll that works stably for you--this is also suspicious: >So i tried some more things and noticed that i completely avoid crashing when i deliberately arrange a page alternating between static pictures and animations(5 of each). That is definitely touching an area of bullshit in my code. Because of some crashy stuff related to destroying unused mpv windows, hydrus actually has a 'pool' of mpv windows and swaps them in and out every time you switch from one vid to another. You are touching this here, so even if it is the dll's 'fault' that you are getting a crash, it would be ideal if I had more DEBUG options around this behaviour so you could pull some levers without having to fuck around with custom dll files. Anyway, let me know how you get on! Stick with the silent native player if you can't get anything working, sorry in advance!
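The "as long as there is only one to load" rule for the swapped-in dll can be sketched as a small helper (hypothetical function, not hydrus's actual loader; the name list comes from the dll names mentioned above):

```python
# Historical names for the mpv library dll on Windows, per the post above.
MPV_DLL_NAMES = ('libmpv-2.dll', 'mpv-2.dll', 'libmpv-1.dll', 'mpv-1.dll')

def find_single_mpv_dll(filenames):
    """Return the one mpv dll present in an install dir listing.
    Raises if none or more than one is found, since an ambiguous
    mix of old and new dlls is exactly what breaks loading."""
    found = [f for f in filenames if f.lower() in MPV_DLL_NAMES]
    if len(found) != 1:
        raise ValueError(f'expected exactly one mpv dll, found: {found}')
    return found[0]
```

This is why the post says to delete the shipped dll before dropping in an older one: leaving both in place leaves the loader with an ambiguous choice.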
>>15411 >Basically, the 'already in db' state stops all further import, no matter what it says in 'file import options'. It's exactly the opposite, except in one single case (maybe more), for some reason. >I will have a think and see if I can get the 'already in db' state to apply to missing destinations in the file import options. I mean, that's how it works almost everywhere already. The problem here is that with gallery downloaders it sometimes does and sometimes doesn't, based on the selected downloader. Just try the test I prepared for you and you'll understand. If you are ever going to fix or modify the behavior, I think a good feature would be adding a checkbox to import options that would say and do something like "import to existing file domains and ignore the selection below if file already in db".
>>15373 >>15374 >>15375 >>15384 Yeah, in general, 'alternates' are the place for non-dupe file relationships now. My dream ideal is to have something basically like how danbooru handles file parents, and then iterate on that to add more semantically rich relation metadata, like recognising (and even sorting) WIPs or messy/clean, but it'll be a huge job, as much work as duplicates was. 'Alternates' is just the 'landing zone' for unsorted related files until I get this work done so we can properly categorise and display this information. I had hoped to start this work this year, but I'm stuck in the mud with fifteen medium-size jobs right now. We'll see how the second half of the year goes. >>15380 When sidecars are less of a UI shitshow, I want to write a new 'internal metadata migration' dialog that uses the sidecar 'metadata source' and 'metadata destination' panels and none of the actual sidecar file stuff. The idea is you'll be able to suck up all the URLs for x y z files and spit them back out as tags or vice versa, or timestamps to notes, or anything else you can think of, and for any metadata type I add to sidecars, like ratings or whatever. This has been the plan from the beginning, but the whole system is still so user-unfriendly, I need to figure that out first, stuff like test panels and previews of output. Hopefully I can fit some of that in this week. >>15387 >>15388 Whether to include all source URLs is a tricky question. In general, my experience has been that the internet is full of all sorts of different crops and optimisations of files, and false positives/bad mappings will get into the file store no matter what you do, and thus it is impossible to aim for a perfect system. Given we will have a dirty storage, I'm generally of the opinion that trying to track how dirty a particular URL is (e.g.
by storing primary and secondary URLs as separate types) will mostly just make a system more complicated than the value it provides, and also, on the other hand, having lots of sources is useful in several ways, so we probably do want to suck up what we can. So, I generally lean on the side of 'add all generally useful info' and throw it into the pot together. My downloader logic tries to separate the wheat from the chaff when it actually has to make predictions across the mess. I fixed that edge case last week, and I'm very happy with it, since that problem has been bugging me in various ways for years. Given a mature client that has seen files from several different sites, it tends to muscle through bad mappings, and you end up with multiple copies of the same file, including the best or near-best, which can be a problem for the duplicate filter in future. As to your actual question here, if you want to turn off source urls, just hit up your default 'file import options' under options->importing, then under the 'post-import actions', uncheck 'associate (and trust) additional source URLs'. That will basically remove their parsing from the downloader.
>>15411 >If an integrity check turns up 'ok' when reading off a drive you trust, and preferably if you can boot into that client and do some searches and everything looks fine, then yes I think that is a trustworthy backup. >keep an extra copy of your original backup, not the media but just the four .db files, in a safe place for a few months while you get yourself sorted here. Thank you, I will follow this guidance and only return again if anything turns out to go awry with the integrity checks after the databases have been moved from their current temporary storage to a new virgin drive. On another note, thank you kindly for your work, hydev. I really appreciate Hydrus and all the effort that has gone into it, and it has legitimately made me rethink and improve a lot of my practices, not only with data/media curation and safekeeping, but also my standards for the software I use and its documentation in general.
>>15413 Thanks, I'll make sure I reproduce your example so I see fully what is going on. I agree that checkboxes to alter the behaviour are the way forward here. >>15389 Hey, I am sorry for the trouble. You are the expert on your situation, but it feels from what you have written that you have some serious hardware problems here. When databases start giving different errors at different times, just like the other anon had been having, that can mean that the drive itself is having trouble and is basically giving the OS different data at different times. A loose cable can do this, where random sparks from the bad connection cause I/O errors and cause some sectors to be voided when actually the disk platter itself is fine. This error differs from 'my drive was doing hard work when there was a power cut and now there is a scrape on the platter'. So, I think your top priority is not to try getting your hydrus database working, but to absolutely totally ensure that your disks are fine. Run all the crystaldiskinfo type software you are familiar with and make sure everything is reliable, even if something in the actual hydrus databases is broken. Even if the disk is not broken, is the motherboard on the fritz? Is your OS currently sperging out with unexplainable 100% CPU and a damaged dll somewhere? Check that sort of thing first. For your actual questions: >Is it possible that when I cloned the drive, I cloned some sort of bad sector from the original HDD that was affecting the file? I don't think so. A clone is a completely fresh file made by reading the readable data from the source and writing it anew. It isn't a block-for-block copy. A clone performed on a healthy drive should always be uncorrupted. A clone that is swiftly malformed strongly suggests a damaged hard drive. >Can I use an old client.mappings.db in place of my corrupted one? What sort of data loss would that entail? Yes, but there might be some problems.
If you go through with this, run database -> check and repair -> repopulate truncated mappings tables. It isn't perfect, but it'll mostly fix you up. >Should I try dualbooting into Windows to run chkdsk on the original HDD? Sure, if you have that set up easy. Do everything you know how to do, but ideally I'd say don't try to use the original drive. If it is in a USB caddy or is otherwise no longer the C: drive, that's great. But imagine the drive is being connected as 'read-only', even if it actually isn't--we don't want to use it. >Alternatively, should I try a Linux method of some kind? I don't know enough about Linux. But if Wine or whatever fake windows terminal you have available lets you run chkdsk, and I sort of imagine it would, then I'm sure that's fine too. If you have new files that your backup doesn't, check out 'help my media files are broke' in the same 'db' dir as 'help my db is broke'. It talks a bit about how to merge and sync all that stuff. Let me know if you need any extra help figuring anything out, or if you do the client.mappings.db swaperoo and get some error popups (even if that happens in two years, which can sometimes happen with db files so big). >>15392 It has mostly been usability work for the past year. Lots and lots of little improvements and UI fixes and stability and new buttons and stuff, but no major system overhauls. I recommend updating unless you are on a super old OS. I'm doing great! Buried in an avalanche of to-do work as always, but I'm still finding it all worthwhile.
>>15386 >i want to report, that it really seems that minimizing hydrus to system tray through the upper right X button is causing the client to freeze. I can leave the client open all the time, lock the screen, go to sleep etc. and always i am able to maximize the client again after i minimized it through the 'file -> minimize to system tray' menu. Never has it frozen that way. Thanks. I will make sure I put time into this this week, without fail. >Text Error: database -> regenerate -> the hover tooltips of 'tag siblings lookup cache' and 'tag parents lookup cache' both say 'delete and recreate the tag siblings cache'. Should be saying 'parents cache' for the latter i assume. Thanks, will fix. >Bug/Feature Request 1: 'Right-click -> manage -> file relationships -> view X alternates/duplicates' only appears in Thanks. I have slowly been merging the two menus. For a long time, just because I was bad at coding, these two had separate hardcoded menus. I'm slowly breaking them into bits and ensuring both places call the same merged code. That complicated file relationships menu will get merged in the future, not sure when but it will happen. Ideally both menus will both have exactly the same commands, where appropriate. Once all that shit is merged and modularised, I'll be able to think about some better user customisation of all that stuff. Nothing dramatic, but maybe some reordering or something. >BUT i kinda feel that the 'view X alternates/duplicates' menus should be in the 'open' menu instead of the 'manage' menu Yeah I hate trying to figure this stuff out. I'm terrible at UX and so often I'm throwing these new features in wherever they fit, and then three years later and it is a real mess. Again, I'm slowly rethinking my grouping/verbs/nouns here as I go through them. I moved the 'manage urls' from 'manage' to 'urls' just recently, and I think I like it. The file relationships menu is still an abomination, ha ha ha. 
There's like seventeen different things that can appear depending on what you right-click on. >Bug/Feature Request 2: When i 'sort by tags: number of tag' (button above search bar), it is only sorting correctly for the 'all known tags' tag service currently Thanks. I keep meaning to overhaul the sort/collect widgets. They are cluttered right now; I'd love it if it were all more dynamic so I could cram things in more efficiently. At the very least, I think it needs a cog menu button or something so I can fly-out the less-used options we'd want to add here but not leave them visible all the time. I also want to cram in secondary sort somehow here. I don't want to make a modal dialog, but maybe that's the KISS answer. For your specific question about sorting 'num tags' by tag service, the answer is going to be to add the same tag service button you see when you sort by namespace->blah-blah-blah. This works the same as the experimental tag sort you see for the collect-by, but applies to sort. I can't remember how to get it to display, but I thought there was a second, even more complicated tag-sort button or submenu that lets you change collect-by by storage/display contexts, but that's a step beyond and really is still experimental. Anyway, I'll see what I can do. > i was thinking about if it would make sense to be able to open a "group/collection" from lets say a thumbnail that belongs to a comic or series, from the right click-menu just like alternates Hell yeah, you are describing basically what I want to see. I want every thumbnail to know its file relationships (atm they are loaded on a single-file basis from the db on every right-click), and then be able to present itself in different ways. Sometimes you want to see page 17 of a comic, but mostly you want it to be part of a chapter. That chapter is a single media object that just happens to be multipage. 
The same is broadly true of five different clean/messy versions of an image--you'll often want to see them as a contiguous sorted multipage unit while still having the option to see and track and tag and search them separately. So, ultimately I am planning to: A) Add CBZ/CBR browsing tech so hydrus understands what a multipage file is and how to browse it. B) Add 'virtual comic' tech that lets you compile a fake cbz out of single files and present that collection like it were a CBZ. C) Integrate a bunch of this into the alternates->file relationships expansion. These three systems are interrelated and will share a lot of tech, so they will all happen in fits and starts simultaneously, starting with proper CBZ browsing. A two-page comic is similar to a WIP is similar to a 300 page volume. They all have related files and a certain order. I want you to be able to browse through your files and run across page 17 of a comic and then be able to 'rotate' the media viewer perpendicularly and suddenly browse through page 16, 15, 14, and then continue back on the carousel you were on originally. Much like how in danbooru it will show a parent file link and preview in a normal file post, I want more visual feedback in the media viewer and the thumbnail view that files are parts of file relationship collections and let you quickly see them. So, same deal if you see a picture of samus that's 25% messy, you should be able to quickly hit a different left/right navigation and see the 0% and the 50%, and then go back to what you were seeing before. All this stuff should just werk, but it'll obviously be a huge lift getting there. Also we have questions about unsorted related files, like costume variants of an image or image pools. I'm open to it all, nothing yet is set in stone, but user-configuration will be foremost. Just got to put the current five dozen fires out and then reserve six months to make it the primary big job I'm working on.
>>15397 >"[...]\hydrus-579\venv\Lib\site-packages\mpv.py:911: RuntimeWarning: Unhandled exception on python-mpv event loop: a bytes-like object is required, not 'str'" btw this is interesting. The newer mpv dlls fucked with some core API responses recently and the guy who makes that mpv.py had to release a patch to address it. I wonder if this is happening for some other things--that's what that looks like. Not my code, but another area that using an older mpv dll might magically fix. >>15400 Not really. Tags are super complicated, and I don't think there is a way to make reliable tag hiding tech. There are just too many ways for weird edge cases to leak. This is basically, as >>15401 points to, why I made multiple local file services. That does let me very securely and reliably separate tag suggestions, so you can have a local file service for your family photos or whatever and you can type into an autocomplete focused on 'family photos' service while your sister looks over your shoulder and not worry about the results that come back. Making a 'nsfw/sfw' file service split is not uncommon. I don't like some of the workflow around here, so if you try it, let me know how it works for you. >>15409 >>15410 Vaguely related, but almost every 'character:ebola-chan' on the PTR was added by me, ha ha ha. There are some other tag repositories, but all private (i.e. just between a few friends) as far as I know.
>>15416 >Thanks, I'll make sure I reproduce your example so I see fully what is going on. I agree that checkboxes to alter the behaviour is the way forward here. Oh shit, I think I actually figured it out. You were kind of right about 'already in db' stopping further import being the default behavior, which actually happens if the downloader recognizes the file before downloading (like recognizing the url) and skips downloading it entirely. In the case of the gelbooru downloader, it recognizes the md5 hash (since the parser has an md5 content parser), skips downloading the file if it matches, and just gets the tags, so the import destination is ignored. The danbooru parser also has an md5 content parser, but it doesn't seem to be working, so it downloads the file, which is put in whatever is selected as the import destination, and only then recognizes the file as 'already in db'.
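The flow described above can be sketched like this (a toy model with made-up names like FakeDB and import_post; the real hydrus downloader pipeline is far more involved):

```python
# Toy sketch of the pre-download md5 recognition described above.
# All names here are hypothetical, not hydrus's actual code.
class FakeDB:
    def __init__(self, known_md5s):
        self.known_md5s = set(known_md5s)

    def has_md5(self, md5):
        return md5 in self.known_md5s

def import_post(md5_from_parser, db):
    # If the parser produced an md5 and the db recognises it, the file
    # is never downloaded at all, so (before the v580 fix) per-import
    # destinations were simply never consulted.
    if md5_from_parser is not None and db.has_md5(md5_from_parser):
        return 'already in db (download skipped)'
    # Otherwise the file is fetched, lands in the selected destination,
    # and only afterwards may be flagged 'already in db'.
    return 'downloaded'

db = FakeDB({'e5af57a687f089894f5ecede50049458'})
```

With a working md5 content parser the first branch fires; with a broken one (the danbooru case), every post falls through to the download path.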
>>15393 >the all-signs-point-to-healthy SSD just catastrophically died This is why I don't trust SSDs for long term storage: they can fail harder than HDDs. If an HDD fails, you can often still recover your files and even physically repair it; if an SSD fails, that's it. They're great for running your OS and games, but for long term storage, just stick to HDDs. There are other issues with SSDs too from what I hear, like leaving one unpowered for long periods possibly causing file corruption or something. It sucks, because SSDs and NVMe drives are everything you want out of a drive, but at a higher cost. Alternatively, you can just get an extra HDD and set up Hydrus to always be cloned to it whenever you use it.
>>15370 >>15404 Me again. >>15412 Good news, i no longer have mpv related crashes with either the installer version or the built-from-source version, but i have no clue why. To summarize: -Win10 -Python 3.11.9 -Hydrus v579 source files -tested mpv-1.dll (~2021), mpv-2.dll (~2022) and libmpv-2.dll (~2024) -simple install No crashes with any of the .dlls, it just works. Just to see what happens i updated the v578 installer version (with all my stuff) to v579 using an installer again - no crashes either. Again for fun i used the same v578 installer in another location and again no crashes on a fresh v578. Thanks for the guidance/assistance everyone.
Drag and dropping img links from 4chan returns a 403 error for me now
>>15422 Nothing quite like waking up to the unearthly sound of your HDD making a noise like a car door being fed through some giant paper shredder, though, which is definitely one of the ways it can decide it wants to go and I'm pretty sure it has meaningfully reduced my potential lifespan last time it happened
>>15427 Yeah, but if you try hard enough there's a chance that the disk will start up without noises, giving you the opportunity to transfer the files to another disk.
>>15419 Yep, fixed the md5 parser and it's now behaving as the gelbooru downloader, ie. it will detect the file before downloading and won't add it to any new file domain. Here's the clipboard for it: [30, 7, ["md5 hash", 15, [27, 7, [[26, 3, [[2, [62, 3, [0, null, {"class": "image-container"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]]], 0, "data-file-url", [84, 1, [26, 3, [[2, [55, 1, [[[9, ["^.+(\\w{32})\\.\\w+$", "\\1"]]], "https://cdn.donmai.us/original/e5/af/e5af57a687f089894f5ecede50049458.jpg"]]]]]]]], ["md5", "hex"]]]
the idea's great but why the fuck is it written in python? are there any plans on rewriting it in a faster language?
>>15430 >why the fuck is it written in python? It's a GUI program, not video game that needs to run at 120 fps. >are there any plans on rewriting it in a faster language? If you're asking this I don't think you know anything about programming.
>>15430 Hydrus is open source with the DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE, Version 3, May 2010. Feel free to re-write it in C or C++, or if you are one of those obnoxious sodomite troons, in Rust.
(299.70 KB 2560x1440 DVs2Ofd.jpeg)

(417.00 KB 640x640 based department.gif)

>>15417 >almost every 'character:ebola-chan' on the PTR was added by me Based.
Hey dev, I want to fix a few of the core included downloaders. Should I just upload the PNGs ITT with patch notes? Or would you prefer a pull request on GitHub and a link to it?
(11.44 KB 512x192 hydrus.png)

>>15434 Here it is: >create generic shimmie gallery parser >remove dead test links, add official demo instance and a featureful Danbooru*-themed instance for test variety >use the nav Next link instead of paginator Next (paginator Next not present if booru uses a Danbooru* theme) >change tag selector to ignore Related Tags (if enabled on booru, see test case) >make namespaces ('Tag categories') work >fix broken MD5 hash regex for rule34.paheal.net <i am not able to test rule34hentai.net due to Cloudflare, please test before merging Assuming it works for rule34hentai.net, this makes the paheal and rule34hentai parsers redundant, so they can be deleted.
(1.93 KB 132x20 24-21:09:48.png)

This is a super minor thing, but I don't think this is supposed to happen. I don't see anything in the logs so I assume nothing is severely bugged.
>>15389 >>15416 I booted into Windows and ran chkdsk on the old drive. I don't know if this was the case before, but afterwards the client.mappings.db file on the old drive reported a size of 0KB, so I think that file is just completely toast. Considering that, I decided to just go grab the old client.mappings.db file from my other PC from before I moved Hydrus to my server PC. And now I think it's been fixed? I ran the "repopulate truncated mappings tables" option, which came back with around 9000 results. I'm pretty sure this is fewer than the total number of files I've imported, but then again it would've likely been mostly tags under the "filename" and "folder" namespaces, so it's not a huge loss. I then checked for orphan files, since I'm pretty sure I used this old install for some quick import tests in the past on my other PC, but it came back empty, which is actually good I guess. As for hardware problems, I run a whole media suite (plex, the different -arrs, qbittorrent, etc.) off the same server PC, and I haven't had any issues with them at all. Just in case though, I reseated the NVME drive and blew some air into the connector in case of errant dust. I did the same with the SATA cables for the old drive as well, though I don't really plan to make much use of it anymore regardless. Next time I open the PC I'll probably just pull it out. Right now I'm running PTR processing which so far is going just fine, I've definitely gotten a lot further into it than I did before. If I don't run into any more issues, I'm going to CLEANLY shut down Hydrus and copy my database files to multiple locations/other PCs. Save myself the headache of having to do all this again. Speaking of database files, I think I figured out why my error went from something that could boot but would slow down/freeze eventually to an unbootable "database malformed" error. 
I had been performing all these operations on the main client.mappings.db file, but I completely neglected the related ".db-wal" and ".db-shm" files. I think at some point I likely carelessly deleted, mixed up, or otherwise modified the files, which I think probably resulted in the worsening of the error. (and I don't know anything about sqlite3 or WAL journal mode besides what I just googled, so I could be completely off the mark here.) Absolutely, thank you for your help and responses. I don't think I would've kept on trying to fix this if I didn't have the support of a developer who is actually responsive, helpful, and could actually provide guidance regarding my issue. You rock!
I had a good week working on a mix of stuff. There's a new maintenance job that recalculates the presentation and counts of individual tags, some UI fixes and a couple clever shortcuts for QSS refresh and ICC Profile switching, and some fixes to unusual file import problems. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=BvfHlZ8QRaI windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v580/Hydrus.Network.580.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v580/Hydrus.Network.580.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v580/Hydrus.Network.580.-.macOS.-.App.dmg linux tar.zst: https://github.com/hydrusnetwork/hydrus/releases/download/v580/Hydrus.Network.580.-.Linux.-.Executable.tar.zst I had a good week working on a mix of stuff. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights I may have fixed a program freeze when minimising to tray via the close button (the settings for this are under options->system tray). If you have had trouble with this before, please, when you are at a convenient point to risk a hang, try it again and see if you have trouble. If you do, what happens if you minimise to system tray from the file menu--still have problems, or is that reliably fine? I added a new maintenance command to tag right-click, 'regenerate tag display'. This is a catch-all job to fix bad autocomplete counts and sibling/parent display. Previously, the way to fix this was to hit a 'regen the whole cache' job under the database menu, but on large clients this could take ages--we now have a way to debug and simply, fingers crossed, just fix a bad tag, all in, typically, a few seconds. If you have any jank siblings or 'the autocomplete said (23-28), but it gave me 8 files', try it out! I fixed an issue with temp-file cleanup after importing read-only files. If you do a lot of unusual/misc hard drive imports, you might like to shut your client down, check your temp folder (hit _help->about_ to find it), and delete anything called 'hydrusXXXXXXXX'--it is all useless cruft after the client is shut down. It shouldn't come back! 
The new 'eye' icon in the media viewer's top hover window now has the 'apply ICC Profile stuff' option. It updates the image live, as you click it! If you are interested in ICC Profiles--and perhaps comparing this in the duplicate filter--check it out. I also added a shortcut for it, to the 'media viewer' set. The 'file log' has a new 'search for URLs' menu command, which lets you explore what has the selected URLs mapped. It is basically a copy of what I added to the media right-click 'urls' menu recently, and it'll help figure out weird URL collisions and mis-maps. The Client API '/add_tags/add_tags' command has a couple of clever new parameters to govern how deleted records are navigated. If you are doing clever migrations or other big mappings operations with the API, check it out. next week I want to get some repository janitor workflow stuff done.
>>15441 >important but subtle file import options fix: when you set a file to import to a specific destination in file import options, or you say to archive all imports, this is supposed to work even when the file is 'already in db'. this was not working when 'already in db' was caused by a 'url/hash recognised' result in the downloader system. I have fixed this; it now works for 'already in db' for url/hash/file recognised states. thank you to the user who noticed this and did the debug legwork to figure out what was going on Shit, that's exactly the opposite of what I wanted... I know it's basically a bug fix, but there are cases where I want a downloader to be universal, instead of having x downloader tabs set up for every file domain or manually deleting wrong file domains (which flags the file as deleted forever). Checkbox when?
Thanks for your hard work, hydev. I have a suggestion that you may have already heard: media viewer "profiles". You could create ones that have different default zoom levels, different shortcut sets activated, etc. And you could switch between them easily within the viewer. Also, the ability to prevent scrolling past the top/bottom of the image.
Ok, my lazy ass damn near ran out of space on my mass storage drive, so I am going through and importing a bunch of image archives to hydrus so it moves them to the image drive. I have noticed that importing this way seems to stop at about 4000-5000 images processed; is there any setting that throttles how many images can be imported in one go through folder imports? Also, this is more just a peace of mind thing: I like having records of what's being imported so I can see why something may have gotten disqualified. The last folder I imported had, I believe, 4300 images processed, 920 came in, and the rest went straight into the recycling bin. This is an artist that I know I downloaded a lot of their crap, and I know that it is probably already all in the program, but is there any way to see the file logs of folder imports? And if there isn't, is it possible to have logs added? I know this would probably take up some degree of ram/add to weight, so a way to purge this once looked through would be nice.
>>15441 >I may have fixed a program freeze when minimising to tray via the close button (the settings for this are under options->system tray). If you have had trouble with this before, please, when you are at a convenient point to risk a hang, try it again and see if you have trouble. If you do, what happens if you minimise to system tray from the file menu--still have problems, or is that reliably fine? Hi Hydev, i reported this one. Yes, it seems to have fixed it as far as i can tell from the few days i have been able to test this latest version. Awesome! Originally i had only checked the "close the main window to system tray" box and it crashed when pressing the X on top right. Now it seems fixed. What was the problem and how did you find the solution :D? But i also made sure before i installed the latest version, that i try the "minimize the main window to system tray" option, in case the minimize button was also affected by the crashes, so i checked the box and even after one night and some wild minimize & maximize spam, it didn't crash. So i guess this option was always safe before too. The X button (close to system tray) one i could always crash reliably before. One quick question for you: Could there be a problem using emoticons (windows key + . (dot)) or other symbols (chinese, japanese etc.) in tags in the long run? Would you say there is a chance that those could cause problems or not at all? And if not in Hydrus, maybe when copy & pasting them somewhere else, or when using sidecars? One example. One image album has a heart emoticon as title ( 🧡 <- let's see if this even shows here). would i be able to find it in the future even if microsoft changes the designs or are they somehow universal and there is little chance they won't work in the future? Weirdly enough in Hydrus the standard red heart doesn't show as red but rather white/colorless/black and is smaller. 
Same here: โค So i am not sure how reliable all those symbols are, please some suggestions how to handle this. Sorry i have no clue about this :P >>15444 >also, this is more just a piece of mind thing, I like having records of what's being imported so I can see why something may have gotten disqualified. the last folder I imported had I Believe 4300 images processed, 920 came in, and the rest went straight into the recycling bin. this is an artist that I know I downloaded a lot of their crap, I know that it is probably already all in the program, but is there any way to see the file logs of folder imports? and if there isnt, is it possible to have logs added? I know this would probably take up some degree of ram/add to weight, so a way to purge this once looked though would be nice. when manually dragging & dropping a folder into Hydrus: directly above the progression bar on the right, there is a 'file log' button with an additional arrow down button with additional options. the file log gives informations and status and with right-click on the entries, you can chose to open the selected files in a new page or more under the 'whole log' submenu in the right-click menu. Is this what you looked for? when using the automatic import and export feature: file -> import and export folders -> manage import folders... -> double click (or edit) entry you want -> under the checkboxes there is a 'file log' button which has the same options as i mentioned above. is this what you were looking for? the question about why it stopped at about 4000-5000 images, i cant answer. but you can right-click on the ones that didn't import and 'try again' for example.
>>14275 As of now, my db is over 8 years old and has over 28k files, all manually tagged by yours truly
>>15448 How the fuck do you even manage to tag stuff properly and consistently, I tried tagging things myself and ran out of steam after three images or so
>>15448 I kneel tagger-sama >t.only tags things as 'reaction image', 'meme', 'character:xyz', or 'series:abc'
(96.89 KB 1366x735 2024-06-29_15-42.png)

(377.24 KB 1366x735 2024-06-29_16-01.png)

>>15449 >How the fuck do you even manage to tag stuff properly and consistently, Different anon here and I tag 100% manually like >>14275 and >>15448. Currently I have 38K files with only 50% properly tagged (fully tagged and already sent to archive). The answer is just to begin small and from there, little by little, add more tags that better describe what you have. Soon you will develop your personal system and surely find yourself deleting 100s of tags to replace them with others that suit you better. In any case you will need time and a lot of autism.
>>15419 >>15429 Thanks again for your work here, you saved me a ton of time. The 'already in db' logic and how it applies to post-import content updates should all be fixed in v580 now, let me know if you still have any issues. >>15422 >>15427 >>15428 In my experience, early SSDs were terrible and I had some that just died after a year and others that lasted ten. It was often related to bad TRIM tech. I haven't had a single SSD problem in the past, I don't know, eight years or so. I think a lot of maintenance tech improved along with write reliability overall. I may just be lucky though. I tend to just buy stock machines with simple non-super-gamer-mode basic NVME system drives now. That said, no matter what medium you are on, you can always have a house fire or a nephew throw orange juice all over it. A weekly backup trumps all hardware problems. >>15423 Great news, thanks for letting me know. I just got a new nvidia driver on my main vidya machine and I played an mp4 with MPC-HC right after and it caused my whole display to crash and recover, with a second of terrifying jaggy static audio. I then did it again and it happened again, and then I did it again and it was completely fine thereafter. I think GPU tech is just crazy and sometimes things are just in conflict for whatever 'it gives us +3% performance' reason, and then a patch is activated or some internal error is tripped and it switches driver flags, and it all works again. Let me know how you get on! >>15426 This is usually cloudflare. Try hopping VPN, or use Hydrus Companion to copy your User-Agent and cookies over from your browser to hydrus. >>15430 I used to be a C++ guy, funnily enough, about fifteen or twenty years ago, but I ultimately became fond of python simply because of how easy it is to write. 
Although there are some limitations on speed and it bloats the hell out of the install size, it allows me to rapidly prototype things and there are some great libraries that just add xyz support in a couple lines. I love not having to deal with memory or pointers, and stuff like list comprehensions are delightful. I'm not up to date on most modern coding though--I understand C# and other new languages have integrated many of the cool python features, so I can't claim I am doing anything but what is personally comfortable and what I have actual experience with. The good news is that most of the heavy grunt work in hydrus, stuff like image rendering, is actually done at the C++ level in libraries like numpy (the code kind of 'jumps down' to C++ when it hits certain dlls). This tech can also work multi-core, which current normal python can't do. When you encounter slow shit in hydrus, it is almost always due to me writing bad code, rather than it being python's fault. Usually UI code that isn't being async like it should. Let me know if you run into problems in a particular area, and you can often run help->debug->profiling to help me figure out what I did wrong.
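As a small generic illustration of that 'jumping down' (nothing hydrus-specific): the same per-pixel operation written as a python loop versus handed to numpy in one vectorised call, where the actual loop runs in compiled code under the hood.

```python
import numpy as np

# Invert an 8-bit greyscale 'image' two ways. Both give the same answer;
# the numpy version does the loop in compiled C, which is where the
# speed on big arrays comes from.
def invert_pure_python(pixels):
    return [255 - p for p in pixels]

def invert_numpy(pixels):
    return 255 - np.asarray(pixels, dtype=np.uint8)

pixels = [0, 10, 200, 255]
assert invert_pure_python(pixels) == [255, 245, 55, 0]
assert invert_numpy(pixels).tolist() == [255, 245, 55, 0]
```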
>>15434 >>15436 Thank you for this, I will check it out and roll it into the defaults! That's neat if it works for rule34 and paheal too! Posting here is always good. I have to do a couple things to ensure the new default gets overwritten in the update, and I generally like to check what is being added since sometimes the 'hey here is a really cool downloader' that someone posts actually has some advanced stuff that isn't appropriate for all users in the defaults, so I may have to slim it down first. If you have a cool downloader that isn't appropriate for the defaults but you want to share it anyway, you can also do a pull request here, which is user-run and has a billion different downloaders: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts >>15437 Yeah, this is intentional, but I don't know if I like it. A user mentioned that according to style guides I should show the 'undo' menu even if it has nothing to show, and I did it for the pending menu too, but I think it looks dorky. I'll probably write a checkbox to let you hide them again. >>15438 If you are back to working, that's fantastic. The shm and wal files are usually only there when you are running the db and are removed on clean exit, so you usually only have them hanging around on a crash etc... They aren't a huuuuge deal, normally. If they exist on boot, I understand that SQLite will look at them and say 'ok, looks like we had a crash last time, is there any data that was supposed to be written to the real db that I can write cleanly, or should I throw whatever is here away?' and figures it out. If your wal files got swapped around, I think that might have fucked with SQLite, yeah, but I can't be sure. I think the shm one (shared memory) is basically a place for locks for parallel access, which doesn't matter for us, but the wal is the write-ahead log and is basically where shit about to be written to the database is stored. Let me know how you get on in future. 
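You can watch those sidecar files appear and disappear with plain sqlite3 (a generic demonstration, nothing hydrus-specific; the table name is just for show):

```python
import os
import sqlite3
import tempfile

# In WAL mode, SQLite keeps pending writes in a '-wal' sidecar file and
# lock coordination in a '-shm' file; both normally vanish on a clean close.
db_path = os.path.join(tempfile.mkdtemp(), "example.db")

con = sqlite3.connect(db_path)
con.execute("PRAGMA journal_mode=WAL;")
con.execute("CREATE TABLE mappings (hash_id INTEGER, tag_id INTEGER);")
con.execute("INSERT INTO mappings VALUES (1, 2);")
con.commit()

# While the connection is open, the sidecars exist next to the db file.
wal_exists_while_open = os.path.exists(db_path + "-wal")
shm_exists_while_open = os.path.exists(db_path + "-shm")

con.close()  # clean exit: SQLite checkpoints the WAL into the main db

# After a clean close they are removed, which is why finding them on boot
# suggests the last session crashed.
wal_exists_after_close = os.path.exists(db_path + "-wal")
```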
You sound like you are focused on a good backup routine, but I'll post this anyway for you to check out: https://hydrusnetwork.github.io/hydrus/after_disaster.html >>15442 Sure, no problem. Now this code is actually properly written out instead of sort of 'accidentally running sometimes', I can wrap it in an easy yes/no and stick a checkbox on it.
>>15443 I haven't heard it exactly like this, and I think it is a great idea. I feel like I am finally getting on top of some background code cleanup around here, and now I am adding new stuff via this new 'eye' icon. Having some preset profiles you can quickly load up would be a great thing to add. First, I think, is going to be some new zoom options. Lots of users want the ability to lock zoom between files and stuff, so I'll see what I can do. Bounding pan would be neat too. I can't promise this will come quick, but I'd like to keep pushing in little waves. >>15444 >>15445 Yeah, this is very strange, an import folder shouldn't stop at 4000 files, and I know plenty of users who have used them to figure out 500,000+ file situations (import folders use far fewer UI resources than a normal import page, so they are useful for giganto jobs). I think, yeah, check out your 'file log' in the import folder edit panel itself, and you'll see more on what is going on. My guess is you'll have 30,000 items or something, all set 'skipped' because of some weird error. Or, if they ended up in the recycle bin, then I would think they would have hit one of the rules in the actual import folder UI, like 'if the file is already in db, delete it'. It might be worth undeleting those files and trying a normal manual page import to see better what is going on. Let me know what you see. >>15445 >What was the problem and how did you find the solution :D? This whole time I thought the problem here was with the 'hide the UI and update the system tray icon' code, but since you mentioned it was with the close button only, that explained why I wasn't able to reproduce the problem (I was always testing it through the file menu). It turns out in the tiny little event handler where I catch the close button click event, I was firing off the 'minimise to system tray' command but then was leaving the event handler without telling Qt 'I caught this event, take no further action'. 
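In code terms the pattern looks roughly like this (a toy reconstruction with made-up CloseEvent/Window classes, not the actual hydrus or Qt code; in real Qt this lives in a closeEvent override):

```python
# Toy model of Qt close-event handling. The point is the one-line fix:
# calling event.ignore() so the framework takes no further action.
class CloseEvent:
    def __init__(self):
        self.accepted = True  # Qt close events default to 'accepted'

    def ignore(self):
        self.accepted = False

class Window:
    def __init__(self, minimise_to_tray):
        self.minimise_to_tray = minimise_to_tray
        self.visible = True
        self.event_loop_running = True

    def on_close_clicked(self):
        event = CloseEvent()
        if self.minimise_to_tray:
            self.visible = False  # hide to the system tray
            event.ignore()        # the fix: 'I caught this, do nothing more'
        if event.accepted:
            # Without ignore(), the default close handling proceeds and
            # tears down the event loop -> the half-closed, frozen window.
            self.event_loop_running = False
```

With the ignore() line in place, the tray-minimise path leaves the event loop alive; without it, the window hides and the app quietly half-closes behind it.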
My guess is that Qt was thinking the close happened and getting into a state where it was half closed (since my real exit code also happens elsewhere), probably the main event loop was exited but the UI wasn't deleted, something like that. So, when you reactivated from the system tray (although I don't know how that would work with a dead event loop, but who knows, maybe that works immediately off the click somehow), it was frozen because no events could be processed. Anyway, as is often the case, it was one dumb line, 'event.ignore()', and it was fixed. >Could there be a problem using emoticons (windows key + . (dot)) or other symbols (chinese, japanese etc.) in tags in the long run? I don't know really. My general philosophy has been to be maximally supportive, so if it is unicode and doesn't break my basic rules (there's some stuff like a tag can't start with a hyphen or 'system:'), I clip it to 1024 characters for safety and just save it. We've got a load of weird kanji, hangul, chinese, and emoticons, and these new combined emoticons (like if you combine girl + elf, in some render engines it'll render 'elf girl' instead of two characters, or now that I think of it, I think that's how the coloured heart works), mostly through parsing some weird bit of title html here and there. Now that we are in python 3, where unicode is a delight to work with, and now that basically every OS and other piece of software will eat UTF-8 no problem, there basically aren't any technical problems with this. If you paste ZALGO or other bullshit into the client, it'll eat that and render it all fucked up, but it works. 99% of the time, I just see it as a string. Will unicode change in future, and will these custom emojis change? I don't think they'll change much, since most of this is baked into the official unicode standard now. These custom renders might change. 
But I imagine the way it will go is we'll have ten new ways to say 'pregnant male' so as to include x racial group or y sexual orientation or z allergy sufferer. Overall, I don't recommend these characters, but I'll save them and throw them at the render engine, so feel free if you like them. If Qt and/or your OS decides to draw them differently in ten years, that's a risk, but I don't think it'll ever be a technical problem. If there is ever a unicode 'schism' where we decide it is all too much, I imagine there will be (if there isn't already) a nice way to filter out bullshit characters and just stick to normal human letters and logograms. I also don't like them since typing them (and thus searching with them) is a pain. Tags are for searching, not describing!
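The 'combined' characters mentioned above are Zero Width Joiner (ZWJ) sequences: several ordinary codepoints glued together with U+200D, which capable render engines draw as one glyph and everything else draws as separate characters. A quick sketch (the codepoints are from the standard Unicode emoji data, nothing hydrus-specific):

```python
ZWJ = "\u200d"   # ZERO WIDTH JOINER
VS16 = "\ufe0f"  # VARIATION SELECTOR-16: request emoji (coloured) presentation

elf = "\U0001F9DD"  # 🧝 ELF
female = "\u2640"   # ♀ FEMALE SIGN

# 'elf' + ZWJ + 'female sign' renders as a single 'woman elf' glyph
# in engines that know the sequence; elsewhere you just see two characters
woman_elf = elf + ZWJ + female + VS16

# under the hood it is still four plain codepoints, which is why tag
# storage 'just works': to hydrus this is an ordinary unicode string
assert len(woman_elf) == 4
assert woman_elf[1] == ZWJ
```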
(410.07 KB 165x494 too based.gif)

(397.29 KB 2048x1269 this.png)

>>15452 >A weekly backup trumps all hardware problems.
I've imported some files from a variety of downloaders and would like to set their modified time to their earliest web domain date, so that they're essentially sorted in order of first publication. Is this currently possible? (I'm willing to write basic scripting)
>>15453 >Sure, no problem. Now this code is actually properly written out instead of sort of 'accidentally running sometimes', I can wrap it in an easy yes/no and stick a checkbox on it. Sounds good! I'll wait for the next version before updating then.
I don't know how to interpret this at all.
>>15461 Post Hydrus version. Better yet, the information found at Help>About.
>>15448 Your DB and file accumulation rate are about the same as mine. I had about 24k files after 8 years. I have now spent the past couple years tagging and have tagged 28k files. I added roughly 11k in those past couple years though because Hydrus makes it easier to collect. I track my file processing and downloading, and I'm keeping ahead of my download pace. Might catch up within the next year. Gonna suddenly have a lot of spare time then.
>>15451 >Siblings In my perfect hand made garden, there are only ideal tags. With maybe a couple exceptions for things I keep misspelling like Yokai/Youkai Watch. I tag files in batches of about 200-400 at a time before archiving them. Where was it that I could view my total tags and tag relationships again?
regarding sankaku, is there an alternative downloader since hydrus is unfeasible for this?
I found a problem with the "neighbor spam" check. It seems like it's incorrectly dismissing file url matches, even when each file url is only mapped to one file. The post url does contain multiple files, but the url class says as much, so I expect hydrus to still check file urls and avoid redownloading when it sees that url already on a file. But it doesn't: it downloads the file every time and ignores the file url being present already. Turning off the neighbor spam check fixes this issue, which is weird, since the neighbor spam checkbox only mentions checking post urls, not file urls.
>>15469 This is just a guess, but since this is the first time I'm seeing this issue, and I'm on v579, maybe the issue has something to do with this change you made in v579: >I've improved some 'hey I think we already saw this URL before' logic, fixing an awkward false positive result. If you have had incorrect 'already in db' on slightly different versions of a file on a booru, and you traced it to a bad source url, they should work in future.
>>15469 >>15470 I should probably also mention that I'm having the issue specifically with the kemono.su downloader which uses an api, then associates the original post url, if that makes a difference.
>>15454 >So, when you reactivated from the system tray (although I don't know how that would work with a dead event loop, but who knows, maybe that works immediately off the click somehow), it was frozen because no events could be processed. Anyway, as is often the case, it was one dumb line, 'event.ignore()' and it was fixed. I always find it strange that bugs like this one don't happen every single time. This bug was more likely to happen after the client had been minimized for some time, or after minimizing and then maximizing several times in a row; then out of nowhere it randomly crashed. In the future, when AI helps you track them down, I hope you will be able to spend less time fixing annoying bugs like these. _ Regarding emoticons: I've tested a lot of the smiley and heart ones, mostly. Can you explain why a few of them render somewhat wrong in hydrus and also here? For example, these show colorless/black/less detailed -> ❤ (default red heart), ❣ (red heart as exclamation mark). They both show correctly here, though, when you type them directly after another symbol without whitespace, like this 💔❣, 🧡❤, but not in hydrus. Also not in the latest Chrome, but here the workaround is possible because of the old Firefox version I'm using right now, so it won't work for you, I guess. Other emoticons (which also work without whitespace) are the smiling face ☺ next to 😚🙂 and a very sad face ☹ next to 😲🙁. Some ultra weird behavior: when I type the smiling face first and the very sad face directly after it without whitespace, all the others after that also stop showing the detailed version until I hit one that stops this behavior (only in old Firefox) -> ☺☹🙁😒🤤😜😯😫🥱😣🙂😚😪🥱😌😟🎪. Ok, I get it: different programs show them differently and per version, and the behavior in hydrus seems to depend on Qt, as you said. 
But why is it that nowhere is it possible to render the 2 smileys (smiling face + very sad) and 2 hearts (default red + exclamation mark) just as Windows shows them when pressing win+(dot)? It's really not that serious, but I'm just interested, and you can explain it nicely :) _ It seems hydrus doesn't support .url files. Wouldn't it be very easy for you to support them? Like clicking on one and then showing the 'open externally' button, which calls your default browser. And since all of them obviously have a URL: line, you could fetch/parse the url easily so it could show under right-click -> urls, and they could be searched for easily with system:urls too. I've got a lot of .url files saved throughout the years because it's not always worth saving huge files locally. Also, tagging isn't that good, or doesn't work at all, in browsers. Also privacy. Thanks and have a nice week!
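For reference, a Windows .url file is just an INI-style text file with an `[InternetShortcut]` section, so pulling the URL out takes a few lines. This is a sketch of how a parser could work, not anything hydrus actually does; real files sometimes carry a BOM or extra keys like `IconFile`:

```python
import configparser


def read_internet_shortcut(text):
    """Return the URL from the contents of a .url file, or None if absent."""
    parser = configparser.ConfigParser(interpolation=None)
    parser.read_string(text)
    # option names are case-insensitive in configparser, so 'URL' matches 'url'
    if parser.has_option("InternetShortcut", "URL"):
        return parser.get("InternetShortcut", "URL")
    return None


sample = "[InternetShortcut]\nURL=https://example.com/some/page\n"
assert read_internet_shortcut(sample) == "https://example.com/some/page"
assert read_internet_shortcut("[InternetShortcut]\n") is None
```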
>>15474 >💔❣, 🧡❤ Ok, in Chrome the second of each shows wrong as expected; only in my old Firefox (maybe in the new one too, idk) do they show correctly with that workaround. >>15474 >☺☹🙁😒🤤😜😯😫🥱😣🙂😚😪🥱😌😟🎪 Here the first two also show wrong in Chrome. The weird behaviour only shows in Firefox, where the first eight emoticons render wrong. Anyway, my question in the last paragraph stands: why aren't the 4 emoticons -> 2 smileys (smiling face + very sad) and 2 hearts (default red + exclamation mark) rendering correctly anywhere?
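Not hydev, but for what it's worth: the colorless vs colored split usually comes down to Unicode presentation. ❤ (U+2764) and ❣ (U+2763) default to *text* presentation (a monochrome glyph) in many renderers, while 💔 and 🧡 are emoji-only codepoints. Appending variation selector-16 (U+FE0F) requests the colored emoji glyph; whether a given renderer honors it, or flips its default after an adjacent emoji, is up to its font and shaping engine, which would explain the differing results in Firefox, Chrome, and Qt. A sketch:

```python
VS16 = "\ufe0f"  # VARIATION SELECTOR-16: 'please draw the emoji version'

red_heart_text = "\u2764"            # ❤ HEAVY BLACK HEART, defaults to text presentation
red_heart_emoji = "\u2764" + VS16    # ❤️ same base codepoint + emoji request
heart_exclamation = "\u2763" + VS16  # ❣️ same idea
broken_heart = "\U0001F494"          # 💔 emoji-only codepoint, no selector needed

# the base character is identical; only an invisible selector differs,
# so whether you see red or monochrome is entirely up to the font/renderer
assert red_heart_emoji[0] == red_heart_text
assert len(red_heart_emoji) == 2
assert len(broken_heart) == 1
```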
What does this metadata even describe? Is it worth keeping it around?
>>15480 Dots per inch. Google it. "If an image has a resolution of 300 DPI, this means that every inch contains 300 dots of ink. Photographers and graphic designers typically use 300 DPI as a benchmark for printing high-quality images." It determines the quality of a print: the higher the DPI, the denser the dots. If you print something, you don't print pixels, but dots. With a lower DPI, an image prints bigger on a sheet of paper when printing at 'original size'. For example: an image with a 1000x1000 pixel resolution and a DPI of 1000x1000 will print at 1 x 1 inch (or 2.54 x 2.54 cm) on the sheet. If you halve the DPI to 500x500, it will print at 2 x 2 inch (or 5.1 x 5.1 cm). Of course, printer settings allow you to stretch/resize and whatnot anyway. The same goes for scanning: the higher the DPI setting in your scanner, the bigger the filesize and resolution, because the sheet size is already a given with the usual A4 format. Also, when creating PDFs out of images, the image sizes can differ hugely if the images don't have a similar DPI, even though they have the same resolution, because the PDF viewer (for example in whatsapp) determines the display size of the images according to the DPI/resolution ratio. Is it worth keeping around? It is embedded metadata, imported with the images, so how could you not keep it around?! You would have to export the images that have a set DPI, find a tool that can strip/delete it, save the images, and then import them again, which wouldn't be recognized since the hash changes, I guess. So I guess: nope.
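The arithmetic above in a nutshell: plain division, nothing hydrus-specific.

```python
CM_PER_INCH = 2.54


def print_size(width_px, height_px, dpi):
    """Physical size in inches when printed at 'original size'."""
    return (width_px / dpi, height_px / dpi)


# the example from the post: 1000x1000 px at 1000 DPI -> 1 x 1 inch
assert print_size(1000, 1000, 1000) == (1.0, 1.0)

# halve the DPI and the print doubles in size: 2 x 2 inch (~5.1 x 5.1 cm)
w, h = print_size(1000, 1000, 500)
assert (w, h) == (2.0, 2.0)
assert round(w * CM_PER_INCH, 1) == 5.1
```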
>>15481 Well, yes, I know what DPI is generally, but what I meant was: does it do anything for pictures in Hydrus? I have a lot of pixel duplicates where the only difference is a DPI entry and a ~0.1 kB size difference, and I'm wondering whether it'd be better to keep the picture with the DPI metadata or the one stripped of it.
Tell me why converting all my pngs to lossless webp is a bad idea. Also, do you keep jpg or png when you can't tell the difference between the two? But, like, you're not sure if they're nigh identical since they're not pixel dupes.
>>15484 >But, like, you're not sure if they're nigh identical since they're not pixel dupes. If they're nigh identical and same dimensions, I keep the smaller filesize.
>>15484 If you use the PTR then converting them to a new format means they would not be tagged in the PTR. So I would say do not do that if you use the PTR and care about things being tagged. If you don't use the PTR (and don't plan to) then there's no real harm in it. As to similar files, in that situation I choose whichever one has more PTR tags. That file is more likely to get updated in the PTR. If there's no PTR tags I would keep the one in the better format. If the same format, then the smaller one.
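That preference order can be written down as a sort key if you ever script the decision (the format ranking here is just an illustration of 'better format', not anything official, and the record fields are hypothetical):

```python
# lower rank = preferred format; purely illustrative ordering
FORMAT_RANK = {"png": 0, "jpg": 1}


def keep_key(f):
    # 1) more PTR tags first, 2) then better format, 3) then smaller file
    return (-f["ptr_tags"], FORMAT_RANK.get(f["format"], 99), f["size"])


dupes = [
    {"name": "a.png", "ptr_tags": 7, "format": "png", "size": 400_000},
    {"name": "b.jpg", "ptr_tags": 73, "format": "jpg", "size": 480_000},
]
best = min(dupes, key=keep_key)
assert best["name"] == "b.jpg"  # more PTR tags beats format and size
```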
When permanently deleting an image, it is put into an in-between state until the client is restarted: the trash icon is absent and the image remains in the cache until pushed out. You can see this when using the "all known files with tags" view, where confirming a permanent deletion will remove the image from view, but refreshing the page will bring it back.
>>15484 >Tell me why converting all my pngs to lossless webp is a bad idea. because the disk space savings are so tiny it's literally not worth your time. you'd have to export all the images, convert them, and then reimport and have to go through the duplicate filter or whatever to copy all your shit over to the new one.
>>15482 Depends on your level of autism. If you like it clean, take the one without DPI metadata. If you ever plan to print something, or do something with that file outside of hydrus that relies on DPI, then you can keep the one with DPI. It's not like an image needs that value, and it can be changed/set with programs like irfanview in case you ever need it. Since you can't search for DPI values with a search predicate within hydrus, it doesn't matter. Maybe it will support DPI search in the future? In your second pic, it says 73 tags > 7 tags. Wouldn't it be better to take the 73-tag one? I know you can merge them, but regarding the PTR, isn't it the case that the one with more tags is also more likely to get updated in the future? Because that is a file/hash more people have, so the chances are higher. Just like >>15486 said, I'd take the 73-tag one, therefore. >>15487 Correct me if I'm wrong, but as far as I remember from testing, restarting triggers some maintenance jobs immediately, and that's why you see that behavior. If you let the client idle for some time, maybe not using the computer at all (while it is still on, obviously), those jobs also start and you will eventually see the files deleted just like after restarting. It's on purpose: hydrus doesn't want to take resources away while you might need them, so it stays snappy. You can probably even force some of these maintenance jobs without restarting. Hydev might answer this. >You can see this when using the "all known files with tags" view where confirming a permanent deletion will remove the image from view but refreshing the page will bring it back. Keep in mind that even when deleting a file permanently and not leaving a deletion record, the file will still be in the 'all known files with tags' location if the file had tags. Even after restart. 
The thumbnail might get blurry (-> space-saving blurhash) directly after restart, but that will also happen after you leave the client idle for some time, as I said before; at least I'm fairly sure about that. The permanent deletion doesn't wipe the records completely, even when not leaving a deletion record. For reference: https://hydrusnetwork.github.io/hydrus/faq.html#does_the_metadata_for_files_i_deleted_mean_there_is_some_kind_of_a_permanent_record_of_which_files_my_client_has_heard_about_andor_seen_directly_even_if_i_purge_the_deletion_record "Yes. I am working on updating the database infrastructure to allow a full purge, but the structure is complicated, so it will take some time. If you are afraid of someone stealing your hard drive and matriculating your sordid MLP collection (or, in this case, the historical log of horrors that you rejected), do some research into drive encryption. Hydrus runs fine off an encrypted disk." It doesn't say whether the purge finally happens when you delete the tags, or whether remnants like a hash will stay in the database no matter what, even after deleting the tags. Hydev?
I had a great week working on some new janitor tech that makes it easy to thoroughly delete tags from a repository. I also cleaned a bunch of code and, for normal users, improved some quality of life. The release should be as normal tomorrow. >>15461 Please hit the 'Linux' tab here https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#installing and ctrl+f for that 'g_module' string. There are a couple of ways to fix this, I understand, but I am no Linux expert so I cannot talk too cleverly.

