/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.



(4.11 KB 300x100 simplebanner.png)

Hydrus Network General #3 Anonymous Board volunteer 12/01/2021 (Wed) 23:12:07 No. 16965
This is a thread for releases, bug reports, and other discussion for the hydrus network software. The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS.

I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST.

Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ .

If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
https://www.youtube.com/watch?v=mn34GKTVqMM

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v464/Hydrus.Network.464.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v464/Hydrus.Network.464.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v464/Hydrus.Network.464.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v464/Hydrus.Network.464.-.Linux.-.Executable.tar.gz

I had a good week. Some images get better colour, and the software now deletes files more neatly.

ICC profiles

tl;dr: Some image colours are better now.

Some images come with an 'ICC profile'. A scan of one ICC-heavy client showed that 10% of all images had them. ICCs are basically colour correction data like 'actually, the blue should be this much more intensely blue than is stored' in a variety of sometimes complicated ways. Some nice cameras attach a fast ICC based on lighting conditions or known defects in the camera, and some image editing software does similar. ICCs are also used for broader and more complicated colour conversion tasks, like the migration from sRGB to HDR formats if you have a new tv/monitor.

Hydrus now has basic ICC tech. It recognises when an image has an ICC and will apply it, converting the rendered colour to 'normal'. This happens right on image load, so it should apply in every media viewer and all new thumbnails. Should work for the server, too. With luck, this will be one of those things where you don't really notice any difference, but you should just have some nicer pictures, mostly photos, in your client. Let me know if you have any files that you know are still not rendering right!

This work took a long time to get done, but a user helped me out with some great examples and more background info on how to handle it all properly. The next step will be to add more indication of when an image has an ICC and add the ability to flip it on and off. Another topic is 'display ICCs', where some artists and other enthusiasts will have an ICC calibrated for their particular monitor or for different display conditions (e.g. 'how would this look under this lighting?' or '...from this printer?'). Since we have the tech now, I think I should be able to support this sort of ICC conversion too.

better file delete

tl;dr: Files delete from your disk better now.

For a long time, actually physically deleting a file that leaves the client's trash has been a bit hacky. To reduce UI lag, the recycle or delete action has to be deferred to the future, but since there was no permanent record of what to delete, the deferred job was always a bit rushed, and in a couple of instances (e.g. if you cleared trash and closed the client soon after) it could leave file orphans behind. The system has always been fail safe, never deleting too much, but it could lead to surplus cruft in file storage. Secondly, it turns out the server never got physical file delete tech added! The file repository has never been used much, and it just never came up.

So, this week I overhauled the whole way it works. Now both client and server keep records of the files they should delete, print delete summaries to the log, use neater delete daemons that smooth out the work to be non-interrupting, have sophisticated 'is this an orphan now?' logic to ensure that edge cases are kept in storage until they are truly no longer needed, and correct themselves in odd cases (like if you re-import a file just a couple of seconds after deleting it from the trash). They are also more aware of repository update files, which is another thing neither has been good at clearing out in the past.

So, you don't have to do anything here, but with luck, doing things like 'clear trash' and other large forced physical deletes should just work a bit nicer now. Users who run a server may see it clear out a couple of files after booting, and in a few weeks I'd like to roll out a 'clear orphan files' server database action for the admin menu, just like the client's, so you can clear out legacy files from deleted services.
full list

- image icc:
- images with embedded icc colour metadata are now normalised (to sRGB) like the rest of media rendering in hydrus. ICC can often mean photos, where a nice camera will apply ICC data to compensate for camera defects or general lighting information, or it can mean normal digital images where the software attached extra colour data when it was saved
- the image will now be rendered with 'fixed' colours in the media viewer, and new thumbnails should be good too. it applies early in image load and should work in all cases hereon, on both client and server
- images with an ICC will take a little longer to initially load. I'd estimate 10-50ms extra for most. one user with many ICC images discovered 10% of their collection had an ICC. I don't think the delay will be terrible IRL, but see how you get on and let me know! maybe giganto patreon pngs will have a fresh surprise for us
- future expansions here will be a database cache of ICC images and system:has icc, perhaps a button to click the ICC application on and off live in the media viewer, and then maybe options to load up and switch an ICC for your display
- .
- better physical file delete:
- both client and server now physically delete files from storage more smoothly and reliably. the 'deferred file delete' list is now saved in the database itself and will survive reboots (and undo itself if a file is re-added before it can be deleted), and the physical delete daemons are able to work at a less spiky pace as a result. physical delete summaries are now logged as well
- the server now physically deletes surplus files from its file storage! this never actually came up before jej--servers were just keeping all files forever
- on update, all servers will scan to see which files they only have deletion records for and will queue them for a deferred delete
- when deleting a service from the server, all its file repository files and/or general repository update files are now queued for deferred deletion if they are now orphaned
- some advanced 'pending upload file delete' logical situations are now tidied up better, for instance if you have a file set to upload to a file repository or IPFS and then delete the file from the trash, the file will hang around until the upload is done and then it will be correctly scheduled for physical deletion. same for if you delete the file repository or clear all its pending. previously, this file would never delete and would become an orphan
- thumbnails for non-downloaded file repository files are now removed promptly from a client if a file repository deletes a file
- .
- misc:
- fixed a typo error in last week's file filtering changes when doing wildcard tag searches in the 'all known files' domain
- fixed some bad namespace search optimisation, also caused by last week's search updates, that was making the 'system:has x unnamespaced tags' search count all tags instead, not just unnamespaced ones (issue #1017)
- fixed incorrect file type handling in thumbnail loading that was triggering a safe mode for gif file thumbs (which are actually jpeg/png). it should roughly double thumb load speed for gifs (and .ico too lol)
- .
- boring image stuff:
- wrote some methods to check for and pull ICC profile bytes from an image with PIL
- wrote ICC application in PIL on image load. we had figured out a way to do it with Qt, but this can happen right at the start of the rendering pipeline and will work for the server too
- cleaned up some PIL/OpenCV image load and normalisation code
- the decompression bomb check is now quicker for images with rotation
- dequantization is now applied to PIL on all image load by default, it doesn't have to be invoked separately
- some metadata parsing like 'get duration of gif frames' is now faster for images not in RGB or RGBA color
- .
- boring delete code cleanup:
- wrote a heap of new 'is an orphan' filtering logic for client and server
- wrote a daemon job for physical file deletion and plugged it into a new database queue for pending deferred file deletes
- client physical file delete now works off the normal lightweight job scheduler, previously it had its own mainloop thread
- optimised complex file domain file filtering a little
- the 'clear orphan files' job in the client now uses the same updated orphan logic as the new physical delete code. it now won't clear out files in upload limbo
- fixed an issue with re-storing a file in a server after one of its file repositories had previously deleted it. this never mattered previously, when files were never physically deleted, but now the code is brushed up to work properly
- cleaned up some server db code, including the read command method lookup
- moved the client 'hash exists?' test down to the master definitions module

next week

Now we have basic ICC support, I want to charge ahead on 'has ICC' and 'pixel dupe' search for the database. I'd also like to find the time to work on some more tag search tech for multiple local file services, but we'll see.
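As a rough illustration of the ICC normalisation described above (this is not hydrus's actual code, just a minimal Pillow sketch; the file paths are placeholders), the idea is to pull the embedded profile bytes and convert the pixels to sRGB at load time:

```python
# Minimal sketch, not hydrus's code: normalise an image's embedded ICC profile
# to sRGB with Pillow at load time, as described in the release notes above.
import io

from PIL import Image, ImageCms

def load_normalised_to_srgb(path):
    image = Image.open(path)
    icc_bytes = image.info.get('icc_profile')  # None if the file carries no ICC
    if icc_bytes:
        src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
        srgb_profile = ImageCms.createProfile('sRGB')
        # returns a new image whose pixel values are rendered into sRGB, so
        # everything downstream (thumbnails, media viewer) can assume sRGB
        image = ImageCms.profileToProfile(image, src_profile, srgb_profile, outputMode='RGB')
    return image

img = load_normalised_to_srgb('some_photo_with_icc.jpg')  # placeholder path
img.save('normalised.png')
```

The 10-50ms load estimate in the list above is roughly the cost of this extra colour transform on a typical image.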
(136.44 KB 929x1024 anonfilly - pleased.png)

>>16966
>video
>cunts beating each other
Refreshing.
Hey it's me again, the bad-sector-with-no-backups anon from last thread (https://8chan.moe/t/res/3626.html#6295). This isn't immediately related to hydrus, so I'll make it short. Sorry.

First of all, I couldn't cope after all with losing all the sorting of my 2TB hydrus database of probably four years by now, so I was trying not to think about it. In its current state it's just 2TB (4.7 million files) of unsorted media with meaningless filenames. Between twitter and tumblr rips of reaction images, garbage, and crops, it was completely useless. This is without considering, of course, that the bad sector on my HDD means a random 23GB of that is corrupted, and conceivably overwritten, forever lost. Nothing I've ever (formerly) archived can be trusted to be complete again. I actually feel sick thinking about it.

But, I did start archiving again. I checked the twitter of an artist I follow, and I saw him do a post delete after three days of it being up. I have yet to see it resurface anywhere. That feeling of possibly never seeing that media again in my lifetime made me start archiving again, despite my enormous loss. On top of that, my current cope (that I'm not gonna lie, isn't really helping) is that I'm archiving for other people, not myself. My decision-making clearly can't be trusted, if I thought not having backups was a fair reality. Again, on some level, I thought I could afford my hard drive failing. I think if I consciously were doing it in the hopes of sharing it one day, I would've been scared of losing it.

But anyway, the actual reason I'm making this post is to say that, even though I knew I wouldn't have been able to cope with more bad news just affirming the hopelessness of my situation, I searched "hd tune pro error scan" (the "bad sector" check I did, which I was told I cancelled far too late for there to be any hope), and I found an (old) article (on a 3.50 version of the program, when the one I used was 5.50), seemingly citing an official manual stating that "this test only performs read operations and is nondestructive." Provided that's true, the only data that got overwritten was from my watching several hours of videos/streams, my using my browser otherwise, and my booting from the failing hard drive twice (actually three times, but I forgot to mention the third one in my earlier post).

The situation is still just me grasping at straws. I don't want to hope anymore. I don't want to feel this loss anymore. I have come to terms with the reality I'm living, where I literally have nothing left besides an unsorted hoard that can never again be trusted to be complete. But unless the "error scan" thing I read was complete bullshit, in that it did actually write data, especially at that scale, to my dying hard drive, I will bother trying to grasp at straws for this hard drive eventually.

Also, the reason I hadn't replied sooner (besides my feeling sick at the thought of this having happened to me in the first place) is because I'd been veracrypt encrypting the 5TB external hard drive I'd cloned the dying hard drive to. Truth be told, my only clone is on that hard drive, so if the encryption process is interrupted, it'd be corrupted, and I'd have to clone it again. But I'd be able to do so without booting from it this time. So, I don't know. It's 66% of the way done, and says four days are left. Encryption doesn't have anything to do with anything. But I'm just saying, the earliest I can start trying to grasp at straws for my hard drive will probably be five days from now.
Thanks everyone for the reality check and the recovery suggestions. Thanks for the empathy. Above all else, I only wish I were capable of making the decision to backup for myself, before I had lost anything.
Is there a way to automatically tag files downloaded with a parser with their original filenames?
>>16969
I read through the rest of your posts, including your /g/ posts, and basically all I can say is that you seem to be super-mega-retarded (repeatedly trying to boot from a dying HDD...?). Plus, frankly, I think you deserved it.

>>https://8chan.moe/hydrus/res/15850.html#16923
>I had so much shit that never even saw boorus. Deleted pages. Uncensored pixiv stuff that was later re-uploaded censored.
If you weren't a selfish asshole most of what you lost would still probably be findable despite all your myriad mistakes.

On top of this you seem to have not learned your lesson, saying
>That feeling of possibly never seeing that media again in my lifetime made me start archiving again, despite my enormous loss.
yet there is no mention of a backup solution in place. HDDs eventually die, and in many cases there's no warning at all. Without a backup this will happen again, and then you will *really* deserve it, because you didn't learn your lesson the first time.

>I'd been veracrypt encrypting the 5TB external hard drive I'd cloned the dying hard drive to
Why would you *ever* do this?

>Truth be told, my only clone is on that hard drive
SUPER MEGA RETARDED
(1.74 MB 478x214 intruder alert.gif)

>>16971
>veracrypt
>Why would you *ever* do this?
I'm not the anon with the damaged drive, but the reason to encrypt a disk is quite obvious: to keep nosy faggots and eventually adversarial glowies out of the cookie jar. By the way, encrypting a disk is a step in the right direction in order NOT to be retarded.
>>16972 Normally you'd be right, but he's trying to recover stuff from a damaged file system, and afaik encrypting a disk with veracrypt will re-write the file system. For example, if there were an image that is intact but the file allocation record is missing, he would probably be able to find it with a file recovery program, but after this it will be gone, since veracrypt won't have copied it into the new filesystem. On top of that, apparently he's doing this live/in place on his only copy, which means he's learned exactly jack and shit from this experience.
(94.43 KB 1600x900 aefgsdf.jpg)

>>16973 That anon panicked. He deserves a break and a bit of guidance. Just a bit, not much, in case he gets comfy again. Just my two bits.
(8.76 KB 671x313 screenshot.png)

Is it ok that client.caches.db file size is 4.5 GiB? Is there a way to limit/reduce it? Should I do it?
>>16969 I dunno how I produced that link, but I meant to link this post: https://8chan.moe/hydrus/res/15850.html#16923

>>16971
>If you weren't a selfish asshole most of what you lost would still probably be findable despite all your myriad mistakes.
Before anything else, I have to respond to this. The single biggest insight I gained was the decision-making required to backup, which I stated in my latest post:
>On top of that, my current cope (that I'm not gonna lie, isn't really helping) is that I'm archiving for other people, not myself. My decision-making clearly can't be trusted, if I thought not having backups was a fair reality. Again, on some level, I thought I could afford my hard drive failing. I think if I consciously were doing it in the hopes of sharing it one day, I would've been scared of losing it.
>Above all else, I only wish I were capable of making the decision to backup for myself, before I had lost anything.
I wasn't trying to blogshit about myself, cause literally introspection for the sake of finding out why I couldn't make the decision to backup for my own reasons was all I could do when I thought about this having happened to me, but I literally came to the conclusion that porn doesn't make me happy, which was why I couldn't justify backing it up for myself, and why I on some level thought I could afford to lose it. Had my HDD with my private data been hit instead, I would have considered it less of a loss, because I was never going to share private data. But, with my hydrus HDD hit, it disables my being able to share this porn with confidence it's a complete collection, or if it remains unable to boot, it disables my ever being able to share it at all. Before my hydrus HDD had the bad sector, I would've been fine with every HDD I own spontaneously combusting. The literal only reason I'm backing up now is for the sake of being able to share it one day. I hadn't had that thought before. My only justification is the opposite of selfish. Again, I would have preferred if my private data HDD got hit instead, since I was never going to share that anyway, even though the private data is impossible to gain again.

>basically all I can say is that you seem to be super-mega-retarded (repeatedly trying to boot from a dying HDD...?)
I literally said the first thing I did since decrypting the bad sector HDD was clone it to an external hard drive, then tried to boot from as much, to perform any further actions from there, only to find the external hard drive wasn't recognized as a boot device, hence why I had to try booting the dying HDD again. I obviously regret not creating an image alongside that clone, since instead, now, the only image I have is from after booting it three times, without counting the first two boot attempts stalling indefinitely. Also I hadn't mentioned the following before, but I even lost the clone I created that I tried to boot from, so now I only have access to trying to perform recovery on an image created after booting the dying HDD. I can't rationally say any of this was ideal. But I can say it was before I came to the conclusion that I should be archiving for other people, since my decision-making can't be trusted if I were to do anything for myself. So I wasn't of sound mind during this, even though I was, in my understanding, trying my best.

>yet there is no mention of a backup solution in place.
I literally mentioned an external hard drive in the same post. That is the only backup I can use currently. But it will be a backup after I finish encrypting it.
>>16973
>On top of that, apparently he's doing this live/in place on his only copy, which means he's learned exactly jack and shit from this experience.
How was I supposed to know veracrypt changes the properties of the files it encrypts? Also, I started encrypting it before I found out that "hd tune pro" only reads data instead of writing it, back when I was being told it was too hopeless to even try recovery. And worst comes to worst, I thought it wouldn't have meant much to make a new clone, without booting from it this time, which I am able to do due to having a functional replacement hard drive at this point. It's a sector-by-sector image of the dying HDD. I didn't know veracrypt would ruin it because it was an image of a hard drive with a bad sector.

Anyway, in case you didn't get what my main point was, it was that I literally can't try my best for myself. I was doing it for myself the entire time, and not once did I figure that I couldn't afford to lose it. But now I'm trying to do it for others. I've read a lot of no-backup horror stories, but all of them were about self loss, not the loss of being able to share the data with other people, so it didn't provide me with the insight I needed to feel I couldn't afford to lose it. If nothing else, I don't think it's fair of you to call me selfish.

Also, this is stupid, but the only platform I know of to share hoards of porn is exhentai, which you're only allowed to have one account on, or else they ban all your accounts. I only have the one ehentai account from when I was in my teens, which has a retarded username everyone can backtrace. It did cross my mind to upload some galleries to exhentai. But I thought just switching from one account to a new one would get both banned. The only time I ever saw someone request something I had was in an exhentai comment, where the images were all there, but the filenames were garbage. So there was no lost data, anyway. So I didn't bother. Again, I just don't think it's fair to call me selfish, since it was that I wasn't sharing things unprompted, not that I consciously refused to, or consciously preferred I not.
>>16976 What an enormous wall of retarded cope. Fact of the matter is you're *right now at this very moment* doing potentially destructive operations on your only copy of your data and if you had shared all your rare shit somewhere (sad panda, furry booru, e621, anywhere) it would almost certainly still be there to re-download.
>>16977 ???? ok retard
>>16976 did you make a header backup of your encrypted volume, or are you going to wait until your password doesn't work anymore
>>16979 ? I wasn't prompted to make a header backup. I think it produces one after it finishes encrypting. To say what happened again, both my original 2TB boot HDD (my hydrus database) + my second 1TB HDD (private data) were encrypted. Then my original boot HDD had the bad sector. I decrypted the original boot HDD (took 2-3 days), then did a lot of unfortunate shit that makes my image of that bad sector original boot HDD unfortunate. I'm currently booting from a replacement 2TB HDD. My second 1TB HDD (private data) is unchanged. But I have an external 5TB hard drive, that has my only image of the original 2TB bad sector boot HDD, and I'm currently encrypting that external 5TB hard drive. It's 74% completed, and says 3 days are left, which I assume means anywhere from three to four days left. I was now told that encrypting drives with images made from dying hard drives compromises the integrity of the image. I had no way to know that. I just thought that even if the encryption was cancelled and thus corrupted the image, I would be able to make one again, without actually booting from the dying hard drive this time. I am not saying anything I've done at any point during this is ideal. I don't know.
(34.32 KB 1213x630 kissu.png)

Since upgrading to 464, wildcard searches no longer return namespaced tags. Was this intentional? If so, is there a way to include namespaced tags?
>>16975 Try running a vacuum under database > db maintenance > review vacuum data. I only recently realized this was a thing and shaved several gigabytes off of my db sizes.
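For reference, the job behind that menu option boils down to SQLite's VACUUM, which rebuilds the database file and drops its free pages. A rough sketch of the effect at the SQLite level, assuming you point it at a copy of a db file while the client is closed (the built-in menu job is the proper route):

```python
# Rough sketch only: what a vacuum does at the SQLite level. Run it against a
# COPY of a db file with the client closed; the client's own
# database > db maintenance > review vacuum data job is the proper way.
import os
import sqlite3

db_path = 'client.caches.db'  # assumed: a copy of the file, client not running

before = os.path.getsize(db_path)
con = sqlite3.connect(db_path)
con.isolation_level = None    # VACUUM cannot run inside an open transaction
con.execute('VACUUM')         # rebuilds the db file, reclaiming free pages
con.close()
after = os.path.getsize(db_path)
print(f'reclaimed {before - after} bytes')
```

If the database has few free pages, the saving will be small; the review vacuum data dialog shows an estimate before you commit.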
>>16980 the first 64 KiB of the volume is the header; the salt and encrypted master key are stored there and are needed to verify your password and decrypt the volume. if you lose it or it gets corrupted, you will never be able to decrypt the volume, even with the password. I think vc keeps a backup header at the very end of the volume, but still, the ends of the volume are the most likely to fuck up, so if you mess around with partitioning and start resizing things you could easily lose both header and backup
>>16983 The image of the dying hard drive was made after said dying hard drive was decrypted. Macrium Reflect (the program I used to create the image) doesn't offer cloning/imaging veracrypt-encrypted hard drives. If it's possible to clone/image a veracrypt-encrypted hard drive, I didn't know it was possible whenever I had to upgrade my hydrus HDD. I always decrypted it, then cloned it to the bigger drive, then encrypted it again.
>>16966 I'm one of the dipshits using this from a network drive and it works okay, except that it will close randomly if the NAS is under high load. I saw in the last thread you were saying you could run this with the databases on an SSD and the actual images on the network drive. Would it be possible to, on start up, just copy the executable and databases to the temp folder on my system drive and then copy them back when I close hydrus?
>>16984 you dont need to decrypt it to copy it, you just need to use something that does byte level copying like dd, this way you get a 1:1 copy
>>16986 When I used the sector-by-sector mode of Macrium Reflect it didn't work, though. https://forum.macrium.com/46325/Reflect-and-VeraCrypt https://sourceforge.net/p/veracrypt/discussion/general/thread/75cd86e620/#011a/fbdd This was on Windows 7. Maybe there are other alternatives, but all the articles I read (this was several years ago) only recommended Macrium Reflect, which couldn't do it.
>>16987 I dont know anything about this, I know most programs use caching optimizations to copy data which will output garbage with "unreadable" data, encryption means no filesystem no metadata no eof no terminators etc. the only way to copy it is raw bit for bit copying read 1 write 1 read 0 write 0 read 0 write 0 read 1 write 1 etc. Im pretty sure there are hundreds of dd ports for windows, its literally <10 lines of assembly code to do raw bit copying
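To make the 'raw bit-for-bit copy' idea concrete, here is a minimal sketch of block-level imaging with unreadable blocks skipped. It is an illustration only, not a recommendation over a proper tool like ddrescue (which retries bad regions and keeps a map); the device path and disk size are assumptions, and opening a raw device on Windows needs administrator rights.

```python
# Illustration only: raw block-for-block imaging with unreadable blocks zeroed,
# the idea behind dd conv=noerror,sync. For real recovery, use ddrescue instead.
BLOCK = 1024 * 1024                # 1 MiB, a multiple of the 512-byte sector
DISK_SIZE = 2_000_398_934_016      # assumed: exact size of the failing disk in bytes

src_path = r'\\.\PhysicalDrive1'   # assumed: the failing disk, opened raw (needs admin)
dst_path = 'D:/drive.img'          # image file on a healthy disk

with open(src_path, 'rb', buffering=0) as src, open(dst_path, 'wb') as dst:
    offset = 0
    while offset < DISK_SIZE:
        want = min(BLOCK, DISK_SIZE - offset)
        try:
            chunk = src.read(want)
        except OSError:
            chunk = b'\x00' * want  # unreadable block: keep the geometry, fill with zeros
        if not chunk:
            break                   # ran off the end of the device
        offset += len(chunk)
        src.seek(offset)            # re-sync the position after a failed read
        dst.write(chunk)
```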
>>16988 I don't know anything about this topic, so, all I can say is thanks for clarifying that this program has a needless constraint. But I read a ton of articles looking for a program to clone a hard drive on Windows 7 a few years ago, and this one was the one they all pointed to, for whatever reason, even though it can't clone veracrypt-encrypted hard drives. I wish I knew I had better options before. But, I wish I had the mindset required to have done a lot of things differently. At least the future will be easier.
>>16988
>the only way to copy it is raw bit for bit copying
>raw
>dd command
It might work.

>>16989 Speaking of raw disk reads, check the following out, perhaps... https://www.hddguru.com/software/HDD-Raw-Copy-Tool/
>>16970 If you mean the filename a server can optionally attach (which happens in the http headers), unfortunately not. If the filename is available in the html page or JSON API being parsed, you can grab it though. My watcher parsers do this, although I generally recommend people not grab "filename:" tags since they are generally not high quality and very rarely useful for searching.

>>16975 Yeah, that is normal. I use the word 'cache' here somewhat wrongly--that file is actually full of pre-calculated permanent lookups that massively speed up search. You can expect it to be about 5-40% of your client.mappings.db size, depending on how many files you have. Don't delete it, or it'll just spend hours recalculating the next time you boot.

>>16969 Glad you are moving forward despite the shit. This is sort of related, sort of not, but as I developed hydrus, one of the things I had to figure out in my head was that I couldn't make a system with 'perfect' tags. I had all these ideas about letting people search '> breasts:c cup' and other qualitatively informed namespaces, but instead I discovered that the faster you pull tags with parsing, the more human error you'll run across. Then, with enough people and tags, there are fundamentally unresolvable disagreements on which tags are the best anyway. The PTR now is a giant mess of contradictions and errors, but it is also of huge and unique value. It took me a long time to be happy with an imperfect result, but that's a better description of reality than what I originally meant to build. Life has pain and mistakes and imperfection. We'll never defeat that. The way to win is to thrive anyway.
>>16981 Thank you for this report. Someone else mentioned this today as well. I """fixed""" something in this system last week, and I may have fucked up namespaces along the way. I am sorry for the trouble, I will make sure to sort this out this week and get some proper wildcard unit tests so this stuff doesn't slip through again.

>>16985 Thanks, this is useful information. As for your question, yeah, I think you could probably do this. A couple of batch scripts, to move/copy back and forth before you boot and after you exit, would do it. I wouldn't use the temp folder--it is a little volatile--unless that is specifically a clever ramdisk or something for your machine. I won't write this into hydrus boot code since it is a little niche, but I'd be open to adding a 'run this script/exe/bat on exit', if that would help whatever situation you set up here. If you want to normally store the database on the network so you can run it from multiple machines, I think your only concern here should be that all the machines see the same address for the media/thumbnail file folders, just so it seems to be the same path wherever the database boots.
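For the NAS anon, a rough sketch of that copy-local-then-copy-back idea (not an official hydrus feature; the paths are placeholders, the four client*.db filenames are hydrus's standard ones, and it assumes the client's -d/--db_dir launch argument for pointing at a database folder):

```python
# Rough sketch of 'copy the db to a fast local drive, run the client, copy it
# back'. Paths are placeholders; add free-space and integrity checks before
# trusting it with a real database.
import shutil
import subprocess
from pathlib import Path

NAS_DB = Path(r'\\mynas\hydrus\db')     # assumed network location of the db folder
LOCAL_DB = Path(r'C:\hydrus_fast_db')   # working copy on the local SSD
CLIENT_EXE = Path(r'C:\hydrus\client.exe')
DB_FILES = ['client.db', 'client.caches.db', 'client.mappings.db', 'client.master.db']

LOCAL_DB.mkdir(parents=True, exist_ok=True)

for name in DB_FILES:                   # pull the databases onto the SSD
    shutil.copy2(NAS_DB / name, LOCAL_DB / name)

# run hydrus against the local copy; media can stay on the NAS via the client's
# normal file location settings
subprocess.run([str(CLIENT_EXE), '-d', str(LOCAL_DB)], check=True)

for name in DB_FILES:                   # push the updated databases back after the client exits
    shutil.copy2(LOCAL_DB / name, NAS_DB / name)
```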
Dev, will CBZ files ever be able to be displayed in Hydrus? If so, is there an ETA?
So what exactly is the PTR? Is it just tags linked to an image hash, or can you get actual images from it? It's not 100% clear from reading the hydrus network access keys page.
>>16969 In future, please format a new disk with veracrypt as soon as you get it, open a container and put your shit there. Do not do in-place encryption or decryption, it is probably not reliable and takes way longer. You also don't need to "decrypt" anything, ever. An image of a disk is just a bit-for-bit copy of your drive. If you have copied a partition, not the entire disk, you should just be able to open it in veracrypt. If veracrypt can't handle full disk clones, there are easy workarounds. I don't know if windows even lets you copy a partition/drive, since I don't use it. If it does not, I can provide help with GNU/Linux.

Just to re-iterate (it has been a week or two):
- You have an original disk, which now has bad sectors and all sorts of file system corruption.
- You booted the ORIGINAL disk and watched youtube for a couple of hours.
- You "decrypted" the bad sector disk, using the veracrypt "decrypt in-place" feature. This made veracrypt rewrite all files onto the hard drive in unencrypted form, basically re-writing the entire disk (the opposite of "Do not touch the original drive without having an image first"), but with the files in plain.
- You then made a clone (NOT an image) of the original disk to another disk; this copied disk failed to boot when you plugged it in.
- This cloned disk, you are now encrypting with veracrypt (your ONLY copy of the failing disk).

If you really are trying to do this, you will get two new drives (that will serve as your backup drives, so you need them anyway) and image both the original disk and the new disk onto them. If you need an OS-drive, get an old HDD or something and use that for the time being. Copy the drives with dd_rescue, it is like dd, but with failing disks in mind, so broken bits will be retried later. If you have both images on BOTH drives (original drive and cloned drive, on BOTH of your new HDDs), you can try recovery.

Let veracrypt finish its thing for now, just don't do shit like that ever again to your ONLY copy of a failing disk. If this were somehow an issue (it may very well be), you already overwrote way too much data for it to possibly be recovered, so just double down and make sure the veracrypt encrypted disk is usable somehow.

So, in summary:
- You run to a store and get two new drives, both of which must be bigger than the failing and cloned drive COMBINED. If you don't have an old HDD lying around, get the smallest and cheapest SSD/HDD they have as well.
- You fork over the cash, go home, image both the failing and cloned drive to one of the new disks.
- When that is done, you copy the image from new drive #1 to new drive #2. Unplug both your old drives before starting to copy.
- After all is done, you unplug all but your (new) OS-drive and new drive #2, physically, and try the suggestions people here gave you.
- When you did something and it broke your image on new drive #2, you copy it from new drive #1.

When I say "image", I mean the zeros and ones on the source drive, in a FILE on the destination drive, just like a movie, only 2TB big.

Alternatively, take the loss; it could have ended WAY worse than 32GB and metadata being lost here.
>>16995
>- You "decrypted" the bad sector disk, using the veracrypt "decrypt in-place" feature. This made veracrypt rewrite all files onto the hard drive in unencrypted form, basically re-writing the entire disk (the opposite of "Do not touch the original drive without having an image first"), but with the files in plain.
I didn't know that this happened. This was the single reason I couldn't just instantly copy all the data over, and why every other mistake following the bad sector even happened. I was trying not to hope. But, I feel the same horrible feeling I felt before, of realizing it's actually hopeless.
>>16996 Why did you have to decrypt the entire disk? You could just mount the veracrypt container and copy all files. Or were some files missing in your veracrypt container, and you panicked and started to decrypt the disk before cloning it? Please describe what happened in chronological order. What is the oldest possible state you have of any of the files involved?

If my assumption is correct, what you should have done is mount the disk and clone the mounted image from veracrypt. That will give you the "decrypted" version of the disk, which is what veracrypt is protecting. For additional safety, you should have mounted the disk read only in veracrypt, ensuring nothing more is written to it.

On the original disk, you now have the "decrypted" version of your data, which is what you cloned to the other disk? And after recovering on the cloned disk, 32GB of your data is missing? If that is the case, sorry, that's likely the best case scenario.

If you are buying some new hard drives (which you should!), please image the original disk again, so you have a clean copy to experiment on, and retry to restore everything. This time, maybe try dd_rescue (READ THE MANUAL AND USE A MAPFILE). If it goes exactly the same, the damage was done while veracrypt was decrypting the drive. Please don't put much hope into this, my guess is that the damage was done before or during decryption - garbage in, garbage out. But if you have some new drives anyway, it can't hurt to try. Plus, you can run photorec, maybe you can restore a couple more media files from the image that you can import into hydrus.

On encryption: Due to the way most large capacity hdds are designed, encrypting "on the fly" does not actually protect you from someone seeing the old unencrypted files. To do that, you absolutely need to fill the drive with random data first (veracrypt lets you do that in setup). Don't throw away/sell these disks without overwriting them or issuing an ATA secure erase command.

Also, I think I remember you saying you have an old backup. So, between "only" losing 32GB + metadata, the ability to (probably) restore with photorec, and still having some old files that you can match with the current ones, you may actually have lost little data - your old hydrus db may actually contain a lot of subs...
(74.36 KB 1760x775 Untitled.png)

>>16997 Sorry for the late reply. I can never tell how late I'm replying cause 8chan timestamps don't auto convert to my time zone, but I could've replied sooner had I not just felt sick about reading the excerpt of your earlier post I greentexted. I haven't even eaten today, but I couldn't even make myself eat. I know this is all my fault, but to me the blame is entirely on my mindset for my not being afraid of losing the data, and on the poor decision-making after that point. I just hate that I was ever vulnerable. I don't know.

>Why did you have to decrypt the entire disk? You could just mount the veracrypt container and copy all files. Or were some files missing in your veracrypt container, and you panicked and started to decrypt the disk before cloning it? Please describe what happened in chronological order. What is the oldest possible state you have of any of the files involved?
Well, as you mentioned later in the post:
>Also, I think I remember you saying you have an old backup. So, between "only" losing 32GB + metadata, the ability to (probably) restore with photorec, and still having some old files that you can match with the current ones, you may actually have lost little data - your old hydrus db may actually contain a lot of subs...
Yes, I did mention in a previous thread that I might have a 500GB HDD, which would have been a several year old version of the boot drive (that had the bad sector), since after filling the 500GB HDD, I cloned it to a larger drive, and booted from the larger drive instead. But, I think I might not have the 500GB HDD anymore, since I started using my old HDDs as a secondary HDD. So provided it's not just formatted clean, it must only have the contents of my second HDD, which is where I put all my private data. If I have any hard drive with my boot HDD (where my hydrus was stored), it must be the 256GB or so HDD the laptop came with. But I distinctly remember using the 500GB HDD as a secondary HDD. I don't think there's any chance of it being a backup.

But otherwise, the only reason I decrypted the disk was because the program I used to clone/image ("Macrium Reflect") didn't work for veracrypt-encryption. Even in the past when I was using it to change my boot drive to a bigger hard drive, I had decrypted my boot drive first. I didn't know I had any other options, even though I had last checked several years ago. Even though I can't really cope with confronting the reality being that I myself am to blame, to be fair, those several years ago when I searched how to clone/image, none of the articles I read that recommended the program I used mentioned veracrypt encryption; I viewed relatively few sources that actually mentioned the program and veracrypt in the same breath.

I didn't understand that corrupted data means the operating system considers it to be free space that can freely be overwritten by writing data. I know it seems obvious to assume as much, but I thought when I pressed "print screen", opened mspaint, pasted the fullpage screencap on my clipboard, and was unable to save it due to the filesystem being corrupt, that it meant the operating system refused to overwrite the corrupt data. I don't know what I was thinking. I obviously would have only walked in a straight line of preserving my data had I had the insight I have now. But another reason I decrypted the bad sector boot HDD (where my hydrus was stored) was because I was afraid if I shut it down, it wouldn't be able to be mounted again, because of the bad sector.
Also, the way you phrased it, using the word "copy" at first (even though you later said "clone" in the same paragraph), makes it sound like I could just copy my hydrus folder as soon as the corruption happened, and then perform recovery on the isolated copy. I thought I needed every sector of the dying HDD to perform recovery on, since anything corrupted started appearing as empty space to the OS. Also, because it was my boot HDD, I couldn't mount it unless I booted from it. My second HDD (with my private data [the HDD is still healthy]) actually came with Windows 10 installed, and at first I encrypted it by booting from it. But I couldn't mount it unless I booted from it. Unless veracrypt changed to support this retroactively- or, sorry. Maybe even if I had to boot from my bad sector boot HDD to mount it, if I were able to perform a sector-by-sector clone of it before ever shutting it down (which the program I used ["Macrium Reflect"] was unable to do), I might've been able to clone it to a replacement HDD, then boot from that, and maybe it wouldn't have had any negative effects. I don't know. In the end, I just wish I hadn't been vulnerable. But being educated on how to properly compensate for the situation only helps.

>On the original disk, you now have the "decrypted" version of your data, which is what you cloned to the other disk?
>And after recovering on the cloned disk, 32GB of your data is missing?
Yes, but the "recovery" wasn't a "recovery" at all; it was me performing Windows 7 "chkdsk" on the clone, to use the clone as a replacement boot hard drive. I'm sure the only thing it "recovered" was the Windows operating system.

>Please don't put much hope into this, my guess is that the damage was done before or during decryption - garbage in, garbage out. But if you have some new drives anyway, it can't hurt to try.
I keep trying not to phrase it as that one onion article: "Man Who Thought He'd Lost All Hope Loses Last Additional Bit Of Hope He Didn't Even Know He Still Had". But, it keeps happening to me. Every time I thought the feeling was over, it hits me again when I read something new. I only even had the motivation to try recovery because I thought I found that the "bad sector" check I did with "HD Tune Pro" didn't actually write any data, so the "only" heavy writing done was from my watching hours of videos/streams, and booting from the bad sector HDD three times. But, upon learning decrypting it wrote data, I feel like there's no hope in even trying anything anymore.

One thing I think I can amend about what happened in your previous post (which admittedly, I only skimmed reading after learning that decrypting my drive wrote data, since I felt like laying down and dying), was this part:
>- You then made a clone (NOT an image) of the original disk to another disk; this copied disk failed to boot when you plugged it in.
>- This cloned disk, you are now encrypting with veracrypt (your ONLY copy of the failing disk).
After decrypting my 2TB bad sector boot HDD (with my hydrus database), I cloned it to an external 5TB HDD. But when I tried booting from the external 5TB HDD, it wasn't recognized as a boot device (it's not that it even tried and the OS failed to boot, or anything). So this was when I booted from the same 2TB bad sector boot HDD, three times in total, at which point I created an image of it, which is currently on the external 5TB HDD, which is currently being encrypted in place (92.3% done, with 25 hours left).

Thank you for the patient replies.
Thank you for the insight. Thank you for being real with me. Even though I can't stomach this being my reality, I'm taking note of all the advice, information, and suggestions I've been given. I wish I had never been vulnerable enough to think not having backups was a fair reality. I wish I had never continued making mistakes to my bad sector HDD after the fact. But thanks for helping me properly compensate for all the context involved in this.
>>16998
>After decrypting my 2TB bad sector boot HDD (with my hydrus database), I cloned it to an external 5TB HDD.
Also I lost this clone, because, even though it's retarded, I didn't understand why I couldn't expand the partition to fill the entire 5TB HDD. Right now, that external 5TB HDD only has the image I made after booting from the bad sector HDD three times. I eventually learned that there is a difference between like "NTFS" or something, and that the partition couldn't expand past 2TB because of the type it was. I didn't have to delete this clone of it; I could have created two new partitions, and used those. I can only say that I was vulnerable, and thus incapable of operating in my own best interest. I didn't purposely fail myself. Even though every step I took was one self-destructive mistake after another. I just wish I hadn't been vulnerable.
>>16999 Anon, I think you misunderstood quite a lot of stuff regarding hard disks from reading your last two posts. This is clearly a bit too much information for you, so I'll try to condense it down. However, I don't have time right now. I'll try to explain tomorrow, but please, do not touch anything regarding those disks, and especially not the image of the original boot sector HDD. A lot of your assumptions are wrong; an OS does NOT overwrite free space at random. You overwriting your hard disk may also be less problematic than you may think at first. I really hate to give you false hope here, but don't be so hard on yourself. There is no way an average computer user has the background knowledge for any of this, especially since windows is hiding a lot of the complexity from you. Stop blaming yourself for not knowing what you couldn't have known. Recovering corrupted file systems is a minefield and any wrong move will blow up disastrously in your face if not planned for properly. There is a reason why professionals in this area make shitloads of cash.

After the encryption is done, please don't try any more experiments before someone can explain to you exactly what went wrong and how to fix it. I urge you, do NOT do any writing to this image - no chkdsk, no hex editor, no ntfsfix, nothing! Copy the image of the original boot drive somewhere safe, in addition to the 5TB external HDD. Again, for any functional backup, you need at least two other hard drives of the same size. You may as well bite the bullet and buy them right now.
>>17000 Thanks. I won't fuck with the only image I created, of course, nor the original bad sector HDD. But I only have a laptop, where I use the CD drive as a secondary hard drive slot. But I won't fuck with all I have left. And, I'm trying not to hope. Even though any new information makes me feel like death, I'm not purposely trying to make myself vulnerable like that.
Bug: In the duplicate filter, "go back a pair" button only goes back up to the first decision you made. You can't go back before that if you skipped a few pairs first: If you open the filter and click skip forward a few times the back button will do nothing.
>>17001 All right anon, I'm back. Here is what I want you to do, in order for you to get a good image of your drive. I know you have a copy already, but having a second image that was created with tools most people are familiar with can't hurt.

First, get cygwin from here: https://www.cygwin.com/
Download the setup.exe, hit next a couple of times, until you get to the package selection screen. There, on the top, you change the dropdown to "Full" and in the search box, you type "ddrescue". In the field "New", you pick the latest version available. Then, you hit next again, until everything is installed.

Next, you get a fresh drive that has nothing on it and connect it. Format it and give it a drive letter, so you can access it in explorer. Prepare everything for the dying drive to be connected. Make sure the computer can run like this for a couple of days. Charger connected, everything ready?

Open "cygwin Terminal" *AS ADMINISTRATOR*. Connect your good drive. Open explorer and make a note of its drive letter. It will be important later. Connect your bad sector drive now. If windows prompts any kind of repair, cancel. Also try to avoid looking at the files on the drive as much as possible. Note down its drive letter.

In the terminal, you type
cat /proc/partitions
This should give you a list of all the drives connected to your computer, and it should tell you the corresponding windows drive letters. In the column "name", your drive names should be listed. For example, sda1 means first partition of the first drive. sda5 would be first drive, fifth partition; sdc1 would be the first partition of the third drive.

Type cd /cygdrive/<your destination drive letter here>. For example, if my new and good drive were E:, I would type
cd /cygdrive/e/
Now, please type
touch testfile
Open the destination drive in explorer and verify that there is indeed an empty file called "testfile" on the drive that should later contain the image. If that worked, you are almost ready to begin the imaging process.

What we are missing now is the name of the defective drive. Look at the output of /proc/partitions again; your old drive should show up as sd<some letter><some number>. Now, you just need to ignore the number at the end, so you copy the entire drive, not just a partition. In my example, that would be sdb. When you have double checked you have the right drive, and are in the right directory (see the yellow text when I touch the testfile?), just type the following:
ddrescue /dev/<your-drive-without-number-here> drive.img mapfile
In my example, the old drive was called /dev/sdb, but that may be different on your system.

Slowly get up and do something else until everything is finished. While the entire process is resumable, please do not tempt fate by re-adjusting the laptop or drive during the procedure. When everything is finished, just copy the file called drive.img somewhere safe.

ddrescue is very focused on getting as much data off your drive as possible, as you can see in the screenshot. Please don't be concerned about the number of read errors rising. ddrescue will *retry* a couple of times after everything else has been pulled off the drive. Don't worry about recovering any of the data just yet; for now, we need to focus on getting our image.

If you have any questions, please ask before doing any of this. Also, if you think about deleting the old image from your 5TB drive, don't. Get a new one. You need it for backup. It's either now or in two weeks when you think about a good backup strategy.
>>17003 inb4 "I wiped my system". Triple check your output path.
>>17003 Ok, thanks. I will do it. But, about the new external hard drive I will get, will it only be for the new image created using this new standardized program? Does it have to be larger than just being another 2TB drive?

Also, in your previous post you said:
>Again, for any functional backup, you need at least two other hard drives of the same size. You may as well bite the bullet and buy them right now.
Meaning I would need at least two more 2TB externals? So three in total when you include the 5TB external I have right now? What am I even going to do with all this external storage. I get that with my current 5TB external, if I were to keep my image created with the non-standardized program, even if it were possible to create this standardized-program image alongside it, I wouldn't have the space required to put it on anything to perform backup on that. So I at least need one more 2TB external. But I don't want to blindly commit to buying two more for no reason. But I can buy one more 2TB external, at least. Would another 5TB external be the same as two 2TB externals?
>>17003 Also my 5TB external hard drive just finished encrypting. It sounds like the image I created using the non-standardized program isn't impressive, though, but you still want me to keep it. And you said I should get a different, empty external to create a new image using the standardized program. So there is nothing I should do until I get at least one more 2TB external?
>>17005 You could go ahead and just buy two 3TB disks (you need a few bits more than 2TB, unfortunately), but that will not cover your needs in the future. How big your new backup drives are should depend on your needs. Think about the total amount of data you have right now and will have in the next couple of years. You then get two external drives of that size. This should include everything you reasonably want to back up, not just your computer: phones, email accounts, computers, personal projects, your password manager database, a backup of all important key material (header backups of encrypted volumes, ssh private keys, a way to recover encrypted volumes in case you forget your password, ...). All of that stuff goes on one backup hard drive. And then another one.

For example, say you have the 5TB disk and 2TB, makes 7TB. So, you may go and buy two 8TB drives to store all your files on, twice. If you say that you will actually download 1TB per year, two 10TB or 12TB drives may seem worth it. This will depend on your budget and data, of course. How exactly you take your backup will depend heavily on the data you store.

What is important is that these drives stay independent from each other. If your main drive dies, and the first backup drive also dies mid restore, you still have a second backup drive. If your laptop takes a hit due to lightning striking tomorrow, while you are making a backup on backup drive #1, you still have drive #2, which was not plugged in at the time. This stuff can happen, it's better to be safe than sorry. You shall never plug both of these drives in at the same time. Best case, you use different software for both of these drives, so that if one piece of software does not work anymore, you can fall back on the other one. You can rotate these drives every time you take a backup, so in the worst case, you have lost one iteration. This may still be quite a heavy hit, but not as bad as the alternative. I also recommend that once a year, you unplug ALL disks and try to restore your current system using one or both of your backups. That is the only way to ensure that your backups are working properly.

There is another reason for having double your capacity available, and that is that you can do imaging on the disks. As you can see right now, having a disk that is a higher capacity than all the others is quite handy if you quickly need to image a dying drive. Most backup solutions will include deduplication and compression, so your backup will probably be smaller than the actual data. This extra amount of free space gives you a lot of headroom to image a drive in an emergency situation.

You should have two drives that are exclusively for backup; there should not be anything on them that is not already found on any other disk. I would also not advise you to split your local disks into "this has a backup, this does not". There will be data on one drive that you wish you could have backed up, but didn't. Where I work, there is a strict (but also secret - we don't tell people) policy that all data is mirrored and backed up daily for at least 4 weeks, no exceptions. This has saved our asses more than once. Somewhere, someone is figuring out that some critical part of their files has not been properly backed up - don't let that happen to you again.

Also, please don't buy more than two external drives. At that point, get a NAS and read up on RAID and advanced file systems like ZFS. That is also the point where it gets way more expensive. Please keep in mind, that is my inner sysadmin talking.
I have a quite different setup, but it is also way more expensive. I also deal with a lot more data than you (there are no hard drives big enough to hold it all individually), so I would please ask the other anons in this thread to proofread my suggestion here. Also, I would encourage you to post your planned backup strategy before you go and buy hard drives, just to make sure there were no misunderstandings.

>>17006 That is correct, you have an image using nonstandard software, which is way better than no image at all. Do not touch that image until you are 100% confident that all your files have been completely restored. You need another 2(.0001) TB of storage to image the drive and save a copy of that image, twice, so you may as well get two backup drives while you're at it. Check out https://shucks.top/ - it's a list of external drives you can buy, together with prices. The disks there can also be removed from their enclosure, giving you a cheaper internal drive - if you don't care about warranty.

>>17004 He is completely right. Triple check what you typed; if you are not completely sure, ask!
>>17007 Ok so first of all, I only have a laptop. It's a W530, which I was told has a lot of options for storage, but the biggest HDD I can fit is only 2TB. Beyond that, I need SSDs to have more storage on a single storage drive. That was the main reason this disaster happened in the first place. I was hurting for storage all year, but I could always delete garbage twitter videos, move my private data to my second hard drive, etc. I've been lucky to have had more than 30 gigs free on my boot drive all year. Obviously in hindsight I wish I had just bought an SSD with more than 2TB of storage to replace my boot drive, the one that developed the bad sector within 16 months, since at this point, I am spending that much anyway on externals and a replacement drive and all this bullshit. But I say this to say that my setup wasn't tenable, anyway. Ignoring the fact that I would consider myself needlessly emotionally vulnerable if I had hope of my hydrus ever booting again, let alone of ever preserving any information from my database, at best I could have archived maybe 40 gigs more before I literally could not delete anything without losing what I wanted to archive. I don't know what I'm saying. It doesn't matter, I guess. I wasn't thinking for the future as much as the present, which I guess was obvious since I lost everything and it hurts to believe there's any hope. I don't know if I can entertain the thought of my hoard ever reaching 8TB or whatever. I'll just try to do what I can for my corrupted database. I understand your point about having a backup of a backup. But I don't know if I can justify that myself. Obviously it would only help. But, I don't know, the more I'm having to spend on recovering what (for my own mental health) I'm assuming is hopeless anyway, the more I feel I should've just bought the SSD bigger than 2TB, to upgrade the 2TB bad sector boot drive too. I know that staying vulnerable in practice today opens me up to more loss. But to me this is so fucked up. As is, I literally can't do anything with my unsorted hoard besides filter to video and salvage anything above a certain filesize, since even though virtually everything was a gallery rip, twitter/tumblr means half of the database is garbage. It doesn't make me feel as sick spending so much on backing up an unsorted hoard that's 50% garbage, compared to the feeling of my hydrus database becoming this. I didn't even mention (or remember until now) that I lost a random 20 gigs of the data. It's so fucked up. I'm just saying words at this point. But I don't know if I can do the double backup thing, for this. I can do one more 3TB external, to keep my non-standardized image, on top of this new standardized image I would make, while still having space to put it on something, to perform recovery on. But even this, I feel like I bought this 5TB external for no reason, I feel like my non-standardized image is just a burden I have to pay for. I'm really struggling to put into words how I feel. I wish this hadn't happened at all, of course. But I almost wish I had just lost everything. I don't think I will ever in my lifetime be able to do anything with my unsorted hoard, provided it stays this way forever. I don't know what to do. I can justify at least a 3TB external. I will try to get there, at least.
>>16992 I'm the only one with access to the NAS and will only ever be running one copy; it's just that I only have the storage for this on my NAS. The NAS is all HDD and has a 1gbps ethernet connection. I don't have anything fancy like a ram drive on my workstation, just an SSD that windows is on. I suggested the temp folder because that should be present in every OS, and in general you would expect the OS to be installed on an SSD. The only thing I think you'd have to worry about would be to check for adequate space before trying to copy the DBs, plus some sort of integrity checking to make sure everything made it there/back intact (boomer it up with some blank files to keep track of transfer state + rsync? there must be a python version of rsync, right?). If you wanted to go crazy, it'd be sick if you could include some exception catching and some retry attempts on failed access of a thumbnail or file, to prevent crashes on network timeouts. The reason I ask is simply because I really don't like the idea of having my hydrus stuff spread out on multiple machines and would like everything to just be in one portable folder on my NAS. Of course I'd also like the performance of an SSD so I could sync the PTR and stuff. A batch file or executable next to the main client.exe called "client_on_a_NAS.exe" or something would be great.
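To sketch what I mean by the copy + integrity check part, here is roughly the shape of it in plain Python (the paths and the .db glob are my own assumptions for illustration, not an existing hydrus option):

import hashlib
import shutil
from pathlib import Path

NAS_DB = Path(r"\\nas\hydrus\db")        # hypothetical network share holding the db
LOCAL_DB = Path(r"C:\temp\hydrus_db")    # hypothetical scratch folder on the local SSD

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copy_and_verify(src_dir: Path, dst_dir: Path) -> None:
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(src_dir.glob("*.db")):
        dst = dst_dir / src.name
        shutil.copy2(src, dst)               # copy2 keeps timestamps
        if sha256(src) != sha256(dst):       # integrity check before trusting the copy
            raise IOError(f"hash mismatch copying {src.name}")

copy_and_verify(NAS_DB, LOCAL_DB)   # pull the db local before starting the client
# ... run hydrus against LOCAL_DB ...
copy_and_verify(LOCAL_DB, NAS_DB)   # push it back after a clean shutdown

Retry and exception handling around the network hiccups would bolt onto copy_and_verify easily enough.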
>>17008 > It's a W530, which I was told has a lot of options for storage, but the biggest HDD I can fit is only 2TB. Beyond that, I need SSDs to have more storage on a single storage drive. bruh, if you're going to archive several TB of anything you need a desktop and/or a NAS. Trying to carry everything around in your laptop even if it has multiple 2.5" bays is incredibly stupid. What if your laptop just... got lost or stolen? >I understand your point about having a backup of a backup. But I don't know if I can justify that myself. kek, already talking himself out of getting a backup. What a waste of space and everyone's time. >I should've just bought the SSD bigger than 2TB SSDs fail too you nigger
>>17007 >>17010 >What if your laptop just... got lost or stolen? Well that's what the external backups are for. >>I understand your point about having a backup of a backup. But I don't know if I can justify that myself. >kek, already talking himself out of getting a backup. What a waste of space and everyone's time. You literally quoted me saying "backup of a backup", but your illiterate flinging insults at me by your own morals cannot survive when confronted with the actual fucking quote, so you false equate what I said to merely being "backup". Fuck off. There is a difference between broaching the topic of an uncomfortable reality, justifying a certain viewpoint, and just out of context flinging shit. >>I should've just bought the SSD bigger than 2TB >SSDs fail too you nigger The difference is the SSD would have been new, so short of catastrophic failure would've lasted longer than the less than a year I was constantly deleting things and moving things out to gain more storage space. It also would've been bigger, so a bad sector would have had less of a chance of corrupting actual data instead of just empty space. Then because I was using my old hard drives as a secondary hard drive, when the 2TB had drive gets it bad sector, I would've had the old 1TB secondary hard drive as a backup. It would still be shit not having backups. But I would've had some recourse. And my boot drive not being the one with the bad sector would have prevented all the self-destructive decision making I did (besides decrypting the drive, which I probably would have done before learning that writes data). Anyway, in the end, the main thing is that I already have a 5TB external with my non-standardized image of my 2TB bad sector HDD, but I am being told to buy at least two other 3TB externals, since to create a standardized image of my 2TB bad sector HDD, I need at least a slightly bigger than 2TB destination, and an empty destination drive. But I am being told one of those externals would be a backup of a backup. I don't even want to keep typing on this topic. It's hard to keep spending for what I need assume for my mental health will forever be an unsorted hoard that suffered 20GB of random data being forever lost. But I said I can try to get at least one 3TB external.
>>17007 >Check out https://shucks.top/ - it's a list of external drives you can buy, together with prices. Also thanks for this specifically, because I've been feeling the price of all this, and when I checked the exact same model of the 5TB external I'd bought a few weeks ago, the same one from the same store is now 15 dollars off. Not super impressed by it.
>>16994 The first. Hash matches, you get tags. Contains no images.
I had a good week. I fixed a variety of small bugs, added some quality of life options, and made several behind the scenes database improvements that will improve search now and in the future. The release should be as normal tomorrow.
>>17008 Anon, I think you don't understand what I am saying. I don't want you to buy two 3TB externals, I want you to buy two big hard drives you can back all your stuff up onto, for your future backups. I'm talking 10TB plus - they are not way more expensive than the 8TB models at this point. You can then image your old boot drive onto these new backup drives (the image is just a really big file), next to your stuff, since you will have bought a big drive that will last you a couple of years (capacity-wise). If you want to archive things, you should be able to point a gun at any storage medium (or any piece of equipment, actually) you have, shoot it, and still have all the data from that drive, provided you have enough money to replace that device. Especially since you don't seem to take losing your stash well at all, you can look at it as an investment. Having the confidence that any storage medium can die at any moment and it will only hurt you financially feels really good. There really are no shortcuts here. Either you have a functioning backup, or you will learn this hard lesson again, until you do. The alternative is that you stop caring about your data. I guess it really comes down to that. >>17011 What makes you think that the SSD won't catastrophically fail 3 months in? I had an NVMe drive with a bad controller that flipped a bit roughly every 100MB read, and it was not even 4 months old. I had it in mirror with another drive, so I was not affected too much by it (only had to send it back), but this stuff can happen. Also, just fyi, plausible deniability and SSDs cancel each other out. You can either have a secure veracrypt volume and your ssd dies after a couple of years (my one-year-old drive is now at 55% used), or you let veracrypt do TRIM operations and any professional can tell that there is encrypted data on the drive (they can't tell WHAT, but they can tell that it's not random). There is a section about this in the veracrypt manual, I suggest you read that. I would also encourage you to get a desktop, since there are way bigger hard drives available for those and you can do RAID in them, which will eliminate any worries, since you can just pop a new drive in if the old one fails. That is an entirely different topic, however.
>>17015 >you can do RAID in them, which will eliminate any worries, RAID is not a backup
>>17016 Yes, very sorry, let me rephrase: If you have RAID1/5/6/10/... and a disk fails, you do not lose any data. You have to replace the disk, and the array will rebuild onto it so that you can afford to lose another disk. It will take some time to rebuild the array, during which all disks in the array are at 100% utilization, which makes them very vulnerable to failing as well. For example, with RAID5 (which is NOT recommended!) you have 4 disks, can use the capacity of 3, and of those four, any ONE can die. If you lose two, all of your data is gone. When the array is rebuilt, you can then again lose one drive. If you CTRL+A and delete every file, all those files will get deleted on all those drives, which makes it unsuitable for backup without additional protection.
>>17017 So RAID1 then a third offline backup? RAID6 sounds kind of confusing. Being able to rebuild an entire drive without needing to use an entire drive of data sounds like voodoo magic.
I recently updated my system, and now MPV doesn't work in Hydrus at all. The error mentions something about libgio and libmpv, so I'm guessing that my system is now running versions that are too high for Hydrus. Is there anything that can be done about this?
>>17018 RAID1 writes the same stuff to all drives, so if one goes up in flames, the remaining ones still hold virtually all the same information. RAID5 and RAID6 allow for one or two drives of the array to fail, but they cost the capacity of one or two drives, respectively. For example, if you have a RAID6 array with 4 drives, you can only use the capacity of 2 drives, but you can lose 2 drives. If those drives were 2TB drives, you would have one "virtual drive" with 4TB, backed by 4 drives "in reality". RAID5 with 4 drives will give you the capacity of 3 drives, but you can only lose one. If you lose two, everything on all 4 drives is gone. The nice thing about RAID is scaling - if you have 10 drives, you can use the capacity of 8 or 9 while reserving one or two drive capacities for parity data. If a drive stops working, simply stick in a new one and after rebuilding it is all safe again. When you replace a broken drive, the data from the lost drive will need to be recalculated from the data of the remaining drives. You can read on wikipedia how exactly this is done, as it differs between RAID levels: https://en.wikipedia.org/wiki/Standard_RAID_levels RAID is not a backup because you can still delete all files on the RAID array and you will have nothing left, because RAID just distributes the deletion to all drives. RAID drives are also not supposed to be disconnected, because if you reconnect one, the entire drive has to be checked and potentially rewritten to ensure consistency with the remaining drives. Additionally, a RAID array is almost always connected to the same power supply and storage controller. Good luck recovering a 20-drive RAID array when a faulty controller was writing garbage to all 20 of them. Advanced filesystems like ZFS also mitigate the impact of bitflips, which traditional RAID has no real answer to. If disk A says "the data is 01001" and disk B says "the data is 01011", RAID has no idea which one of the disks is correct. ZFS has a checksum in place and can simply try out which of the two disks is correct, rewrite the wrong disk and notify the user that a drive just returned garbage. ZFS can also do RAID with some neat tricks, like telling you which files were corrupted and only rewriting the actually used part of the disk during rebuilds. So yes, while RAID may limit the impact of a single lost drive, it should not be viewed as a backup solution, because it almost certainly is not one. And none of this applies to RAID-0, since that is just "write this part to one drive, write the other part to the other drive". If one fails, both are useless.
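To make the parity idea a bit more concrete: in the simplest single-parity case it is essentially an XOR across the data blocks, so any one missing block can be recomputed from the survivors. A toy sketch in Python (illustration only, not how a real controller or ZFS actually lays data out):

def parity(blocks):
    # XOR all blocks together, byte by byte, to produce one parity block
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# three "data drives" plus one "parity drive"
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d1, d2, d3])

# drive 2 dies; XORing the survivors with the parity block rebuilds it
rebuilt_d2 = parity([d1, d3, p])
assert rebuilt_d2 == d2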
>>17019 If you've upgraded to Fedora 35, I wrote >>>/hydrus/16937 which hopefully can help you. If not, the solution is probably very close, so you'll have to figure out where all the .so come from, but I'd guess copying them would work the same way.
https://www.youtube.com/watch?v=QgNEYyOlST0 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v465/Hydrus.Network.465.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v465/Hydrus.Network.465.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v465/Hydrus.Network.465.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v465/Hydrus.Network.465.-.Linux.-.Executable.tar.gz I had a good week. I focused on background improvements to the database. If you have a big client, it will take several minutes to update the database. My 2.4 million file PTR-syncing IRL client took 8 minutes. misc First off, some quick fixes: I fixed unnamespaced wildcard tag searches (e.g. 'sam*s'), which were recently not producing namespaced results. I also improved handling in the new ICC profile system when the ICC profile attached to an image being loaded was completely borked. Also, it seems the login-required 'gelbooru favourites by user id' downloader recently(?) broke severely--as well as pulling favourite links, it was also parsing and visiting the 'remove from favourites' link and deleting them! I fixed the gelbooru gallery parser to never pull a delete link again, but if you used this parser to grab all your favourites, please check your favourites list, and if you can, dig that downloader back out of hydrus and do a mass 'open sources' on the file log or the thumbnails so you can re-favourite any files that were dropped. Thanks to the users who noticed what was going on here and figured out what needed to be fixed. I added some new options. The 'default tag service in manage tags' choice is reset this week, and it now starts off working differently: it now remembers the last used service the next time you open a tag service dialog. Let's see if this works out, but if you don't like it, you can go back to a fixed default under options->tags. There's also a new checkbox in options->search that lets you default new file search pages to 'searching paused' rather than 'searching immediately'. ipfs and deleted files search This is mostly database prep for future multiple local file service expansions. If you have an IPFS service, you can now search it in a normal search page. Just switch 'my files' on the autocomplete dropdown to your IPFS service and you should be able to search it with tag counts and everything. IPFS works a little differently to a normal file service in hydrus, so this will need some more work to get those workflows integrated. Also, while an IPFS service in hydrus only knows about your pins atm, in future I would like hydrus to harvest more info from external sources so this search space could potentially populate with remote files that you could then command the client to download. In a related but quieter move, I did the same thing here for a new 'deleted files' umbrella domain. It'll take a few minutes to calculate this search cache on update. This will be of use in the near future when I let advanced users start searching deleted files. icc profile and pixel hash This is mostly database prep for future duplicate system expansions. The client database now records whether still images have an ICC profile, and it also saves data for 'these images are exact pixel duplicates' decisions. On update, all your existing files will be queued for scans to fill in this data in the background. Anything with an ICC profile will also regenerate its thumbnail. 
You don't have to do anything, this will all happen automatically over the coming week(s). In time, you'll be able to search for images with ICC profiles with the new 'system:has icc profile' search predicate. This predicate is weird and advanced, so I think I'll hide it away soon under an umbrella for advanced stuff. The 'exact pixel duplicate' data will be useful in the near future, when I expand the duplicate system to find (and optionally automatically merge) certain pairs that are perfect visual dupes. full list - misc: - fixed a recent bug in wildcard search where 't*g' unnamespaced wildcards were not returning namespace results - sped up multi-predicate wildcard searches in some unusual file domains - the file maintenance manager is now more polite about how it works. no matter its speed settings, it now pauses regularly to check on and wait until the main UI is feeling good to start new work. this should relieve heavier clients on older machines who will get a bunch of new work to do this week - 'all local files' in _review services_ now states how many files are awaiting physical deletion in the new deferred delete queue. this live updates when the values change but should be 0 pretty much all the time - 'all known files' in _review services_ also gets a second safety yes/no dialog on its clear deleted files record button - updated the gelbooru 0.2.x gallery page parser, which at some point had been pulling 'delete from favourites' links when running the login-required 'gelbooru favorites by user id' downloader!!! it was deleting favourites, which I presume and hope was a recent change in gelbooru's html. in any case, the parser now skips over any deletion url (issue #1023) - fixed a bad index to label conversion in a common database progress method. it was commonly saying 22/21 progress instead of 21/21 - fixed an error when manage tags dialog posts tags from the autocomplete during dialog shutdown
- fixed a layout issue with the new presentation import options where the dropdowns could grow a little tall and make a sub-panel scrollbar - added handling for an error raised on loading an image with what seems to be a borked ICC profile - increased the default per-db-file cache size from 200MB up to 256MB
- some new options: - the default tag service in the manage tags dialog (and some similar 'tag services in a row of tabs' dialogs) is reset this week. from now on, the last used service is remembered for the next time the dialog is opened. let's see how that works out. if you don't like it, you can go back to the old fixed default setting under the 'tags' options panel - added a checkbox to the 'search' options panel that controls whether new search pages are in 'searching immediately' or 'searching paused' state (issue #761) - moved default tag sort from 'tags' options panel to 'sort/collect' - . - deleted files and ipfs searchability: - wrote a new virtual file service to hold all previously deleted files of all real file services. this provides a mapping cache and tag lookup cache allowing for fast search of any deleted file domain in the future - ipfs services also now have mapping caches and tag search caches - ipfs services are now searchable in the main file search view! just select them from the autocomplete dropdown file domain button. they have tag counts and everything - it will take some time to populate the new ipfs and deleted files caches. if you don't have much deleted files history and no ipfs, it will be a few seconds. if you have a massive client with many deleted/ipfs files and many tags, it could be twenty minutes or more - . - 'has icc profile' now cached in database: - the client database now keeps track of which image files have an icc profile. this data is added on file import - a new file maintenance task can generate it retroactively, and if a file is discovered to have an icc profile, it will be scheduled for a thumbnail regeneration too - a new system predicate, 'system:has icc profile', can now search this data. this system pred is weird, so I expect in future it will get tucked into an umbrella system pred for advanced/rare stuff - on update, all your existing image files are scheduled for the maintenance task. your 'has icc profile' will populate over time, and thumbnails will correct themselves - . - pixel hash now cached in database: - the client database now keeps track of image 'pixel hashes', which are fast unique identifiers that aggregate all that image's pixels. if two images have the same pixel hash, they are pixel duplicates. this data is added on file import - a new file maintenance task can generate it retroactively - on update, all your existing image files are scheduled for the maintenance task. it'll work lightly in the background in prep for future duplicate file system improvements - . - boring search code cleanup: - fixed a bug where the tag search cache could lose sibling/parent-chained values when their count was zeroed, even though they should always exist in a domain's lookup - fixed up some repository reset code that was regenerating surplus tag search data - with the new deleted files domain, simplified the new complex domain search pattern - converted basic tag search code to support multiple location domains - cleaned up some search cache and general table creation code to handle legacy orphan tables without error - misc tag and mapping cache code and combined local files code refactoring and cleanup next week I'll take Christmas week off, so I only have two more proper weeks in the year. I would like to have basic pixel duplicate search working before then. Just a dropdown on the duplicates page for 'pair must/must not be pixel dupes' or similar. So, I will work on that and see if we can aim for a 'clean' release for end of year. 
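As a footnote on the 'pixel hash' idea above: the concept is just hashing the decoded pixels instead of the file bytes, so two files that decode to identical images match even when their encodings differ. A rough sketch assuming Pillow is available (this shows the concept only, not hydrus's actual implementation):

import hashlib
from PIL import Image

def pixel_hash(path):
    image = Image.open(path).convert("RGBA")          # normalise mode so equal pixels compare equal
    payload = repr(image.size).encode() + image.tobytes()
    return hashlib.sha256(payload).hexdigest()

# a png and a lossless re-save of it would share a pixel hash here,
# even though their plain file hashes differ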
birthday My todo list reminded me yesterday that I put out the first non-experimental beta of hydrus on December 14th, 2011. This is the rough 'start date' of the project and its birthday now. It will be ten years this week, which is pretty crazy. Like a lot of people, 2021 was an odd year for me. I changed some lifestyle stuff, dropping some unhealthy habits, and also had some unexpected stress. After looking back though, I am overall happy with my work. Although I completed fewer big new projects than I hoped, and at times I felt bogged down in rewrites and fixes, the general performance of the client grew significantly this year. As well as a variety of new tag search and display options, the sibling and parent system was completely overhauled on several fronts, with the improved virtualised storage in the database and asynchronous real-time application calculation, and with that the autocomplete search finally supported 'perfect' sibling+parent adjusted tag counts in very fast time. Years-old sibling and parent application bugs were finally drilled down to and fixed. The tag lists across the program gained better sibling and parent display and commands. Wildcard searches became much faster too, and all sorts of tag and file search improved on smaller domains, sometimes by a factor of a thousand, even when a client had the whole PTR lurking in the background. We also moved to automatic repository account creation and improved serverside privacy, tags became easier to sort, file and database maintenance gained multiple new commands that saved a ton of time and inconvenience, the database learned to repair much of itself, system predicates became parseable in the Client API and editable in main UI, we moved from my duct-taped dev machines to github-built releases, the image renderer moved to a tiled system that allowed very fast zoom, sessions could grow much larger without CPU death and could save to disk with a fraction of their old write I/O, and a whole ton of other little fixes and quality of life improvements to every system. I get a lot out of working on hydrus, and I hope to continue just like this. I appreciate everyone's feedback and support over the years. Thank you! If you would like to further support my work and are in a position to do so, my simple no-reward Patreon is here: https://www.patreon.com/hydrus_dev
>>17022 > in future I would like hydrus to harvest more info from external sources so this search space could potentially populate with remote files that you could then command the client to download. Fuck yes. reminds me of >>>/hydrus/15336 where some anons discussed a decentralized booru that combined the PTR with IPFS.
Hello. I asked about 2 weeks ago if tags could be downloaded from Deviantart. I remember you saying you didn't like their chaotic tag system, and you linked to something where I might try to program it myself. Could you post that link again? I can't access the old page anymore. I thought I would take a look at it and see if I could do anything, as I would like the tags. Thanks!
>>17022 >If you have an IPFS service, you can now search it in a normal search page. Just switch 'my files' on the autocomplete dropdown to your IPFS service and you should be able to search it with tag counts and everything. IPFS works a little differently to a normal file service in hydrus, so this will need some more work to get those workflows integrated. Also, while an IPFS service in hydrus only knows about your pins atm, in future I would like hydrus to harvest more info from external sources so this search space could potentially populate with remote files that you could then command the client to download. Are you going to build a decentralized furry porn repository on IPFS where everything is tagged nicely in the PTR? A combination yiff party, kemono party, e621, sad panda, gel booru, etc. etc. that is entirely resistant to DMCA or anything else? Unbelievably hype.
(1.61 MB 550x550 happy birthday.gif)

>>17023 >birthday Happy Birthday!
>>17021 Yes, and thanks, this workaround worked! Although the only thing I actually needed to copy over was libgmodule. It worked just by copying that over the one in Hydrus and starting Hydrus. The other 2 seem to be unnecessary. That means that the problem must be an issue with libgmodule.
devanon, would it be a lot of work to implement the mister bones statistics data in the client api? I would like to track statistics about usage over time using either zabbix or influxdb and have some pretty graphs in grafana. Using an API call, getting this data into a DB should be trivial. This would be very helpful, as you can see the growth rate by month, so you can plan the amount of storage needed better. If you do decide to add this, I would like to ask you to not convert anything and always reply with the same unit. For example, 20 MB and 20 GB should always be "megabytes_used: 20" and "megabytes_used: 20000" respectively. Otherwise, parsing may be too hard for really simple monitoring scripts.
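To illustrate the consumer side of what I'm asking for, the polling script really would be this dumb. Note the endpoint path and key name below are made up, since the feature doesn't exist yet; only the access key header and the default port are real Client API things:

import requests

HYDRUS = "http://127.0.0.1:45869"
HEADERS = {"Hydrus-Client-API-Access-Key": "your-access-key-here"}

def fetch_stats():
    # hypothetical future endpoint exposing the Mr Bones numbers
    response = requests.get(f"{HYDRUS}/manage_database/mr_bones", headers=HEADERS, timeout=30)
    response.raise_for_status()
    return response.json()

stats = fetch_stats()
total_bytes = stats.get("total_filesize", 0)   # hypothetical key, ideally a plain integer
print(total_bytes, "bytes,", total_bytes / 1024**3, "GiB")

If every value comes back in one consistent unit, shoving it into zabbix or influxdb is a one-liner from there.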
Why don't Hydrus's file sizes for things line up with what my system or other tools say the file sizes are? Is it because Hydrus is actually using mebibytes and gibibytes instead of true SI megabytes and gigabytes (factors of 1000 like other metric units)? If so, could you change the UI text to say MiB, GiB, etc. instead? It keeps confusing me when I try to do searches for filesize. Even better, could you also add an option to display and use actual SI units? That's what my system uses, so it'd be nice to have consistency.
>>17030 Personally I must say that I'm heavily against changing the display units.
>>17030 >Even better, could you also add an option to display and use actual SI units? That's what my system uses, so it'd be nice to have consistency. In my experience, everyone (and every program) by default assumes a base of 1024 when talking about data, except hard drive manufacturers, because using 1000 as a base lets them sell less while consumers assume it's more (leading to the classic "I bought a 2TB hard drive but it only shows as 1.7TB" computer noob problems). Changing (or at least adding an option for) Hydrus to show the little 'i' would be a nice small touch, but isn't really necessary. >>17029 I think that the sizes returned should be in bytes. Whatever program/script you write to process this data could just divide to get whatever unit you need, and you won't get rounding errors. Also avoids the above MB/MiB problem.
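For anyone who hasn't run into it before, the arithmetic behind that classic complaint is just the two bases disagreeing (plus a little filesystem overhead on top):

advertised_bytes = 2 * 1000**4          # "2TB" as printed on the box (SI, base 1000)
in_tib = advertised_bytes / 1024**4     # the same bytes counted in binary units
print(f"{in_tib:.2f}")                  # ~1.82, which most OSes then label "TB"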
>>17032 Fine by me, I just don't know how hydrus stores this stuff internally.
>>17023 > It will be ten years this week, which is pretty crazy. The rest of us can only aspire to have such a long running personal project.
Dunno if this is a problem with Hydrus itself or with my setup, but after updating my system (including Hydrus) yesterday I cannot use any downloaders. I can access the sites I'm trying to use in my browser just fine, but when I try to use a downloader it instantly fails with a "Could not connect!' error. The log seems to point to urllib3 saying 'Name or service not known'. Is anyone else having this issue?
Are there plans to extend the notes feature (the one where it's possible to attach notes to files)? I'd like to use hydrus for creating my own index of videos from a couple dozen youtube/nnd channels and I'd like to import video descriptions along with the videos into my collection, automatically, like I'd import tags, from the text files I got with youtube-dl (that shouldn't be complicated, riiiiiiiiiight). It'd also be nice to be able to search those descriptions in the future. Hydrus is by far the best-suited way I found to do what I want to do, and it'd be even better suited if there were a way to import descriptions easily. Youtube's search only returns 30 or so relevant results, and there's no option to filter those search results in any meaningful way at all. And most programs out there (peertube, horahora, tubearchivist, aonahara/archive-browser) are shit for my purposes because of things like excessive resource use and bad interfaces, since they try too hard to imitate youtube.
>>17028 Ah, great, thanks for the feedback! I must have copied it last as I was following the trail of errors and concluded that all of these were needed; that's good to know.
The reddit subreddit downloader grabs posts sorted by hot, and not by new like it should. This makes the downloader tech not work well with it and subs to subreddits are completely broken.
(8.78 KB 392x198 ClipboardImage.png)

Any reason why dragging files to another tab sometimes throws this error?
>>17035 Nevermind, I had fucked some systemwide DNS settings. My browser wasn't affected because it had an alternative DNS server set in the browser-specific settings.
>>16993 I'd like to. It will be a 'big job' to add support, so I can't give a time, but lots of users want this and related systems like file alternates will share tech with it. The main things holding me back are: I need archive inspection tech, so hydrus can look inside an archive and recognise it as a cbz/cbr/ugoira; and I need 'multipage' media support, so a single file object in the media viewer can have a kind of 'transverse' navigation, letting you browse those sub-pages without losing your position in the media carousel. >>16994 >>17013 is correct. It is just tag data. I will improve the help to be more clear, thank you for letting me know. >>17002 Thanks. Yeah, I am sorry, this is a limitation of the filter right now. The code is shit, I will be completely rewriting it.
>>17009 Thanks. You have your own preferences, so do whatever you like, but I think, from my experience, rather than moving back and forth over and over, I would have my 'main' db stored on the local SSD and just back it up regularly to the NAS using rsync or freefilesync or whatever. You'll still have a working hydrus 'in one piece' on your NAS via the backup, but you'd also have a 'live' one always easily accessible on your SSD. No need to remove the db from your local SSD, I think. >>17024 This is the super dream. I want the client to know more about remote files in future. I've been talking with some users about a PUR (public url repository), or a PLR (public location repository) that could know about hashes too like IPFS multihash. As you say, cross reference that data with the PTR and you could run a 'gallery' search instantly. I'm still mostly thinking about all this tech, but IPFS is ultimately cool and I want to try more with it in future. >>17025 Yep, check 'making a downloader' section here on the main help: https://hydrusnetwork.github.io/hydrus/help/index.html That help is also on your hard drive with the install, just click help->help in the client.
>>17026 As here >>17042, yeah, hopefully. Basically bootstrap a P2P network for imageboard style files. There is lots more work to do though. >>17029 Sure, I think so! Mr Bones is just a python dictionary (a JSON 'Object') of numbers really, so I can just pass that to you. All the numbers are bytes, and actual integers, so they should be easy to parse. >>17030 Thanks. I will make a job to add some options so you can display it differently. You are right, I use multiples of 1024 and mean to say 'megabyte' and so on. I've seen MiB appear in some other programs recently, I didn't even realise that was the 'proper' way around. I'm probably not going to switch over my words personally when I am talking about data, MiB etc... just looks too weird to me, but I can add options for different units.
>>17027 >>17034 For real it passed by in a blink. I'm 35 now and in the past couple years I've almost fully lost the ability to notice the calendar changing. 9/11 was twenty fucking years ago, I can't believe it. >>17036 Yes, I would like to. Unfortunately it keeps getting delayed. Main list of desired improvements:
- note parsing with downloaders
- note preview/display in media viewer background
- note access via API
- note import from/export to json/xml files on disk
- proper fast note search
- showing danbooru-style translation boxes on images based on note json content
It will take time I am afraid, but I will get there in the end. >>17039 Shit, thank you, I'll check that out. It is just a stupid bug, looks like the container for the page tabs is catching the DnD for some reason.
>>17042 Thanks! Will give it a look.
(321.04 KB 1280x1166 check'em.png)

>>17044 >digits Almost quads, off by one but close enough. >Note stuff It's highly expected.
>>17043 >Sure, I think so! Mr Bones is just a python dictionary (a JSON 'Object') of numbers really, so I can just pass that to you. All the numbers are bytes, and actual integers, so they should be easy to parse. Sounds great, thank you! Just take your time, it really is not an important feature and it can easily wait for next year, if you have anything more important to do or want to take time off! Thanks for the great software!
(11.72 KB 266x161 ClipboardImage.png)

Hey, dev. Does hydrus work with python 3.10? I just got it on Arch. If so, since I'm currently on 457, do I need to upgrade or is it fine?
>>17048 I'm using arch too, so I updated. Hydrus 457 doesn't work with python 3.10 from source. You'll get this error. I downgraded python from 3.10 to 3.9 and everything works fine.
--error starts here
2021/12/13 15:31:05: hydrus client started
2021/12/13 15:31:05: booting controller…
2021/12/13 15:31:05: booting db…
2021/12/13 15:31:05: checking database
2021/12/13 15:31:05: preparing db caches
2021/12/13 15:31:05: initialising managers
2021/12/13 15:31:06: booting gui…
2021/12/13 15:31:06: starting services…
2021/12/13 15:31:06: Running "client api" on port 45869.
2021/12/13 15:31:06: services started
2021/12/13 15:31:54: Exception:
2021/12/13 15:31:54: TypeError arguments did not match any overloaded call:
Traceback (most recent call last):
  File "/home/yourname/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/home/yourname/hydrus/client/gui/pages/ClientGUIResults.py", line 4517, in WaterfallThumbnails
    self._FadeThumbnails( thumbnails )
  File "/home/yourname/hydrus/client/gui/pages/ClientGUIResults.py", line 2540, in _FadeThumbnails
    bmp = thumbnail.GetQtImage()
  File "/home/yourname/hydrus/client/gui/pages/ClientGUIResults.py", line 4829, in GetQtImage
    raw_thumbnail_qt_image = thumbnail_hydrus_bmp.GetQtImage()
  File "/home/yourname/hydrus/client/ClientRendering.py", line 1001, in GetQtImage
    return HG.client_controller.bitmap_manager.GetQtImageFromBuffer( width, height, self._depth * 8, self._GetData() )
  File "/home/yourname/hydrus/client/ClientManagers.py", line 183, in GetQtImageFromBuffer
    qt_image = QG.QImage( data, width, height, bytes_per_line, qt_image_format )
TypeError: arguments did not match any overloaded call:
QImage(): too many arguments
QImage(QSize, QImage.Format): argument 1 has unexpected type 'bytes'
QImage(int, int, QImage.Format): argument 1 has unexpected type 'bytes'
QImage(bytes, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(PyQt5.sip.voidptr, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(bytes, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(PyQt5.sip.voidptr, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(List[str]): argument 1 has unexpected type 'bytes'
QImage(str, format: str = None): argument 1 has unexpected type 'bytes'
QImage(QImage): argument 1 has unexpected type 'bytes'
QImage(Any): too many arguments
QImage(): too many arguments
QImage(QSize, QImage.Format): argument 1 has unexpected type 'bytes'
QImage(int, int, QImage.Format): argument 1 has unexpected type 'bytes'
QImage(bytes, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(PyQt5.sip.voidptr, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(bytes, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(PyQt5.sip.voidptr, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
QImage(List[str]): argument 1 has unexpected type 'bytes'
QImage(str, format: str = None): argument 1 has unexpected type 'bytes'
QImage(QImage): argument 1 has unexpected type 'bytes'
QImage(Any): too many arguments
--error ends here
Did that update to the pixiv artist downloader happen yet? Is there something I have to do if it did, like delete the old parser or something? That url format mismatch between what hydrus-companion sees and what the downloader uses is a thorn in my side atm. One of these days I gotta learn enough html to figure out how to make these things.
>>17049 >>17048 It took them a few hours to rebuild all the repo Python packages, but if you run a full upgrade you should be fine now. Probably also a good idea to rebuild any AUR packages that use Python as well (including Hydrus). t. Hydrus was broken this morning but works now
>>17041 If I tried to import a CBZ file into Hydrus right now, what would happen?
Found an issue with the behavior of the gelbooru downloader, although it could happen to other ones as well. The situation is this: there are two gelbooru items, the "original version" (https://gelbooru.com/index.php?page=post&s=view&id=3587774) and an "edited version" (https://gelbooru.com/index.php?page=post&s=view&id=6353940). The problem arises from the fact that the "edited version" has an exact link to the "original version" as its stated source. So it goes like this: the downloader downloads the "edited version", and associates both the url it was fed (https://gelbooru.com/index.php?page=post&s=view&id=6353940) and the url it parsed from the page as the source (https://gelbooru.com/index.php?page=post&s=view&id=3587774) with the file. After that, if the downloader is fed the link to the "original version", it does not even try to download the file from that page because it fully believes that it has already downloaded that exact file previously. Currently not sure how to avoid this except editing the parser to test the source url for the presence of the gelbooru domain and then discarding it if it is found.
>>17049 I still seem to be having this same error after updating to Python 3.10.1 on Arch. Installed from aur, and I did tell yay to rebuild. Happens when I double-click a tag to view the images. Also get this error when closing the services -> manage services window:
TypeError
setHeight(self, int): argument 1 has unexpected type 'float'
  File "/opt/hydrus/hydrus/client/gui/ClientGUIScrolledPanels.py", line 99, in sizeHint
    size_hint.setHeight( screen_fill_factor * available_screen_size.height() )
I had a great week. I managed to improve the duplicate filter search more than I thought, adding the ability to filter based on pixel duplicates and also pair similarity, and then I was able to rework the video scanbar so it sits inside the video frame and autohides based on mouse position. The release should be as normal tomorrow.
>>17054 >>17049 Thank you for these reports. Even if this is just due to some version of python with too-strict type checking that will be fixed a bit later as >>17051 says, I saw that I was sending a couple of floats here when I should have been sending ints. I believe I've fixed them for tomorrow's release, but if you continue with this in the next hydrus version please let me know if you encounter anything else!
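For anyone running from source and hitting this before the fix lands, the shape of the change is just making sure the values handed to those Qt calls are ints rather than floats. A simplified illustration, not the actual hydrus code:

width, height, depth = 150, 200, 32

bytes_per_line = width * depth / 8       # float (600.0) - newer PyQt/sip rejects this for an int argument
bytes_per_line = (width * depth) // 8    # int - accepted by the QImage overloads

# same idea for geometry code: wrap arithmetic in int() before calls like setHeight()
screen_fill_factor = 0.85
target_height = int(screen_fill_factor * height)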
https://www.youtube.com/watch?v=kWXAL9bHciQ windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v466/Hydrus.Network.466.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v466/Hydrus.Network.466.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v466/Hydrus.Network.466.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v466/Hydrus.Network.466.-.Linux.-.Executable.tar.gz I had a great week. The duplicate filter now supports filtering by pixel duplicates, and videos now fit better in the media viewer. better duplicate search Now that we have pixel duplicate data stored in the database, I can search it. The duplicate filter page now has a dropdown that lets you select 'must be pixel dupes', 'can be pixel dupes', and 'must not be pixel dupes'. It does exactly what you think, and with luck it should be pretty fast no matter what you select. Note that you may not have all your old pixel dupe data calculated yet. I started it for everyone last week, but even if it is doing 10,000 files a day, if you have a big client it will take a little longer. It may also still be working on the ICC profile queue. Hit up database->file maintenance->manage scheduled jobs to see how it is doing. You can rush it there if you like. As I was working in this system, I discovered that at some point I had started recording at what search distance potential duplicates were matched. This is to say if you were set up to search at distance 4 and two files were matched as potential duplicates with a distance of 2, it would save that 2 into the database. I am not totally sure how retroactively accurate this data is, but I've added a control for it to the filtering panel too. You can now tell the filter to only present files that are 'exact match' at distance 0 and it should work mostly ok. Some pretty complicated database work went into this. The most complicated search joins seven different things together. I know it is fast on my test machine, but if you have a really large client with a lot of searched dupes, some search types may be unbearably slow. Please let me know how you get on, and I'll optimise what I can. scanbar The audio/video scanbar in the media viewer is now embedded inside the media frame. It shows and autohides when your mouse moves closer and away. It means you can now go borderless fullscreen on a 16:9 display and finally have a 16:9 video fit perfectly! I have wanted to do this for a really long time, but some of the layout code here is really awkward, and getting widgets to pop on top of each other can be tricky. Thankfully it turns out Qt has a nice way to do it, so I've now hacked this together. There is still a small amount of jank--the scanbar and volume control are currently separate objects, so sometimes they'll show/hide in separate frames, and you might also see the scanbar nub pop in a frame late, but I can work on these issues in future. I will also add some options so you can change the size of the show/hide activation area around the scanbar. But for now, I am pretty happy with this. If you are a keyboard user, please check out the new shortcut in the 'global' set that flips on/off a 'force the animation scanbar to show' mode. I don't really want to bring back the old always-on hanging-below scanbar, so I hope this will be a good enough substitute. But let me know, and if you really hate this new scanbar, we'll see what we can do. 
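To give a rough sense of what that search distance counts: with perceptual hashes, the distance between two files is essentially how many bits differ between their hashes (a Hamming distance), so distance 0 means the hashes are identical. A toy illustration of the counting, not the actual hydrus internals:

def hamming_distance(phash_a, phash_b):
    return bin(phash_a ^ phash_b).count("1")   # number of differing bits

a = 0b1011001011100001
b = 0b1011011011100101
print(hamming_distance(a, b))   # 2 - this pair only shows up when searching at distance 2 or more
print(hamming_distance(a, a))   # 0 - 'exact match'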
full list - video scanbar autohide: - the scanbar that shows below audio and video is now embedded inside the video frame, and it shows/hides based on how close your mouse is to it - I've wanted to do this for a long time, since it will allow you to watch 16:9 videos at true 100% in borderless fullscreen, but the hackery of how the media viewer works behind the scenes means this took more work than you'd think and is still a little jank. there's a small amount of flicker when it pops in and out, which I will work on in future. in any case, please have a play with it and let me know what you think. I expect to add some more options, like for the activation padding area around it, and I will be tidying up more layout stuff throughout the media viewer - if you are a mostly keyboard user, please check out the new 'global' shortcut to flip on/off a 'force the animation scanbar to show' mode - I don't really want to bring back the always-on hanging-below scanbar that just takes up space, but if you try this new embedded scanbar and really hate it, we'll see what we can do - . - more duplicate filter search options: - the duplicates page now has a dropdown on the search for 'must be/can be/excludes pixel dupes'! - the duplicates page now has a number control on the search for what distance the pair was found at! I am not sure how accurate this thing is in all cases, but it seems I started tracking this data some time ago and forgot I even had it - these new options are remembered in your session and _should_ remain fast in most normal cases. I put time into some complicated database work this week to get this going, please let me know if you have any trouble with it
- misc: - when the export filename pattern in the export files dialog means many of the files share the same base and hence need to do 'filename (5)'-style suffixes to be unique, the number here is now calculated much more efficiently. opening this dialog on 10,000 files with an oft-duplicate pattern should now be a jolt of lag but not many minutes - when you choose to 'separate' a subscription with more than 100 queries, you are no longer forced to break it into half - when you do break a subscription in half, it now makes sure to sort the query texts before separating - if you are in advanced mode, the 'selection tags' list on the left of every page can now switch its tag display type between 'multiple media views', 'display', and 'storage'. this is experimental and a bunch of stuff like 'select files with this tag' won't work yet - janitors' petition pages now start with their tag list in 'storage' mode, so you can see the actual tags being changed rather than with siblings and parents calculated - rebalanced some janitor mapping petition weights. jannies _should_ see a smoother balance of 'lots of small petitions' vs 'a few larger petitions' amongst petitions all with the same reason and creator - . - boring cleanup and little fixes: - when you set the checker options in the edit subscription dialog, the queries now recalculate their file velocity better. previously, they would just set 'unknown' and recalc on the next run, but now they will actually recalculate if the query container is loaded into memory or otherwise put a status that says 'will calculate on next run' - removed the 'should be namespaced' reason from the manage tags quick petition reasons. this is now all handled by siblings, tidying up storage tags manually is busywork - when you click 'copy traceback' on an error popup, it also copies the software version, your platform, and if you are on a frozen build or running from source - the logger now prints version number for every block, just before the timestamp - cleaned up a variety of media viewer UI code while working on the scanbar, fixing some misc display bugs - moved pixel hash storage responsibility from 'file metadata' to 'similar files' module - the similar files system now searches pixel hashes when it is called to do any similar files search. they count as 'exact match' distance - when a file gets a new pixel hash, it now sees if any other files have that same hash. if so, it now gets queued up again in the similar files search system, ensuring this match is not missed - misc nomenclature cleanup--since we now have both 'pixel hashes' and 'phashes', phashes are now referred to as 'perceptual hashes' everywhere - massively refactored the primary table join that drives potential duplicates search. it should work a bit faster now and it is much easier to work with - I added pixel dupe and distance search to the standard search results version of the join and the 'system:everything' version, which has several optimisations - silenced some shutdown handling in file maintenance that was being printed to log as an error - fixed some 'broken object load' error handling to print the timestamp of the specific bad object, not whatever timestamp was requested. this error handling now also prints the full dump name and version to the log, and version to the exported filename. 
I was working with a user who had broken subs this week, and lacking this full info made things just a little trickier to put back together - fixed some drag and drop handling where it was possible to drop thumbnails on a certain location of a page of pages that held an empty page of pages but it would not create a new child media page to hold them - misc serverside db code cleanup - fixed python 3.10 type bugs in window coordinate saving and Qt image generation from buffer (issue #1027) next week With Christmas coming up, this will be the last full work week of my year. I want to have just a simple cleanup and small-fixes week so I have a fairly unambitious and 'clean' release before I go on holiday.
>>17057 >The audio/video scanbar in the media viewer is now embedded inside the media frame. It shows and autohides when your mouse moves closer and away. There's a problem. In full screen mode, the scanbar will only show up when the mouse moves in from "outside" the video frame, not from inside it. Let me explain: my screen is 1366x768, so 1920x1080 videos fill the whole screen once fitted, and the mouse won't trigger the scanbar to show up. So, the only solution is to zoom the video out so it displays at a smaller resolution, letting the mouse approach the video's lower border from outside the frame in order for the scanbar to show up.
I have a small handful of files (all from the same Pixiv log) which suddenly don't have thumbnails even after regenerating and are invisible in the viewer, but which produce the proper images if exported. I noticed this after updating to 466, but it could've been like that for a while since they wouldn't really be in any of my most recent searches. Looking, it seems they have ICC profiles... is it possible the ICC profile is fucked in a way that turns them invisible?
(65.09 KB 1637x647 Untitled.png)

>>17003 So I finally got around to doing this, and it thankfully didn't take the "couple of days" I was prepared to leave it running for. Also, when following the instructions (which, I am extremely grateful, included literally every step of the process, and even an image for reference that has its own note - thank you), I was confused about the "New" part of the installer. So the first time around, I hadn't actually installed "ddrescue" at all, which made my heart sink when I tried running it the first time. But thankfully I could just install it without needing to restart my PC or anything. But the part that confused me is that ddrescue didn't find any "bad sectors" by its own standards. I guess it isn't a "bad sector" check, since its only measure for "bad sectors" is whether or not it has trouble reading data? Either way, since I only have a laptop, I had to shut down my PC and swap the bad sector HDD into my secondary hard drive slot, then boot from the primary (non-bad HDD) normally. Windows tried to run "chkdsk" during boot, and I had 10 seconds to hit any key to cancel it. Scared the shit out of me. But, I cancelled it in time, of course. Also, ddrescue produced two other files alongside the image, which have the timestamp and stuff, in case you don't preserve the timestamp of when it was made when backing up the image, I guess. After I copy this image to my first external 5TB HDD for a backup, I'll be ready to try performing recovery on this image. But besides this having happened in the first place, this is the most depressing part. I don't expect anything to be saved, let alone my hydrus ever being able to boot again. So I don't want to hope or expect anything. But yeah.
>>17042 >>17043 >This is the super dream. I want the client to know more about remote files in future. I've been talking with some users about a PUR (public url repository), or a PLR (public location repository) that could know about hashes too like IPFS multihash. >As here >>17042, yeah, hopefully. Basically bootstrap a P2P network for imageboard style files. If you do this you should ensure you can connect over TOR to hide your IP address. I can imagine some of the furry spergs joining and recording everyone's IP.
>>17057 Loving the new features in the duplicate system! Being able to set the search distance without resetting potential duplicates and doing a new scan is a godsend! Not sure what else you changed, but suddenly I have 5000 new potential dupes at search distance 0 while I had it at 0 before. Mostly false positives, but a few real duplicates too!
>>17061 Sorry, I had a lot of stuff to do, so I can only get back to you now. The mapfile contains basic information that ddrescue needs to resume, so it knows which sectors it has already copied, where it stopped, what bad sectors to retry and so on. It is only used when something goes wrong with the power supply and you have to resume copying the disk. A "bad sector" should usually be the drive telling you "I can't read this, so you won't get data." ddrescue counts how often the drive says this and tries the parts of the drive that gave read errors at the end again. The drive did not fail once, so it seems like all the decrypted stuff was written in such a way that it can be read without issues, or the drive has no idea that it returns garbage. Windows also seems to know the amount of used data on the original drive, so the filesystem seems like it is at least not completely dead. In any case, I assume you have copied your image to another drive by now? If not, please do. I will play a bit with my test setup, so I can provide clear instructions on what the next steps will be.
>>17064 Don't apologize, the downtime between your post and mine was far greater. I am just grateful to receive your help at all in any capacity.
>>17050 Yeah I rolled in an update so (iirc) now the 'artist' search delivers new format urls like the 'tags' one does. You don't have to do anything, just update to 462, it looks like. Unfortunately, hydrus does not have mass scale URL conversion tech yet, nor anything like 'read this url as this url instead', so all your existing known urls that use the old format are still in your database and Hydrus Companion (and hydrus in general) doesn't know how the old URLs line up with the new ones. Although it sucks to say, in the meantime your hydrus will slowly inform itself about the new URLs as it runs general downloads in subscriptions and so on, and your HC knowledge will catch up. In future, I want the database to be able to translate one URL class to another and fix this problem. >>17052 A CBZ is actually a zip; a CBR is actually a rar. Hydrus will think they are just zip/rar archive files and import them without a thumbnail. It will also change their file extension to .zip or .rar, so if you have .cbz etc... set up with an external comic reader, it will be more of a pain in the neck to try to open them in that from hydrus. So I'd say avoid it for now and keep using ComicRack or whatever. But feel free to try it out if you like. When I eventually get this scanning tech in, all the imported zips that are really cbzs (basically hydrus will roughly recognise a zip with numbered image files inside) will be automatically detected as cbz and just convert in the client and get a nice thumbnail.
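For anyone curious, here's a minimal sketch of the kind of 'zip with numbered image files inside' heuristic described above, in Python with only the stdlib zipfile module. The extension list and 80% threshold are illustrative assumptions, not hydrus's actual detection logic.

```python
import re
import zipfile

# illustrative extension list; the real check may differ
IMAGE_EXTS = ('.jpg', '.jpeg', '.png', '.gif', '.webp')

def looks_like_cbz(path):
    """Rough guess: a zip whose entries are mostly numbered image files."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        names = [n for n in zf.namelist() if not n.endswith('/')]
    if not names:
        return False
    numbered_images = [
        n for n in names
        if n.lower().endswith(IMAGE_EXTS) and re.search(r'\d+', n.rsplit('/', 1)[-1])
    ]
    # call it a cbz if, say, 80%+ of the entries are numbered images
    return len(numbered_images) / len(names) >= 0.8
```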
>>17053 Wowowow, thank you, this is very interesting! Most of the problems here we've seen are just a twitter profile link as the source, but that's handled in hydrus with URL classes. Ok, I think the best practical answer here is to say if a site provides a source URL that is in the same domain as the site itself, I discard that as a source. It isn't what we consider a source URL, so ditch it. >>17059 Thank you. I am afraid I do not get the same behaviour (it pops up fine for me in borderless and when the mouse starts inside), so something else is going on for you as well. I noticed one instance in my IRL client where the scanbar wouldn't pop up for a bit in the preview viewer until I had moved the mouse out of its activation area, so there is obviously still some dodgy logic here. I will give it a full pass, please let me know if I improve things for you in 467!
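As a rough illustration of that same-domain rule, here is a small Python sketch using urllib.parse; the function name and the subdomain handling are my own assumptions, not hydrus's internal code.

```python
from urllib.parse import urlparse

def should_discard_source(file_page_url, source_url):
    """Drop a 'source' url that points back at the same domain as the page it came from."""
    page_domain = (urlparse(file_page_url).hostname or '').lower()
    source_domain = (urlparse(source_url).hostname or '').lower()
    # treat subdomains of the same site as the same domain for this purpose
    return source_domain == page_domain or source_domain.endswith('.' + page_domain)

# a booru post citing another page on the same booru would be discarded,
# while a twitter or pixiv source url would be kept
```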
>>17060 Can you post some of these files, or their URLs, so I can check them out on my end? >>17062 Yeah, I recommend everyone use some sort of system-wide VPN tech as much as possible. I hope to improve hydrus's internal network engine too, in future, to allow more complicated proxy situations. We have some very basic proxy support right now, but we can do a lot more. I am no TOR expert, but I believe there are ways of hosting a TOR socks service on your own computer, right, and then you can dial that into a program's socks proxy settings and it'll just do http stuff over TOR? Of course if we were bootstrapping a p2p service through IPFS, this network question would be on the IPFS daemon rather than hydrus itself, which would only be talking to that daemon's API on localhost.
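For reference, pointing an http library at a local Tor socks service looks roughly like this in Python. This is a hedged sketch assuming the requests library with socks support installed (pip install requests[socks]); the standalone tor daemon usually listens on port 9050, while the Tor Browser bundle uses 9150.

```python
import requests

# 'socks5h' (rather than 'socks5') makes DNS resolution happen over Tor as well,
# which also lets .onion hostnames resolve
TOR_PROXIES = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050',
}

resp = requests.get('https://check.torproject.org/', proxies=TOR_PROXIES, timeout=60)
print(resp.status_code)
```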
>>17069 Here's a link to the Pixiv log: https://www.pixiv.net/en/artworks/74252729 Word of caution that it's explicit and gay. I wish I had a different example, but I haven't encountered any other files that exhibit this behavior.
>>17063 Cool! The new system would have (re)done some 'potential duplicates' searches as it calculated pixel hashes, and the part where it does duplicate potential discovery now consults pixel dupes as well as the old similar files system. I'm not sure why it is finding quite so many new ones. It could be it is really finding some new ones--maybe when it did the first search, my search code was borked and it missed them that time--but as I said in the release I think some of the search distance record is slightly inaccurate. Some user-driven events like 'set these as potential' set a distance of 0, so maybe there's some odd legacy stuff going on here. If you come across some more false positives at 0 distance, could you post some here, or post URLs to them? I don't need a hundred, but a couple would be useful to check out on my end. >>17070 Thank you very much! No worries about the content. I get the same thing, they show blank. I'll check these out this week.
>>17067 So can Hydrus import literally any file even if it can't do anything with it?
>>17072 It cannot. I tried importing an .AVIF a while back and the file importer failed to parse it, rejecting it as an unknown file type.
Few suggestions for Hydrus, and 2 questions: >Add tags based on search Like doing system:filetype is video and applying the medium:video tag to all matching files. >Invert tags based on search To extend the above, the ability to invert them, so with system:filetype is video, anything that doesn't match will have that tag removed. Preferably offer these as a monitoring option too, so users can have them automatically apply to newly imported files. This can help deal with all the millions of photos that are labeled as videos and so on. Although maybe it can be a bit dangerous for users who don't know what they're doing? eh. But god I HATE searching medium:video and being presented with 5,000 photographs. >Flag Files Another idea would be the ability to flag items on the PTR. Basically this flag would create a queue of files for all users who have those same files. Users could then go through this queue at will. The flag would basically indicate that the file could use some tags; it'd help point out files missing tags. The queue would be interactive, kinda like the duplicate checker: it could show the files and, on the right, offer a box to enter tags. Users could also be given the option to disable flags entirely, of course, as well as the option to choose what type of files they want to show in the queue (for example if they only care about adding tags to video files, etc.). This would help immensely for finding and tagging files with few or no tags, especially for users with large collections. >DB Backup Lock When users have databases in multiple locations, it would be nice to be able to do backups without needing to close hydrus, hence the ability to lock the db with Hydrus open. Basically you would have the "update database backup" button back, and when you click it, it would prompt with something like "The database is spread across multiple locations. Hydrus does not support backing this up automatically. Please back up manually." and then present a button to lock the database. It would remain locked until the user presses the "Unlock database" button. This could be used to safely back up the database, media files, thumbnails, etc. >Hydrus content lock The option to lock hydrus (like password protecting it) does exist as a setting; however, while locked it can't complete any jobs or imports, or run anything such as downloaders and watchers. The ability to lock hydrus but keep shit running in the background (just make it inaccessible to prying eyes) would be insanely nice. >Interface I know an interface redesign was done a while ago, however on Windows the interface still looks like absolute fucking garbage honestly. Mixed text capitalizations all over the place, crappy boxy buttons. On linux it doesn't look bad because of qt5 (or gtk, whatever it uses, I forget). >Removing/Adding tag When you add a tag via right click file > manage tags: let's say I add the tag 'test'. I double click it, it's added to all selected files. Now I click it again, it's removed. There's zero indication it was removed. Please add the option to enable a confirmation window when removing tags from this box. If the tag is already synced to the PTR, it asks for a reason to remove, but not if it's not synced.
>Unnecessary junk Tons of unnecessary features are built in, just added bloat. Just make these user-installable plugins or something. For example, there is no need to have the local booru built in; just make a plugins menu, and users can click install if they want it. You could realistically do the same with the PTR, Client API, UPnP, downloader, watcher, duplicate checker, etc. All these components could be removed so Hydrus starts minimal, giving users the power to only install what they want. They can have a super minimal install, or have absolutely everything with the click of a button. I get this is a lot of work, but PERSONALLY I'd strive for this long term if it was me. Much better security, less bloat, etc. >Question about rebuilding hydrus Question regarding users who have the db, media files, and thumbnails all separated. What folders are needed to successfully and safely rebuild hydrus on a fresh install? Do you just download hydrus, then add the media folder and the database folder locations, and hydrus can automatically generate thumbnails, right? Do you even need to import the database folder if you solely use the PTR? Because it will grab all the tags you've uploaded. Thanks anon, sorry for the wall of text. Wanted to be as clear and thorough as possible. Please keep up the good work, very appreciative of what you do. I love Hydrus, but it is not perfect. I hope to see it perfected.
One more idea >Tag spaces as individual tags A suggestion to improve tags is to have tags with spaces act as separate tags when editing. Kinda confusing, but let me give an example. Let's say I make a tag: 'shaven vagina'. Hydrus can automatically recognize the space and treat shaven and vagina as two separate tags. So basically I can tell Hydrus, if anyone makes the tag 'shaven pussy', it will automatically change to 'shaven vagina', because Hydrus knows vagina is the ideal tag and pussy is a sibling. Basically it's looking at 'shaven pussy', seeing two tags 'shaven' and 'pussy', seeing if there is an ideal for each of those tags, and if there is, just swapping them in. This way you don't have to deal with people who have multiple tags like 'shaven cooch' 'shaved pussy' 'shave vagina' 'vagina shave' etc. Based on those above, as long as vagina already has the siblings cooch and pussy saved to its tag, it will automatically swap to the ideal (vagina) when those tags are presented in any string. It would also do what I showed in the last one (vagina shave), where it can look at the ideal but also swap the order around to "shaven vagina", because it found the ideals of those 2 tags (vagina > vagina; shave > shaven) and also realized after correcting that there is a tag that matches those two, so it reorders them. Basically my point is, this can solve a shitload of issues with misspellings, different naming schemes, or more detailed tags like "nipples poking through shirt" and such. It's easy to expect ideals for common words, but when you start getting into strings then it gets a bit much. Maybe this can be done on an optional per-tag basis instead of automatically, because I 100% can understand how this can be very prone to errors. Maybe allow the user to click a button or something to enter a separate mode. Hope I explained that well enough.
is hydrus affected by log4j vuln?
>>17068 >Thank you. I am afraid I do not get the same behaviour (it pops up fine for me in borderless and when the mouse starts inside), so something else is going on for you as well. I noticed one instance in my IRL client where the scanbar wouldn't pop up for a bit in the preview viewer until I had moved the mouse out of its activation area, so there is obviously still some dodgy logic here. I will give it a full pass, please let me know if I improve things for you in 467! Moar info: The preview window has the same behavior: the scanbar will only pop up if the lower edge is approached from outside the video frame (sector highlighted in red), as seen in pic 1. Inside the video frame the scanbar will only pop up under the tags column (sector highlighted in red), as seen in pic 2. It also pops up when approaching the lower right vertical edge (sector highlighted in red), as seen in pic 3.
>>17069 > I am no TOR expert, but I believe there are ways of hosting a TOR socks service on your own computer, right Correct. On linux you only need to install the torsocks package and it 'just werks'. On windows, if you run the tor browser, it will accept socks proxy connections. They used to have one you could set up as a windows service, but not anymore. I wish there were a "torsocks on windows" application. >you can dial that into a program's socks proxy settings and it'll just do http stuff over TOR? This is technically doable but has some issues. I tried running hydrus over TOR briefly to download from e621 and it was unusable. Several websites either require connections from TOR exit nodes to solve a captcha or block them entirely. To make matters worse, because you switch exit nodes regularly (every 10 mins iirc), even if you don't have to solve a captcha, that specific node might be temp banned for high usage or any number of other things. With hydrus specifically there are no automatic retries or knowledge of this, so it would essentially fail as many queries as possible until the nodes changed, and you would manually need to retry them. The lesson here is that trying to rely on TOR for access to clearnet sites with any kind of reliability is probably foolish. >Of course if we were bootstrapping a p2p service through IPFS, this network question would be on the IPFS daemon rather than hydrus itself, which would only be talking to that daemon's API on localhost. Right. You might be interested to know there's currently a booru that works off of IPFS and is hosted over several meta-networks including TOR: http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion/ Their source code is also available, so it might be useful. I appreciate you doing support like this on a platform I can post on from TOR btw. If this were github or whatever I wouldn't bother.
(257.50 KB instructions.zip)

>>17066 Alright anon, I think I found a way to inspect the image on windows. Please only do this once you are sure you have two images, one we can work on (and potentially destroy) and one image that we can copy over and over if we mess up the first one. To make sure there is no possibility of a mistake, please put these two images on separate drives and unplug the drive with the known good image when you are not using it. Never operate on the known good image directly, always copy it! Remember, one wrong instruction here and the image is unusable! First, we want to list the files in the image as is, without any sort of file system check. We do this by downloading a third-party tool called testdisk. You can find it here: https://www.cgsecurity.org/Download_and_donate.php/testdisk-7.1.win64.zip I encourage you to read up on it, it is a very reputable tool. You can download the attachment here, it contains some screenshots I will be referencing. Extract the zip-file somewhere (I extracted it in my Downloads directory). Then, copy the path from the explorer window, open cmd as admin and type cd <path/to/your/extracted/zip>. After you are in this directory, type testdisk_win.exe <Path/to/your/Image/that/we/can/destroy>. Make sure that you are entering the path to the image. Your screen should look like 1.png. Double check that you have two images before hitting enter here! Execute the command, pick your disk-image. If there are multiple disks, you did not specify the image path - press q and add the path of the image. It should look exactly like 2.png (filenames and sizes can differ, of course). Next, you need to select the type of partition table. The hint guessed correctly for me, but it should either be "Intel" or "EFI GPT". I think that win7 already supported GPT, not sure if it is used as default though. You probably don't need to analyze anything, just switch to "Advanced" (4.png), pick the biggest partition (don't worry about it saying broken), and pick "List" in the commands at the bottom. This will hopefully give you a directory listing similar to mine (6.png). You can enter directories, picking ".." will bring you back up one level. From there, take a look around and see if there are some files that you are currently missing still present in the image. If there are, select the file you want to restore, hit "c" on your keyboard - (copy_file_1.png). You need to pick a destination for recovered files (this is where recovered data will end up - see copy_file_2.png). Hit c again to confirm the destination. This should bring you back to the directory listing, hopefully with a message that the file has been restored successfully. If this works (check the file you restored), go to your root directory, press a and then C (That is SHIFT + C). This will copy all files to your destination directory, hopefully everything will be there. When you are done, just hit q until you are back at the cmd prompt, then quit. Please inspect whether or not the files are corrupted - they very well might be. As always, if something does not work the way it should, please ask before trying anything out of the ordinary. Good luck!
(31.48 KB 1429x505 Untitled.png)

>>17079 Hello, thank you. Thank you for the step-by-step instructions. It's currently doing its thing, and, I don't mean to jinx it, but of what it's attempted to recover so far, it hasn't failed to recover anything; it's only succeeded. Also, sorry for not answering before on whether or not I've copied the ddrescue image to my other 5TB external already- I had, shortly after I made the post saying that I'd created the image. Sorry. So the first 5TB external has the old image created from non-standardized software, plus the ddrescue image, and the second 5TB external only has the ddrescue image. Anyway, I'm making this post before testdisk finishes its thing because I am confused about what it's copying. Also I didn't try only copying one file to check if it was corrupted; I just selected all and copied all. But, what I'm trying to ask is, you can see >>16998 that after I had put the non-standardized image of my bad sector boot HDD onto a replacement boot HDD, then performed Windows chkdsk on said replacement HDD, in my hydrus database alone chkdsk poofed 1834 files (4,722,022 - 4,720,188), which was 22 gigs (1,778,368,525,881 - 1,756,524,743,083 = 21,843,782,798). That was virtually the entirety of what chkdsk poofed from the entire drive, yet testdisk has copied way more than that as far as file count is concerned, in only my hydrus database. My output destination was my boot drive, which only has 60 or so gigs of free space. Right now I have about 30 left. Should I just cancel and restart, using the 5tb external which has 2.72 TB of free space as the output destination instead? I only saw a few dozen or so things to select to restore. I didn't think it would possibly copy my entire hydrus database. I have no idea what it's copying. But, again, I don't think my less than 30 gigs of free space left on my boot drive is enough. If I get down to less than 20 or so without you clarifying, I'll just restart it. Also I didn't know how to specify a new folder destination anyway, cause I didn't make one beforehand, so it's kind of messy, just defaulting output into the folder testdisk's own files already occupy.
(31.76 KB 1429x505 Untitled.png)

>>17080 Responding to myself just to say, it's so slow, and I have no clue what it's copying, that I'm just cancelling it now, even though I said I'd cancel it if my boot HDD space goes below 20 gigs, and so far I still have 25 left. I'll just do it overnight. My HDD freezes a lot when I'm writing data to it sometimes, even though my current boot HDD is a new replacement that's just a month old. It's just cause it has so little space left. So it sucks to do it during the day. I'll redo it overnight. Considering most of what it copied wasn't ever corrupted data, I can't even confidently say I was "never punished", or anything. I guess it's just copying everything for convenience when repairing. We'll see after it probably copies my entire 2TB hydrus database.
(111.31 KB 1238x762 bruh.png)

>>16965 Safe to assume these are ALL false positives? Got it from here: https://github.com/hydrusnetwork/hydrus/releases Version 465 has more than 466.
>>17080 What this mode does is pretty much the same thing as explorer would do, minus all the safety features. So it allows you to operate on a "broken" filesystem, while windows will not let you access the files without "fixing" the filesystem first. As you saw, chkdsk considers it appropriate to delete all files that it cannot recover. I, on the other hand, think that we should at least try to get all files off that disk; whether they are broken or not is for us to decide. You pretty much went ahead and did the equivalent of opening explorer, hitting CTRL+A and copying everything to some location. Just without explorer, because explorer does not let you operate on broken filesystems. I have had fairly bad experiences with windows chkdsk, so right now we try to get data off of the "broken" (at least what windows thinks is broken) state, in the hopes that chkdsk is just stupid and removes files for no reason. Microsoft probably thought very carefully about the implementation of those tools, so I would not get my hopes up to recover any missing files at all, but it may be worth a shot. This will copy every file you selected off that drive image, maybe even a few more, if the filesystem really is corrupted. Then, you can see if the files that were recovered were just garbage or actually usable. >>17081 This is normal if the file system fills up, and will happen with any modern filesystem. There is sadly no way to fix this; the workaround is simply to "have less stuff". You can look up "filesystem fragmentation". Feel free to cancel, this will take up the same amount of space as your original files did, or a bit more, since you are pretty much copying them to a new location. The plan is to have two copies of the data (recovered and your existing data), compare them (with a program, not manually), remove files that match existing data (you already have those), and let you decide on the rest that were "recovered".
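When it gets to the comparison step, something like this Python sketch is all that's meant: hash both trees and list the recovered files whose content doesn't match anything already present. The directory paths are placeholders.

```python
import hashlib
from pathlib import Path

def hash_tree(root):
    """Return {sha256 digest: one representative path} for every file under root."""
    digests = {}
    for p in Path(root).rglob('*'):
        if p.is_file():
            h = hashlib.sha256()
            with open(p, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), b''):
                    h.update(chunk)
            digests[h.hexdigest()] = p
    return digests

existing = hash_tree(r'E:\existing_data')      # placeholder paths
recovered = hash_tree(r'F:\testdisk_output')

# recovered files whose content matches nothing you already have: these need a manual look
for digest, path in recovered.items():
    if digest not in existing:
        print(path)
```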
>>17083 It sounds like there was never any hope to begin with. I thought I was over this feeling. At least I can go through the motions, unlike before, when I felt like laying down and dying. But it sounds like nothing will be any different on the other side.
>>17084 Anon, I wouldn't say that. I just don't want to give you any false hope here, since I don't want you to keep feeling terrible every time you try a new "solution". In my mind, it is always best to treat stuff like this as a lost cause; if you don't put any hope into it, you won't feel that bad when it goes wrong. To be honest, from my perspective, this is well worth a shot. You've got nothing to lose by trying and pretty much only have to wait, since the imaging was the hard part. Now it's just throwing shit against the wall and seeing what sticks. Also, I don't want to shit on microsoft for not getting chkdsk right, since I couldn't do it any better. chkdsk destroyed data where I work, and the data was mostly fine in the image-backups done a couple of days prior. There were no further investigations into what exactly happened, thanks to management. Let's not forget that you have a really old OS here; some features just weren't as mature then as they are now, and that may include file systems and chkdsk. Again, I urge you to try this at least. I don't want to get your hopes up, but I also really wanna see whether this is a chkdsk issue or the drive was genuinely bad. The disk seems to be okay, you did not get any read errors this time around, so it could really be chkdsk not dealing very well with the way veracrypt wrote the partition, or just acting up in general. I will probably be pretty busy the next couple of days, so you can take your time, if that helps.
Ever since v459, my nijie.info downloader returns a 502 error. I looked into this issue and it seems to have to do with the "Range" HTTP header. Could this be made optional, for example in the header overrides? Maybe if you set "Range" to the string "None", the functionality would be disabled for that specific site? I speculate that this may also make the fix portable, as the headers are included in the downloader pngs. I tried it with mitmproxy: if "Range" is removed, it works; if it is added with any value, the server returns 502. This is not a very important issue in my opinion, since apparently not many people are affected, but it would be nice to have it fixed eventually.
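For anyone who wants to reproduce the check without mitmproxy, this is roughly what was tested, as a hedged Python sketch with requests; the URL is a placeholder and a real test would need nijie login cookies, so only the header difference matters here.

```python
import requests

url = 'https://nijie.info/view.php?id=0000000'   # placeholder page url
session = requests.Session()                     # a real test would need nijie login cookies

without_range = session.get(url)
with_range = session.get(url, headers={'Range': 'bytes=0-'})

print('no Range header  :', without_range.status_code)   # works
print('with Range header:', with_range.status_code)      # reportedly 502 with any Range value
```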
>>17078 >Right. You might be interested to know there's currently a booru that works off of IPFS and is hosted over several meta-networks including TOR: http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion/ Their source code is also available so it might be useful. Speaking of the Permanent Booru, here are some page classes, url parsers, and tag search for it. These are based on the onion and its gateway, so you'll need to set up Hydrus to use Tor (domain specific proxy settings when). The tag search isn't perfect, as you need to use %20 instead of spaces in tags that have them, and I couldn't do MD5/SHA256 lookups, as PBooru takes the hashes directly in the URL and not as parameters, which Hydrus can't seem to handle.
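The %20 workaround is just manual percent-encoding; if you are building the search URLs yourself, Python's urllib.parse does it for you. The query path below is a placeholder, not PBooru's real route.

```python
from urllib.parse import quote

BASE = 'http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion'

def tag_search_url(tag):
    # 'blue sky' -> 'blue%20sky'; the path and parameter name are placeholders
    return f'{BASE}/search?q={quote(tag)}'

print(tag_search_url('blue sky'))
```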
(245.40 KB 860x644 consider the following.png)

>>17082 That has already happened a lot of times over the years. The explanation given is that the anti-virus profiteers are a mafia asking for developers' money in exchange for greenlighting their programs. Since Hydrus may connect to the network in order to download stuff, the anti-virus software red-flags it until a review of the code is done ---> meaning blackmail money.
>>17088 Okay, I'll take the word of a random anon over the kikes any day. I'll just add it to exceptions.
>>17087 neat.png
>>17068 It doesn't really work at all for me if I set the zoom to fit the canvas size. The bar won't show up. I have to zoom out to see the bar, just like the other guy said, and I am on a 1080p screen.
(192.53 KB 800x800 pony - hacker.png)

>>17089 I understand that Hydrus is more than a database and a downloader and might also function as a server if you wish to share files and tags, so it is natural that the anti-virus will sound the alarm. Soon OP will show up and confirm that the anti-virus software is reporting false positives.
After updating to 466 all images are showing up as black for me. I'm on arch+xfce. This is in the preview and in the viewer, but not in thumbnails.
>>17057 Not sure where you will see this first, so I'll post in both places. As far as the media scrollbar is concerned, I GREATLY prefer to have it below and always on. I am not using hydrus to watch shows or long videos; for me the videos are essentially just replacing gifs at this point. I love being able to see how much longer they have, and being able to scrub through them without covering a good portion of the video. If there is a way to have a toggle between visible and hidden, on video or below video, I think that would be best, because as it stands now, it's annoying to use to scrub things because it's not always there, and at least for me it's also annoying to have to keep my mouse there to see how long the video will last. I should also mention that the way I use hydrus is I have a 4k screen, so I have the left half as media viewer area, and the right half as either the hydrus thumbnails or a youtube video; I never have it just fullscreen.
(72.51 KB 2587x542 2.png)

>>17085 Anon, I was gonna complain a lot, but I guess I'll just bite my tongue on it, since I guess I deserve every mistake I made + every step of this process, and I should be grateful I have even been told of and guided through the correct process after all my mistakes. But I ran testdisk again before going to bed, this time from the 5tb external, so I could use said external as an output destination. I woke up to being completely out of ram. I have 32 gigs of ram, and usually with firefox alone I am at 50% usage. But after closing it, I was still at max ram usage, and it's been that way for the hour or so I've been awake. Maybe it's because the version of testdisk you linked me wasn't the latest (you linked "7.1", while the site also has a "7.2-WIP"), but I assumed the difference was a stable release vs. a beta or something, so I would be better off with the stable one. But as is, since it took 8 hours to copy 200 gigs of data, my computer will be at max ram for 72 hours, which is three full days. I literally can't even perform basic tasks at max ram. I guess I'm not technically at "max" right now, since I'm capable of formatting this post. But when I first woke up it took several minutes for my cursor movement to even turn on my monitor, since it turns off after a few minutes of inactivity. It took minutes to open task manager, this sort of thing. The ram usage isn't even attributed to any program, even though only testdisk is open. So it makes it look like something is very wrong. But that's what I get, I guess.
Can anyone make a LewdWeb Forum downloader? Always tons of leaks there. https://forum.lewdweb.net/
>>17095 I'm able to do this on someone else's laptop for three days, so I cancelled it on mine after 297k files (243 gigs), and my ram usage instantly went down to 13% (~5 gigs used). At no point did testdisk properly display in task manager that it was using all that ram. So this was fucked. But, again, I guess that's just what I get. I should just consider myself lucky since it could always be worse.
>>17095 There's nothing wrong. Data recovery is an extremely intensive and time-consuming process. Every individual bit and sector of data has to be read and analyzed. If you got 200GB done in 8 hours, that 5TB drive is gonna take a full 200+ hours, minimum. That's 8.3 days. Borrowing a laptop for 3 days ain't gonna cut it.
>>17098 You scared me for a second, but it's only a 2TB image on a 5TB external hard drive; it's not a 5TB image that needs to be copied over. But, I understand. Thanks for not being brutal about it. I understand the mistakes I had to make to get here, and make my horrible situation even worse along the way. The part I thought was wrong was that the ram being used wasn't being attributed to the recovery program "testdisk" I was using, yet as soon as I closed it, it stopped maxing out my ram. The laptop I'm using instead only has 8 gigs of ram, rather than the 32 testdisk managed to max out. So it might slog even more than I expected. But at least I don't have to use it while it's doing its thing.
>>17099 I can't quite tell you a reason why it was using up all that memory, since I don't have experience with windows (much less 7) memory management or the way testdisk functions internally. On Linux, this is usually due to your computer caching disk reads and writes. It is quite common for operating systems to use all available RAM to minimize disk access, since disks are so much slower than RAM. This can, under certain circumstances, cause a system to become non-responsive, especially when interacting with FAT filesystems on Linux. While I don't think this operation in particular should be this intensive, >>17099 is correct that data recovery can in general be a very expensive process. Sorry, but short of buying a new computer, the only option you have may very well be to just wait for it to finish. I should also make myself clear: if this does not work, you will have to do this again and again and again, each time with a slightly different approach, but you will always end up with a bunch of data that needs to be read and written, which will take a lot of time on hard drives. This may very well take a couple of weeks. Do you maybe have a raspberry pi or similar lying around that you can use to do this instead of your main machine? I can't help with experience here, I have never done data recovery on anything with windows, I can just try to emulate my process using GNU/Linux.
>>17100 I do actually have two other laptops. One of them is functional but some of the keyboard keys stopped working, and on the other one I caused water damage to the mouse, but it seems functional otherwise. The water damaged laptop is basically a repressed memory of mine. I used to eat over my laptop, then wipe it down with a small towel + cleaning spray. But one day one of the clicks stopped being responsive, cause the cleaning fluid got under it. I poured water on it instead, thinking water drying would be better than the fluid drying. Instead the entire PC shut down. I turned it upside down and never booted it for weeks. When I did boot it again, IIRC neither mouse click worked, the mousepad itself kept constantly registering movement, but nothing else seemed damaged. I never imagined I would ever damage computer hardware. I chalked it up to my being vulnerable. It's one of the hardest things I can think about. But I still have it, I suppose. I think the 500GB or so hard drive I used to use as a boot drive got formatted and used as a secondary hard drive. But, there was nothing wrong with it, so, I can produce a second laptop of my own if I really have to.
downloads from e621 not working please fix
>>17102 10/10 bug report, would mark won'tfix again.
>>17100 Tbh the 8gb ram laptop has been frozen for a few hours now. I tried to check up on it by moving the mouse cursor, and the screen turned on, but it only displayed black. It's set to turn off the screen after 15 minutes of inactivity (I specifically disabled it going into sleep mode while plugged in, which it is). So for hours it's just been frozen, presumably stuck on the same task. I don't think 8 gigs of ram is enough to have the same estimation for completion as 32 gigs. If anything I should use the 8 gig ram laptop myself, and let testdisk max out the ram on the 32 gig laptop. I don't know how I'm going to do this. I didn't expect the ram usage to be uncapped. If it matters, the first time I ran testdisk on the 8 gig ram laptop, I tried changing the resolution of the cmd window, which made it glitch and stuff, even when I matched it pixel-by-pixel to another cmd window that I kept at its default size. So I searched why, and found I had to set cmd to "legacy mode". Maybe on non-legacy mode it still works, but without uncapped ram usage. But then, that would just make it slower. I might just eat the 3 days of testdisk maxing my ram on my 32 gig ram laptop. But I can't live like this. If I can't boot my hydrus after doing this once, I'll just give up for the foreseeable future.
>>17102 Fix it yourself. The myimouto post parser broke recently, I just went in and changed a single thing in the parser to search for links with the id "highres-show" instead of "highres" after inspecting the new HTML. Now it works again. I don't know if Hydev can update downloaders during updates, but that would be worth looking into. Also, there's a cosmetic typo in the parser name (something like highre instead of highres).
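The hydrus parser itself is edited in the downloader UI rather than in code, but the change amounts to this, shown as a hedged BeautifulSoup sketch under the assumption the page structure is as described above.

```python
from bs4 import BeautifulSoup

def find_highres_url(post_page_html):
    soup = BeautifulSoup(post_page_html, 'html.parser')
    # the link's id changed from 'highres' to 'highres-show' in the new page HTML,
    # so the old selector stopped matching; checking both keeps it working either way
    link = soup.find('a', id='highres-show') or soup.find('a', id='highres')
    return link['href'] if link else None
```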
>>16969 >I don't think 8 gigs of ram is enough to have the same estimation for completion as 32 gigs. If anything I should use the 8 gig ram laptop, and let testdisk max out the ram on the 32 gig laptop. Please don't take this the wrong way, but you probably don't have enough background knowledge to make such an assumption. It is really difficult for computers to tell you when they are done in these kinds of scenarios. There is no reason to assume memory usage has any kind of impact, because these etas can change at any time. If you want, you can also run testdisk on a linux live cd; that should allow you to browse the internet next to it at least, assuming the memory usage bug does not happen there. I can recommend using systemrescuecd; it has testdisk and firefox already preinstalled. The reason it was "glitched" is because the ui does not like it when you resize the terminal emulator. Most applications on windows probably wouldn't like it if you just changed the screen resolution. Maybe you can get some help from the testdisk forums (https://forum.cgsecurity.org/phpBB3/) regarding the memory issue.
>>17106 I understand. I don't have any clue what testdisk is doing. I hope it's just a bug that it maxes the ram on both Windows 7 and 10 so far. I'm reading up a little on systemrescuecd, but if anything I would try that on the 8 gig ram laptop (which has still failed to even turn the screen back on after I moved the mouse cursor on it nearly a day ago). Since my hydrus can't boot anymore, I can in practice meet all my expectations using linux. But if it's truly a memory leak, I would prefer to just leave it doing its thing on another computer entirely. I searched "ram" on the forums and didn't immediately spot any relevant results in the first two pages, but I then searched "memory" and found this thread on the first page of results: https://forum.cgsecurity.org/phpBB3/viewtopic.php?p=34642&hilit=memory#p34642 They were also using version 7.1, but on a 3TB image (as opposed to my 2TB image). But it sounds like they were using "Photorec" rather than "Testdisk". Either way, I don't think there's any hope of the memory leak not happening, since they experienced the same on linux. If it's actually a memory leak, the reply in the thread I linked saying: >Please provide a proof for your assumption of a memory leak. >A memory leak is caused by the lack of freeing memory that is not used anymore. Then it might simply /never/ free the memory it's no longer using, so the closer it gets to completion, the longer it will take. I assume. I don't even know what I'm typing at this point. If nothing else, at least my 32 gig ram laptop was still responsive (even if the ram was maxed) after 8 hours of testdisk running. The 8 gig ram laptop has still failed to turn on the display from shutting off from inactivity for over half a day now. Logically I should at least see if testdisk will make the 32 gig ram laptop as unresponsive as the 8 gig ram laptop before finishing.
>>17107 A memory leak means the program will keep using more and more memory, until it is killed by the OS. Can you hear the HDD processing something? Photorec is a way worse solution than testdisk, but sometimes the only way, since photorec can pull all files off a drive without using any kind of file system information. That means you get one nice big directory with millions of files and random file names. Systemrescuecd is something you put on a USB drive, boot off of, and run in "live mode", meaning no persistent user data between reboots. So no browser history, installed programs and such. I just thought you could use that if you had to return the 8 gig laptop, as a way to keep using your computer while testdisk churns away in the other window. Did you only have hydrus on that drive that you wanted to restore? Because your "database" (the part that hydrus needs to start) is actually just 4 files; if these work, hydrus can pretty much tell you which files are missing. What you are looking for are 4 files called "client.caches.db", "client.db", "client.master.db" and "client.mappings.db". They should be in the installation directory. If you restore those with testdisk and put them where the old files are/used to be (do not overwrite!), you should be able to start hydrus, then go to database > file maintenance > manage scheduled jobs. If hydrus starts, go to "add new work", use "system:everything", pick "if file is missing/incorrect, then move file out and if has URL try to redownload". Click on "run this search", then "add job". When you are done, switch back to "scheduled work" and hit "do all work". Again, please don't go crazy here, these 4 files are all you need for now. After the job is done, you should see what files were missing in the logs, and you just restore those to the place they belong. You will probably get an error for each missing file. If you can hear the drives rattling in the 8gig laptop, I would not touch it. If it does not do anything, feel free to cut the power. The issue seems to affect Windows and Linux. Also, fyi, hydrus usually works just fine on linux.
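Not part of the steps above, but if you want a quick sanity check on the four recovered db files before putting them in place, Python's built-in sqlite3 can run a quick_check on each; this sketch assumes you run it from the folder holding the recovered copies.

```python
import sqlite3

DB_FILES = ['client.db', 'client.caches.db', 'client.master.db', 'client.mappings.db']

for name in DB_FILES:
    try:
        # open read-only so nothing gets modified or accidentally created
        con = sqlite3.connect(f'file:{name}?mode=ro', uri=True)
        result = con.execute('PRAGMA quick_check;').fetchone()[0]
        print(name, '->', result)   # 'ok' means the file structure looks intact
        con.close()
    except sqlite3.DatabaseError as e:
        print(name, '-> error:', e)
```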
>>17108 Thanks for the patient reply. I learned a lot. I know Windows 7 became unsupported a long time ago. I know I'm now vulnerable to 0-day exploits and shit I'm not educated about even being possible. I tried updating to 10, but my drivers prevented me from doing so, and I didn't understand what to do with the "BlueScreenView" screen telling me which drivers were responsible. I just gave up on updating. But, I know staying on 7 isn't ideal in the present, and it will only get worse. If I can use hydrus on linux, then, there's nothing binding me to windows 7, or any later versions of windows. I'm completely uneducated on linux, besides hearing that if you illiterately run commands you don't understand, there is no failsafe preventing you from wiping your entire hard drive. But if I can have my hydrus on linux, then, that's the main thing. But I don't expect my hydrus to ever boot again. Hearing you specify that it's just four files I need to boot my hydrus kind of made me anxious. It's not the same abject awfulness as realizing the bad sector happened to my database at all, but, it's more like fear. This will effectively instantly tell me whether my hydrus can boot again. Provided I can't let the 8 gig ram laptop finish, I will isolate those files to try recovering them on the next attempt at running testdisk. I can hear both the 8 gig ram laptop + the 5TB external (with the ddrescue image of my 2TB bad sector HDD) making noises. So, I won't interrupt them, then, even though when I swiped the touchpad over a day ago, merely trying to check the progress, the screen turned on, but only displayed black, and has still failed to display anything otherwise. The laptop even has a caps lock light indicator on the keyboard, and pressing that doesn't switch the light on. But for as long as I can continue bumming this laptop off of whom I borrowed it from, I won't interrupt it. Thanks again for the comprehensive replies across the board. Even though I had to severely fail myself to end up in this situation, at least I can do better for myself going forward. Still just going through the motions to see what can be done despite my continued mistakes.
I know this might be asking a lot, and it's not exactly Hydrus specific, but if anyone has time on their hands, it would be cool to be able to browse this with Kuroba Ex. There's just no 8chan.moe support; the dev accepts site PRs, so if anyone can add 8chan that'd be kinda cool https://github.com/K1rakishou/Kuroba-Experimental
>>17108 >Can you hear the HDD processing something? >If you can hear the drives rattling in the 8gig laptop, I would not touch it. If it does not do anything, feel free to cut the power. Sorry for responding to you twice, but I wanted to amend what I said I heard in my previous reply: >>17109 >I can hear both the 8 gig ram laptop + the 5TB external (with the ddrescue image of my 2TB bad sector HDD) making noises. So, I won't interrupt them What I "hear" in the 8 gig ram laptop + 5tb external is both devices being on. I don't hear any distinct noises otherwise. The laptop has an SSD, which I'm not familiar with, since I've never used one before, but the 5TB external is merely on and making a monotone droning sound, I can't hear anything otherwise. This is in contrast to my personal laptop, which has two HDDs in it, which constantly make distinct noises all the time. It sounds like morse code (not that I can recognize morse code, but, just saying, to compare it to something). I went back to listen to the 8 gig ram laptop + external for 30 or so more seconds, and, again, both have power, but they're completely silent otherwise, which can't be said for my personal laptop which has two HDDs in it. I'm not really excited to cancel it, since I can now just isolate four files to know essentially instantly whether my hydrus can boot again. Pretty anxious about it. I can leave it doing seemingly nothing for at least a little while longer.
(87.74 KB 1165x761 agdjawgtwtgdm.png)

>>17105 Already tried, looks like it isn't something the "manage gallery url generator" option can fix. Sorry, error is in pic.
I had a good week. I fixed some small bugs and brushed up a little UI. Nothing too ambitious, but it should make for a nice clean release before I break for my holiday. The release should be as normal tomorrow. >>17094 Thank you. I untangled all this UI code this week and I feel better about adding options like 'hang below' in future. I have saved your thoughts and will try to enable this again soon into the new year. >>17082 >>17088 >>17089 >>17092 Thank you for this report. It is a shame this is still happening. I am going to explore pyoxidizer in the new year as an alternative to pyinstaller for freezing our exes. Something about the way pyinstaller bootstraps the python environment, and the various registry things it touches, seems to set off these virus scanners' testbeds. And as >>17092 says, since I do a bunch of network grabbing and hosting, folder scanning, and file management, any scanner that looks deeper into my lines of code may get confused. The good news here is we build on Github these days, so I can be even more confident about calling these false positives. It doesn't really matter what my home dev situation is like since all the code and build scripts are online. If official pyinstaller or the Github Actions cloud is injecting bad code into exes, that'd be some real shit, lmao. I'm sorry for the trouble anyway. As a side thing on Windows, there's an advanced setting in Windows Defender, if you poke around in the shield taskbar icon a bit to 'virus and threat protection settings', where you can disable 'Cloud-delivered protection' and 'Automatic sample submission'. There's some thing where any time you run an exe, it gets checked against their online database. Hydrus (and other programs) get a lot of false positives on that, stuff that doesn't get rolled into official Defender updates, so rather than excluding your whole hydrus directory--which I don't feel great recommending to people--just turning that paranoid cloud scanner off may be nicer for your situation. I realised I don't have this info in an FAQ anywhere. I'll write it up nicely in the help so we have an easy place to point to in future.
>>17111 Yeah, I think you can cut the power and try to restore just those 4 files, it seems like it's dead. Also, even if hydrus won't boot (that requires a *perfect* disk image), there are a lot of ways to extract meaningful information from the files themselves, so don't lose hope even if it goes wrong (disk image is malformed and the like). That just means the file is not perfect, but it could well be that 99.9% is okay, so you can recover most of that 99.9% and hope the other information was stored somewhere else. Also, I forgot to mention, if there are "-wal", "journal" or "shm" files, restore those as well.
(17.42 KB 753x487 Untitled.png)

>>17114 I just cancelled it by holding the power button to force shut it down. After all that time, it only copied 85 gigs of data. My personal 32 gig ram laptop managed 200 gigs during the 8 hours I slept. So, the 8 gig ram laptop was literally doing nothing past the first few hours, since it was just frozen indefinitely. An enormous waste of time. I am really not excited about extracting specifically the files I need to boot hydrus. This is fucked up. This is an instant method of knowing whether or not it's fucked up, even if you say I might be able to recover some of the data even if it's not a perfect disk image. Of the 72 gigs of my hydrus the 8 gig ram laptop managed to copy, I actually just searched for the four files you mentioned earlier (before amending it to include a few more), and none of them were present. Maybe the path was lower priority. Or maybe they are FUBAR. Either way, I really don't want to isolate these files for recovery. I can continue bumming the laptop off of whom I did for now. I will try to run systemrescuecd from a flash drive on the 8 gig ram laptop, and try again. I know running from a flash drive will be slower, but, when the original estimation for completion on my 32 gig ram laptop was 3 days, I hope it won't slow it down to the point of taking 30 days, for example. Testdisk + its output path + the 2TB image are on the 5TB external, so I hope that evades the limitations of running systemrescuecd from a flash drive. I expect testdisk to max out the ram and freeze the laptop again far sooner than it would have otherwise finished copying the entire 2TB image. But, I can do that, for now, to avoid isolating the files, just cause I'm afraid of doing so. I think I understand, though, that memory leaks are too irresponsible for this to ever finish. But I'll try it again, despite logic, anyway.
>>17114 Anon, sorry, can you spoonfeed me on how to use systemrescuecd to run testdisk on my 2TB HDD image? I tried to figure it out myself, but I just wasted a lot of my time. I burned systemrescuecd to a flash drive using "rufus-3.17". Then I booted from it and used the "copytoram" boot option to store it in ram, allowing me to remove the flash drive after it finished booting and making it run faster. I tried "startx" to start the "graphical environment", but I didn't understand how to navigate or in any way view my external hard drive, or even the boot drive of the laptop (not that I even want to view that). Before even trying systemrescuecd, I couldn't figure out how to set testdisk to have an output path outside of its own directory, so running the testdisk that came with systemrescuecd didn't seem like an option to me; I thought I needed to run it from the 5TB external, to use said external as an output path. But I couldn't figure out how to open the 2TB HDD image from the same command as running testdisk (the systemrescuecd equivalent of your earlier "testdisk_win.exe <Path/to/your/Image/that/we/can/destroy>"). I could run "testdisk" alone, at which point it tells me my 5TB external is /dev/sdb2. But trying "testdisk /dev/sdb2/drive.img" didn't work. Again, even if I could open the image this way, I don't know how to even set an output path outside of testdisk's boot directory. I need the output path to be the 5TB external, cause I'm copying the entire 2TB image (since I'm too pussy to only isolate the files I need to boot hydrus, cause I'm not excited about confronting the possible reality of them being broken). I think because of testdisk's memory leak, ideally I shouldn't use "copytoram", and should instead just boot systemrescuecd from the flash drive normally, to keep as much ram available as possible. Then I don't know whether testdisk would be faster if run from the flash drive, or the 5TB external (the external has usb 3.0 speeds, but I don't even know if I'm using the right usb port for that on the laptop I'm using, or if that even matters for testdisk, when the 5TB external is only writing to itself). Anyway, sorry for all the tech illiterate words. I just mean to say I don't know how to run the "testdisk drive.img" command in systemrescuecd. Can you please spoonfeed me how? Thanks. Sorry.
>>17071 By the way, if you need more examples of this issue, I've managed to scrape together 30 more images that are invisible in Hydrus for me. They all have ICC profiles and display fine in external programs, so that really does seem to be the problem.
>>17115 >>17116 Sorry anon, I can't give you any good instructions right now; I'll try tomorrow. Also, please refrain from dicking around with systemrescuecd too much. You can quite easily break A LOT of stuff, because it's simply not meant to be a distro for beginners, but for sysadmins to recover a broken machine to a bootable state. There are no nice safeguards or anything like that in place to keep you from destroying your data. As you guessed, copytoram is used when you have 20 computers and want to boot them off of a single USB drive. To put it simply, booting from the USB drive will not mean that it works slower, since all the required data is read once and then available in the memory cache. So, copytoram is not required, but you can of course do it if you wish. Also, I would encourage you to try and find the install directory for hydrus to recover the db files. Which version of hydrus did you download - installer or extract only? I'll try to download it in win10 and give you a suggestion where to look based on that. Please try to recover the rest of the data later; if you are up to using systemrescuecd, we can skip all the windows specifics and try some other things that should work better. For now, the focus should be on restoring the hydrus db.
>>17118 I understand, about my blindly being some sort of a blockhead in systemrescuecd. Especially cause it wasn't even on my own laptop. I can't afford to fuck anything up there. Really irresponsible and selfish of me. I always downloaded the hydrus installer and ran it to update. The first post I made on 4chan's /g/ about my bad sector HDD was dated November 17: https://desuarchive.org/g/thread/84347421/#84353133 So whatever version was the latest on that date is probably what I was on. Also, completely irrelevant, but in looking for that post I managed to find an August 2020 post I made in 4chan /g/'s stupid questions thread, asking if decrypting my HDD was the only way to clone it: https://desuarchive.org/g/thread/77027659/#77030758 I guess it was just one post in a sea of countless, but still. Even though the damage is already done, I did try to find out if I could clone without decrypting my entire drive. Only, not after the bad sector happened...
https://www.youtube.com/watch?v=MrweXsImVhg

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v467/Hydrus.Network.467.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v467/Hydrus.Network.467.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v467/Hydrus.Network.467.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v467/Hydrus.Network.467.-.Linux.-.Executable.tar.gz

I had a good week cleaning some things up for end of year.

couple of scanbar things

I polished the new autohiding video scanbar. A bunch of the layout and coordinate detection code of the scanbar and the media canvas behind it is less jank. If you had some flicker or weird mouse-popup behaviour last week, I hope it is better now!

If you are a macOS user however, I held back one of the changes. There's a background-erasing hack I put in a couple years ago because without it macOS media viewers went to 100% CPU. I hope this is no longer true, but I'm not certain, so you still have the flag on by default. Please hit _help->debug->gui actions->macos anti-flicker test_ and then try browsing some images and video in a new viewer. Does your client lock up, or is it ok now? If everything is good, I'll remove the flag and you can get some nicer anti-flicker tech too.

full list

- new scanbar cleanup:
- the media container's scanbar and volume control are now combined on the same widget, meaning they now show/hide in sync and faster. their layout calculation is also more sensible. the new controls bar also has a thin border to make it pop better against a background video
- improved the way some auto-hide anti-flicker tech on the scanbar now works. it all hides a frame faster sometimes
- figured out some new anti-flicker tech to reduce/eliminate a frame of stretch when flicking from a static image to an mpv video, particularly for the first or second time in a session
- fixed a bug where clicking the global mute/unmute on an mpv player meant that certain shortcut keys (usually those with arrow keys) would not work on that player again. (it was a focus issue on the button, which then captured some form navigation keys but they had nowhere to go)
- brushed up some mouse coordinate testing logic across the program. some linux clients had trouble with the new animation scanbar popping up over mpv, I think I improved it!
- fixed another type problem with newer python/PyQt5 on Arch, also in scanbar coordinate testing
- fixed some dodgy colours in the scanbar initialisation and volume control border
- macOS users: I undid a long-time paint hack on the media container and the static image canvas. Qt is responsible for clearing the background again, which allows me to remove some jank anti-flicker tech. HOWEVER, the original reason for this hack was because without it, old macOS went to 100% CPU whenever the media viewer was showing something. therefore, to be safe, this option is still on for macOS users for now. you'll get a little flicker when browsing. please try hitting _help->debug->gui actions->macOS anti-flicker test_ and do some mixed video/image browsing. does your whole damn client lock up?
- .
- misc:
- the 'file log' and 'search log' buttons are now a new widget class that puts an arrow on the side that opens a menu. the secret right-click menus of these buttons is now exposed for all
- fixed a bug affecting some greyscale pngs with ICC profiles--they were coming out pure white due to a colourspace conversion problem
- fixed an import problem when PIL could not load a file (due to file malformation) but OpenCV could. this was causing a failed import from the new ICC profile detection code
- when the downloader hits a broken image file that cannot be imported due to malformation, the status is now 'error' instead of the incorrect 'ignored'
- fixed the duplicate file filesize comparison statement sometimes showing > in one direction and ≈ in the other. it happened when the larger file was between 20/19 and 21/20 times the size of the smaller, just a logic typo (issue #1028)
- the trash maintenance daemon is moved from the old threaded daemon system to the new repeating job worker pool. this is the last daemon cleaned up, so I am retiring the old and mostly defunct 'no_daemons' launch argument. a variety of other daemon infrastructure for things like shutdown checks is similarly removed. the program also now waits for the newer daemon jobs to finish working on shutdown
- moved most client daemon jobs like repository sync and dirty object save down so they start after the first session is loaded rather than right after boot
- if a file is called to regen its thumbnail but currently has no dimension, this is now a no-op rather than an error. in the situation where users force thumb regen before metadata regen and encounter this, it is sorted out later when the metadata regen recognises new dimensions and reschedules the thumb regen
- added an extensive user-written guide to the --db_synchronous_override launch argument to the launch arguments help page. it is possible and safe to run the program with synchronous=0 as long as certain caveats are obeyed. thanks to the user who figured this out and wrote it up
- the downloader engine now discards source urls in an import job if they have the same domain as any existing primary url. this will ensure that if a booru has a link back to itself as a source url, when the 'source' is really an alternate rather than a dupe, it won't be added in hydrus as a known url for that imported file
- misc cleanup in downloader system and file/search log UI
- fixed a type bug in the file and search log 'import from png' action. if you have existing pngs previously exported from here, they will import ok now
- refactored the various hydrus compression code to a new HydrusCompression file
- exported serialisable data pngs such as from file or search log that hold simple Strings now always compress the data before embedding it in the png. existing pngs that hold uncompressed strings should still load ok
- the payload in an exported png is now always compressed, and the payload description always states the uncompressed size
- sped up client shutdown when network traffic has been paused the whole time and a repo sync job might have wanted to run. these jobs also do not hang on a thread worker if network traffic is paused, but they should wake immediately when it is unpaused
- the hydrus login system is now resistant to connection failures; previously it was getting hung up and jamming the whole hydrus sync system when a server was down
- .
[Expand Post]- client api: - added GET /manage_database/mr_bones to the Client API. it returns a JSON Object with the same numbers used in the _help->how boned am I?_ dialog - incremented Client API version to 23 next week I'm on holiday for a week, so I'll be back working on Saturday the 1st. I want to grind back into getting multiple local file services done. Thanks everyone! 𝕸𝖊𝖗𝖗𝖞 𝕮𝖍𝖗𝖎𝖘𝖙𝖒𝖆𝖘!
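For anyone who wants to poke at the new endpoint from a script, here is a minimal sketch in Python using the requests library. The port is the Client API default and the access key is a placeholder; check the Client API help for the permissions your key needs.

[code]
# minimal sketch of calling the new endpoint; port and key are placeholders
import requests

API_URL = 'http://127.0.0.1:45869'  # default Client API port
ACCESS_KEY = 'your 64-character hex access key'  # hypothetical placeholder

response = requests.get(
    f'{API_URL}/manage_database/mr_bones',
    headers={'Hydrus-Client-API-Access-Key': ACCESS_KEY},
)
response.raise_for_status()

# a JSON Object with the same numbers as help->how boned am I?
print(response.json())
[/code]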
(8.11 KB 512x162 lewdweb.png)

>>17096 Didn't know about this website, thanks, anon! Here's a downloader I have thrown together. A few remarks:
- Don't forget to log in and transfer your cookies to Hydrus; a bunch of content is hidden otherwise
- If you want to download a whole thread (not just one page), you'll need to use a gallery (download -> gallery); if, for example, you wanted to download https://forum.lewdweb.net/threads/your-thread.0000/, you'd only input your-thread.0000
Hope this works for you! I think everything should behave as you expect, but let me know; I probably didn't test all cases, and that website is kind of a mess. Devanon, if you've read all this, please feel free to add this to the repository if you think it should be there; you did that the last time I made a downloader (thanks again!).
(1.03 MB 1366x768 917.png)

(70.64 KB 540x632 Screenshot_20211222_201216.png)

(2.02 MB 2628x1915 mlp 000057 - merry christmas.jpg)

>>17120 >- brushed up some mouse coordinate testing logic across the program. some linux clients had trouble with the new animation scanbar popping up over mpv, I think I improved it! Now it works perfectly. Thank you so much and Merry Christmas!
>>17119 Sorry, I couldn't make it today either. And tomorrow my schedule is absolutely full, for obvious reasons. I hope to get some downtime on Dec 25th to take a look.
>>17123 It's no problem. I'm grateful you've been helping me at all, let alone this much. I have access to the borrowed laptop for the foreseeable future, so there's no rush. Anything is fine. I'm also just not excited about isolating the files necessary to boot hydrus, so having some time before that at least makes the present easier for me.
>>17120 I'm still having the same problem as >>17093 using Arch Linux + KDE

v467, 2021/12/23 19:40:22: Exception:
v467, 2021/12/23 19:40:22: TypeError: arguments did not match any overloaded call:
  QImage(): too many arguments
  QImage(QSize, QImage.Format): argument 1 has unexpected type 'memoryview'
  QImage(int, int, QImage.Format): argument 1 has unexpected type 'memoryview'
  QImage(bytes, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(PyQt5.sip.voidptr, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(bytes, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(PyQt5.sip.voidptr, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(List[str]): argument 1 has unexpected type 'memoryview'
  QImage(str, format: str = None): argument 1 has unexpected type 'memoryview'
  QImage(QImage): argument 1 has unexpected type 'memoryview'
  QImage(Any): too many arguments
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/client/ClientRendering.py", line 344, in GetQtPixmap
    return HG.client_controller.bitmap_manager.GetQtPixmapFromBuffer( width, height, depth * 8, data )
  File "/opt/hydrus/hydrus/client/ClientManagers.py", line 206, in GetQtPixmapFromBuffer
    qt_image = QG.QImage( data, width, height, bytes_per_line, qt_image_format )
TypeError: arguments did not match any overloaded call:
  QImage(): too many arguments
  QImage(QSize, QImage.Format): argument 1 has unexpected type 'memoryview'
  QImage(int, int, QImage.Format): argument 1 has unexpected type 'memoryview'
  QImage(bytes, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(PyQt5.sip.voidptr, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(bytes, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(PyQt5.sip.voidptr, int, int, int, QImage.Format): argument 4 has unexpected type 'float'
  QImage(List[str]): argument 1 has unexpected type 'memoryview'
  QImage(str, format: str = None): argument 1 has unexpected type 'memoryview'
  QImage(QImage): argument 1 has unexpected type 'memoryview'
  QImage(Any): too many arguments
  File "/opt/hydrus/client.pyw", line 11, in <module>
    hydrus_client.boot()
  File "/opt/hydrus/hydrus/hydrus_client.py", line 217, in boot
    controller.Run()
  File "/opt/hydrus/hydrus/client/ClientController.py", line 1573, in Run
    self.app.exec_()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1932, in paintEvent
    self._DrawTile( dirty_tile_coordinate )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1792, in _DrawTile
    tile = self._tile_cache.GetTile( self._image_renderer, self._media, native_clip_rect, canvas_clip_rect.size() )
  File "/opt/hydrus/hydrus/client/ClientCaches.py", line 587, in GetTile
    qt_pixmap = image_renderer.GetQtPixmap( clip_rect = clip_rect, target_resolution = target_resolution )
  File "/opt/hydrus/hydrus/client/ClientRendering.py", line 348, in GetQtPixmap
    HydrusData.PrintException( e, do_wait = False )
  File "/opt/hydrus/hydrus/core/HydrusData.py", line 1190, in PrintException
    PrintExceptionTuple( etype, value, tb, do_wait = do_wait )
  File "/opt/hydrus/hydrus/core/HydrusData.py", line 1218, in PrintExceptionTuple
    stack_list = traceback.format_stack()
v467, 2021/12/23 19:40:22: Failed to produce a tile! Info is: 082d5c1889601170444166afa4bc78c9ce1b6f5a6224aa1f46dc54398e1c7c78, (700, 989), PyQt5.QtCore.QRect(0, 0, 700, 989), PyQt5.QtCore.QSize(167, 236)
Is there some way to reset the gui to default? Pop up windows are stuck in fullscreen as far as I can tell and my eyes get raped by white light every time I need to use one. I don't know if this is important but I manually moved my db over to a linux system when this started happening. I compared the behavior to a new Hydrus install on the same system and popup windows work fine on that one - they start out as small windows and can be maximized and unmaximized without issue.
(6.03 KB 512x109 reddit-2018.09.21.png)

A request to anyone who knows how: I have no idea how to make a parser. There is a reddit parser that I think is from 2018; it works well for the most part, but I don't think it can handle multiple images at a time, and I don't think it handles video from reddit either. I attached what I'm using; use the ClipStudio reddit as a good, non-porn test bed, as it typically has many videos and multi-image posts without going too far.
>>17120 >- added GET /manage_database/mr_bones to the Client API. it returns a JSON Object with the same numbers used in the _help->how boned am I?_ dialog - incremented Client API version to 23 Thank you so much! Merry Christmas to you too!
(35.00 KB 564x377 1.png)

(9.81 KB 526x74 2.png)

It is my understanding that you have two drives now, one of which should be easily "disposable" (meaning we can, theoretically, delete all files from it). Out of an abundance of caution, please use the disk that contains no data which cannot be recovered to do this. I have never had ntfsfix destroy anything, nor have I ever heard about any data lost by ntfsfix alone, but I don't want you to be the first. So, you have two 5TB drives: one should only contain your image (and maybe some recovered data), the other contains a lot of other stuff as well. Please use your new drive for this, or back up your stuff to the other drive before trying.

Do the following:
- boot systemrescuecd with default options (copytoram makes lsblk's output more complicated)
- once up, type "startx"
- open a terminal emulator (start menu)
- type lsblk -o name,size,label,fstype - mind there are no spaces in between the arguments after -o

That command should give you a list of connected disk devices (USB, SATA, ...) with lots of info. As before, find the disk that you want to store *recovered files* onto in the list. fstype should be either ntfs or something to do with fat (e.g. fat, vfat, exfat, fat32...). It is my understanding that the disk you want to use to store the data is also the disk that contains the image. If not, tell me and I will expand on this. "sdb" is the name of the entire disk, "sdb1" would be the first partition of that disk. "LABEL" should be equivalent to your drive name in Windows.

You can try the following command to mount the Windows disk: mount /dev/disk/by-label/<name of the drive label> /mnt
If your drive has no name, check the output of lsblk and just use mount /dev/<the partition that contains your filesystem, e.g. sdb2> /mnt

It may tell you the drive contains an "unclean file system" (exact wording!). If, and only if, it tells you that, run umount /mnt (I forgot this in the screenshot) and then ntfsfix /dev/disk/by-label/<name of the drive label>. ntfsfix should not ask you about anything here. It will tell you that metadata is kept in the Windows cache, attempt to "fix" this, and tell you the drive has been processed successfully (see screenshot). Next you run mount /dev/disk/by-label/<name of the drive label> /mnt again and it should run without any output (meaning it was successful). If ntfsfix asks you for verification on anything, hit CTRL+C, take a screenshot and post it here.

If you run ls /mnt, it should show all of the directories on that drive. Run mkdir /mnt/recovered_data, then testdisk /mnt/drive.img. As the output directory, you probably need to hit [code]..[/code] until you are in the directory /. From there, just go to mnt and recovered_data, confirm, and you should be golden.

I downloaded hydrus on Windows; the files should be located in your install directory (by default C:\Hydrus Network\ for me) in the db subdirectory. Good luck!
(3.25 KB 512x92 DeepDanbooru.png)

>>17113 anybody know how to add in a new file lookup script? I've installed https://gitgud.io/koto/hydrus-dd and have no idea how to install this file lookup script. I tried using the method of importing new downloader scripts, but that didn't work
>>17130 network > downloader components > semi-legacy: manage file lookup scripts
(36.26 KB 233x226 4n4rim8tk4931.png)

>>17131 Thank you anon!
>>17129 Thanks for formatting this post for me even though it's Christmas. Sorry. Thanks. It took a really long time to finish, but I copied everything from the "db" subdirectory of the image, except the "client_files" folder. It copied 913 files, even though when I put the non-standardized image of my bad sector HDD onto the replacement hard drive and ran Windows "chkdsk" on that replacement HDD, it said there were 293 files in that same directory. Actually, after typing that, I am a bit concerned. I managed to blindly navigate my way to the folder, and it says there are only 293 files there. So I guess the 913 displayed in testdisk is the number of broken parts it was doing something with. I thought I would've had hundreds of chances to boot my database. Instead I think it only produced one copy. My post-chkdsk replacement HDD has the four files you mentioned to cherrypick visible in the same directory. So, I don't know. I didn't have to do the "if and only if" part about the "unclean file system", since I never got the popup. I don't know how to take screenshots on linux, so I just left it plugged into the 8 gig ram laptop I borrowed, with the testdisk window still open. The entire folder it copied was only 6 gigs, but it took so long. On my post-chkdsk replacement HDD, my "client.caches.db" is 3.8 gigs, my "client.master.db" is 1.8 gigs, and my "client.mappings.db" is 494 megs ("client.db" is only 4kb). So I guess testdisk was copying those over hundreds of times, all just to produce a single output file. I really don't want to try using these, but it's done, I guess. For what it's worth. Judging by this, I imagine every corrupted file will have to copy itself over hundreds of times, just in the hope of coming out uncorrupted. This is so fucked up. But, again, so far I still have systemrescuecd open on the laptop I'm borrowing, with testdisk still open even though it finished, only because I didn't know how to take a screenshot to post it.
>>17133 Anon, I can't quite tell you why there are differences between the file counts in testdisk and after copying, but that doesn't really matter, as long as you got those 4 files. You can only restore these 4 files in the state they are in within the image, so you can boot off of that. There are no "other versions" or anything like that. If the state that the files are in is broken, you can then try to recover as much data as possible from each of these files individually, to get a consistent database to boot. Let's not worry about those scenarios for now and just try to boot with these 4 files.

>I didn't have to do the "if and only if" part, about the "unclean file system", since I never got the popup.
That's not bad at all, I just had it every single time, so I included it in case you had the same issue. Nothing to worry about if you didn't have it.

The reason it took so long is that the drive needs to physically move its head from "reading the image" to "writing new data". Hard drives are really bad at this, so reading and writing from/to the same drive is terrible for performance. That is also the reason why your OS tries to cache as much data from the HDD as possible. Testdisk probably had to read the data from lots of tiny separated areas all over the image, since they are scattered all over the place in there. That's just what happens when your disk becomes so full. You can temporarily fix this by defragmenting the drive, but we'll live with it for now. You don't need a screenshot; as far as I am concerned, everything went quite well.

Now, shut down everything and wait for the power to be gone before unplugging anything. You can probably do this next step on your own laptop, since it's more private there ;)

Download and install hydrus (preferably the same version you had installed before), go to C:\Hydrus Network and rename the db directory to db.bak, if it exists. Disconnect your network, so none of your subscriptions get saved before we have a final directory. Create a new directory, call it db, and copy the recovered db files over. Start hydrus, and in the best case scenario you will see your session and metadata and it will spit a bunch of "file not found" errors at you. Save your session and start with a new empty one. It could also be that hydrus tells you that some locations do not exist and that you should pick the old location where files are located, before even booting. You should have your client_files directory located somewhere on the post-chkdsk hdd, correct? If yes, hit "add a possibly correct location" and navigate to the client_files directory. If you did not change it, it will probably be in C:\Hydrus Network\

If that worked and hydrus boots again, let me know and we'll see from there. Also, don't get discouraged by a "disk image is malformed" error happening randomly - we'll try to recover most of the data from the database if that happens, which may give you a small amount of data loss, but nowhere near as bad as losing an entire file. Hydrus includes some nice guides for that situation too, so you get someone actually qualified for this telling you what to do! Good luck anon!
(14.03 KB 516x434 Untitled.png)

>>17134 It didn't boot. I can't believe I wasted so many years thinking I was doing something. My life is a joke. For what it's worth, there were a few things different, so I couldn't just follow what you said. First of all, my "client_files" directory was located in the very "db" folder I renamed to "db.bak". But because you said "if you did not change it", I thought it was fine to just leave it there and put everything else in a "db.bak" folder. Secondly, testdisk didn't produce a "client.db" file, so I copied the one from the post-chkdsk hdd to use. I also copied the entire contents of the "db" folder otherwise (besides the "client_files" folder). For what it's worth, everything in "db" (besides the four files I ran testdisk for) was from the post-chkdsk hdd, not the testdisk copy. Thirdly, I had already reinstalled the latest (and probably not the same) version of hydrus shortly after I ran chkdsk on the replacement HDD (sorry, I tried looking for my mentioning this in a previous post, but it seems I somehow didn't. Sorry). But that's everything, I guess. I don't see the point in trying anything else, if this is the result of waiting probably 12 hours for testdisk to copy basically just 4 files with a total size of 6 gigs. All corrupted data will probably turn out the same. I can't even browse anything, since my files only have their image hashes as filenames. I have nothing left.
(21.16 KB 561x399 screenshot.png)

>>17135 Again anon, I already told you, that is just the first step. There are still lots of ways to recover even a "dead" db; there is still a lot of stuff to try before calling it quits. Testdisk's output should almost certainly have a client.db file. Would you mind going back and having a look at your db directory again? systemrescuecd should allow you to take a screenshot from the start menu under Accessories > Screenshot. Please do as before, navigate to the db directory of hydrus in testdisk and take a screenshot of the files contained within, just like I did, okay? I don't think that there are any important files besides the four db files, so keeping the rest of the chkdsk'd files should be fine. Just make sure that you move any files that have a db in them (.db-journal, .db.wal, ...) to the db.bak directory. Please don't try to use the latest version: there have been lots of updates to the db since then, and hydrus will probably try to auto-update to the newest version, which is just further hassle we should ignore for now. If you want, you can take some time too, just do it when you feel like it.
>>17136 This is irrelevant, but connecting to the internet and opening firefox on systemrescuecd was the first time I ever did something for myself on linux. Following the instructions on it before just felt like I was in a danger zone. I know I didn't actually accomplish anything, but even though it sounds stupid, I'm surprised it "just werks". I thought I was gonna crash my computer and set it on fire or something. Again, my boot HDD only had hydrus on it, since I had moved all my private data etc. to my second HDD. I couldn't update from W7 to W10 because of driver issues literally preventing me from doing so, since the update fails. Everyone calls me retarded for still being on 7, and I know it'll only get worse. Again, it's irrelevant, but, like, baby steps. But yeah, I did the thing. I don't know how I can undo my updating to the latest version of hydrus on my post-chkdsk hdd. Maybe I can just copy everything other than my "client_files" folder and replace everything in the post-chkdsk hdd hydrus path with it, unless hydrus extracts to %appdata% paths or something weird as well.
>>17136 >>17137 I uploaded six images, but I guess 8chan only allows five per post.
>>17138 Okay, so the client.db is actually missing, both on the image and on the chkdsk disk. It seems like veracrypt probably did not write it out correctly anymore. We will attempt to check if the file is still recorded in the metadata of the filesystem; it just might be. I can't promise this works, but why don't you go ahead, start up systemrescuecd, and do the following:

Mount the drive containing the image as before.
Then use this command: losetup --partscan --find --show /mnt/drive.img - that will create a "virtual disk" (loop device) and print the name of the device created.
Run the command parted /mnt/drive.img print - take note of the partition number that your data partition has. Mine was 2.
Now, combine the name of the created loop device (e.g. /dev/loop1) with the number of your partition (e.g. 2) and you get /dev/loop1p2.
After that, run ntfsundelete --scan /dev/loop1p2 --match "client.db". That will print all files called "client.db" that could be undeleted, along with the percentage recoverable. Note, this will not do any recovery by itself.

Let me know the output; we can get into recovering after that.
>>17139 I feel horrible. I feel the same familiar feeling. I feel like death. But I will do it.
>>17139 It didn't actually do anything yet.
>>17141 okay, please add the option --force, that should not do anything too bad, it's only scanning...
>>17142 I am really not ok. I'm going to continue living. But I had one chance after realizing the HDD went bad, and I ruined it.
>>17143 Well, you should probably wait for devanon, he can probably tell you what exactly can be recovered without a working client.db. I don't want to make any promises, but judging by the name, tags and mappings should be mostly okay even without the client.db. Hold onto that image, maybe I (or someone else) can think of something in the future. In the meantime, you should probably think about a good backup strategy to ensure this never happens again. Take care!
>>17144 Thanks for all the help. I really do appreciate it. But I wish I had never been vulnerable.
>>17130 On that note, can these image downloaders be malicious? Are they just encoded python scripts? How can I decode one and check the code before installing it?
Is everybody getting a 403 with rule34.xxx right now (and also a captcha in a normal, updated browser), or is it just me? ua: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0
This link doesn't work in the URL downloader and I don't get why. It's just a tumblr blog with a custom domain. https://k009comics.com/post/670791457723006976/update
>>17144 I copied everything else from the HDD that wasn't my hydrus folder, and it saw 1993 failures. Up until this point, I hadn't seen testdisk display a failure before. I would've posted a screenshot, but I didn't know that hitting the up or down arrow key (or probably literally any key) would make the amount of successes/failures go away. I just wanted to show minimal private info in the screenshot. But the point is I think literally all corrupted data will fail to be restored because I decrypted the hard drive beforehand. I keep going over and over in my head on if there was a possible reality where I wouldn't have decrypted it beforehand, due to thinking it was the only way to clone/image the disk, due to when I first looked for a method when upgrading my boot HDD to a bigger one (>>17119). I kept saying this before, but I was vulnerable because I thought I was of sound mind, and doing the best for myself I possibly could, even if that seems absurd in hindsight. If I were afraid of my decision making, I would have been afraid to fuck with it without first doing as much research as I could (again). I don't think the same places I looked, that only showed me the same non-standardized solution that didn't even work at the time would have led me to ddrescue only 16 months later (how old the drive was when it had the bad sector). But making the 8chan hydrus post about it obviously did. I don't know what I'm saying. Me bringing up testdisk failing to restore things outside of my hydrus folder isn't even immediately relevant anymore. Even if these failures were within my hydrus folder, I already realized I can't boot my hydrus, and I already have the methods to try what I can. I actually stopped formatting this post on this paragraph, but 8chan saves it, so I cobbled together finishing it. Not really saying anything. I wish I were never vulnerable.
>>17149 I can't quite follow either; just keep in mind that you can't change the past, so please stop beating yourself up over it. You may very well still get some data back. For now, let this be a lesson to read up on stuff and do what it tells you to do. The hydrus manual told you in no uncertain terms that you should back up all of your data regularly, or you will lose your data and feel like shit afterwards. The veracrypt manual additionally covers some special things you should do to keep your data safe (header backups, for example). If nothing else, take a weekend, read up on how to back up your data, and do it. Not just hydrus - take a long hard look at the stuff you use every day and ask yourself if it is truly protected. Don't take any shortcuts; you saw where those lead. Especially if you want to get into linux and run commands you don't fully understand, you *will* lose your data again - even if there is no malicious intent, accidents can and will happen to everyone. I nuked a production server the other day because, instead of typing /dev/sdaa, I typed /dev/sda and hit enter out of habit. A functional backup can be the difference between total disaster and mild annoyance.

Please don't beat yourself up over the mistake you made in the past. Instead, look at it as an investment. I have yet to meet someone who tells me they never lost data because they had a functional backup their entire life. The story is always either "I remember losing my data, that's why I back it up" or "I don't back up anything because I have nothing to lose". The second group are always the ones who ask me how to back stuff up properly a couple of years later. You are not any worse than any of those other people that lost their data, so there is no need to feel bad about yourself. You just did what everyone else did - including me. In 5 years' time, when your next drive dies and takes a copy of the stored data with it, make sure you can sit there, grateful that you learned your lesson 5 years ago, and that you just need to get a new drive and copy the data over again. You already paid the price for this lesson; now you just need to make sure you won't fail and pay again.
>>17144 I really don't need to continue making posts about how decrypting the veracrypt-encrypted bad sector HDD made all the corrupted data FUBAR, but I remember my veracrypt install being corrupted, and my notepad++ install being corrupted. After using testdisk to copy everything but my hydrus folder from the bad sector HDD, in its output my veracrypt install directory is completely blank, and my notepad++ install folder is completely missing. So there was never any hope to begin with. Decrypting was the same as overwriting all corrupted data with garbage. Accidentally formatting my entire boot drive would have been less damaging than decrypting it.

>>17150 Thanks for the empathetic post. I was literally in the midst of bitching and moaning about my situation again when it updated with your reply. I know I should count my blessings. I do appreciate the empathy. But I hate that I was vulnerable at all. I wish I had realized I was vulnerable and that my decision making couldn't be trusted. I wish I had been too afraid to do anything after realizing the bad sector happened at all. In the past I've been too afraid to deal with being scammed when buying a used product online. I always waited at least a full day before replying, because I couldn't cope with their dishonesty and abuse. I've been too afraid to even wake up in the morning, because I had no frame of reference for happiness, comfort, anything. I knew if I had to make decisions, I might end up hurting myself. I did, many times. I felt my being alive was a mistake. But, I thought I was past that. I thought I understood what it meant to protect something, and be confident in my decision making. I thought I was acting rationally, instead of in a reactionary way. I was wrong. I just wish I had never been vulnerable. I think I was biting my tongue on saying the following, but, losing all my sorting I spent years making, losing my flawless archival of data, it made me realize that none of this shit ever made me happy; only a real person can make me happy. I don't mean it in a shitty platitude way of "get a girlfriend", I mean it as I say. The only reason I could even bother to archive anything, even sort anything past that, was because I thought it was making me happy. But, it wasn't. I did feel that it took a toll doing anything for my archive. I did feel that a real person made me much happier than anything in my archive ever could. But I never realized that my archive couldn't make me happy at all, until it was compromised forever. But, obviously not only then, since I was still vulnerable enough to trust my decision making after I lost it, which made all corrupted data irrecoverable forever. It was only in trying to cope with and understand why I couldn't do well for myself that I realized it never made me happy, so I had no frame of reference for what happiness was, so I could never be trusted to do my best to protect it. It's just stupid rationalization bullshit. I'm still archiving. But at best I'm just preventing the bad feeling of realizing media I was interested in became forever lost. There is no happiness, for me, when I'm alone. It sucks to lift your fingers only to prevent pain. Sharing lost media with others is validating, but even if that made me happy, it's still not the lost media itself making me happy. It's just a fucked up situation. I would've preferred losing my HDD with my private data. I wish I had never been vulnerable. I was trying my best, but I didn't understand that I couldn't do well for myself.
>>17151 Well anon, this is getting into territory that I am really not qualified to talk about. I know a bit about computers and linux, I can't talk about the meaning of life or anything, especially not here. Humans are fundamentally non-logical beings, there is no such thing as objective happiness either. Seeking such a thing will inevitably lead to you being disappointed. Instead, you should probably try to get a goal in life, and work towards that. Happiness will follow. If you don't feel happy about something, stop doing it and do something else. If you think about hurting yourself, or worse, don't. Never solve temporary problems with a permanent solution. Please try to get some help from someone that knows what he is talking about, I cannot help you with this at all, unfortunately.
>>17152 Yeah, thanks. Sorry. I was just saying that, for me, the problem was more my being convinced my decision making was sound than anything else.
For a while now I've been getting a fatal error on startup about the python module (?) shiboken2 not being able to load libjsoncpp.so.24 (on Artix Linux). I'm guessing this is because jsoncpp updated recently and the module still expects an old version. Do I just have to wait until that module updates? I can post a log if necessary.
>>17151 I feel like I'm reading something written by chris-chan.
>>17155 Ok. Nice criticism. Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime. Everyone says to backup, but the problem was I couldn't make the decision to backup for myself. My making the corrupted data irrecoverable after realizing it was corrupted was due to my not doubting my decision making. I said that already, but, I say it again for no reason. There's nothing else to add. I think I understand that since decrypting my hard drive somehow purged the references for each file, the corrupted data left is just garbage. I thought I was safe to decrypt. I was wrong. It was the worst thing I could have done.
>>17156 bruh, this isn't some high minded debate where saying 'uh, actually, that's not a legitimate criticism' means anything. We're on a nepalese artisanal mining website in a thread for a furry porn downloader. Your post legitimately reminded me of the way chris-chan writes and frankly you're living up to type.
>>17157 elaborating = debate ok
>>17158 >>17157 >this isn't some high minded debate >isn't [...] debate
The API file search sort is broken when I try to set file_sort_type to random (4). Not only is the result order static across requests, but the result set itself, while not sorted by the default type, is noticeably not random (files added in succession often neighbor each other). I know I could just randomize the resulting set myself after parsing it, but I think it's still worth reporting. As for feature requests, added/modified timestamps in the API /file_metadata response would be appreciated (assuming no extra queries are needed). And please tell me if I'm missing a way to find a file in the GUI by the id I get from the API.
any frens with time able to make a minimal cli based hydrus viewer? like something that just displays some basic search and filter options and then opens a minimal file list, similar to ranger or lf cli file managers. Then you can use uberzug to show the image when you use vim keybinds to scroll up and down and highlight over the files.
>>17144 I went back and tried Photorec, searching only for .sqlite files. I installed "DB Browser for SQLite (DB4S)", and filtered by filesize, and started opening them. There are four 563,764 kb .sqlite files. Viewing them in properties, it says the size is 550 mb. None of them can open, because the "database disk image is malformed". Even though I'm tech illiterate, I assume that veracrypt decrypted in 550mb chunks of sorts, which randomly breaks apart anything that didn't coincidentally fit within a 550mb chunk veracrypt created. Maybe among the smaller .sqlite files are the leftover pieces of my client.db. But either way, Photorec can't recover my client.db. Maybe it can recover the corrupted media in my database if it's small enough and got lucky. Or maybe veracrypt decryption completely ruins all corrupted data regardless of how small it is. I think my client.db is just lost, and I assume the 2,000 or so files I lost in my hydrus otherwise is as well. My only hope is that the other .db files that didn't get corrupted can leave me with something more than a 2TB unsorted hoard, which now can never be trusted as being a complete archive ever again.
>>17162 Please wait for devanon to respond before jumping to conclusions on the extent of the damage. I can't comment on how veracrypt does things, but I think it's more complex than that. Photorec will recover anything it thinks is an sqlite file - that does not mean that it finds anything useful or complete. You may just be looking at literal garbage. I'll be busy this weekend, so we might as well wait for devanon to respond. After we know what can be done without a client.db, we can try to restore the other 3 files.
sleuthkit has a shit ton of tools for this; it's just like going through a regular disk but with blk attached to everything, e.g. blkls, blkcat, blkstat etc. blkcat will give you a hexdump of an individual block. Ideally the file would be recognized and you would get all the blocks you need from blkcalc, but even if that fails you can piece them together manually by sifting through it looking for a known marker. Since you know they are all images, you can just get a dump of every block, incrementing by one, until you get a match for a series of '00' or 'FF' or 'E' 'N' 'D', which is what you usually find at the end of an image file, append the hexdump from the starting and ending blocks to an image header, and open it to see what you get. Then just play around in a hex editor to try and fix any distortion, if any, and repeat until you've gone through all the blocks. I'm not a forensics expert, but I know you can recover a shit ton from a raw image; it just takes a stupid amount of time the deeper you need to go.
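To make the marker-scanning idea above concrete, here is a rough Python sketch of the simplest version of the technique: walk a raw image and carve out anything between a JPEG start-of-image marker (FF D8 FF) and the next end-of-image marker (FF D9). This is a toy illustration only, not a replacement for sleuthkit or photorec, and the path is the hypothetical image from earlier in the thread.

[code]
# toy file carver: scan a raw disk image for JPEG start/end markers and dump
# whatever lies between them; real tools do this far more carefully
SOI = b'\xff\xd8\xff'  # JPEG start-of-image marker
EOI = b'\xff\xd9'      # JPEG end-of-image marker

def carve_jpegs(image_path, out_prefix='carved'):
    with open(image_path, 'rb') as f:
        data = f.read()  # fine for a small test image; a real tool streams in blocks
    count = 0
    start = data.find(SOI)
    while start != -1:
        end = data.find(EOI, start + len(SOI))
        if end == -1:
            break
        with open(f'{out_prefix}_{count:05d}.jpg', 'wb') as out:
            out.write(data[start:end + len(EOI)])
        count += 1
        start = data.find(SOI, end + len(EOI))
    return count

print(carve_jpegs('/mnt/drive.img'))  # hypothetical path from the posts above
[/code]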
>>17164 This is so fucking depressing, but hearing it's not necessarily absolutely hopeless makes me feel a little better. If I can recover just the one file, just the client.db, it would be enough. I could try bumming the ~2,000 random images lost from other people's archives. If I just knew that you could copy a veracrypt-encrypted hard drive without decrypting first, all of the corrupted data would've just been one step removed from being fixed by any program. Now I have to get grey hairs at even considering whether or not recovering a single file is possible. If I were to tell my data loss horror story to spare anyone this fate, I would tell them to use ddrescue to create an image, and testdisk to copy their data, without decrypting or ever otherwise rewriting corrupted data first. When I was innocently looking up backup solutions on my own, the one program every article recommended was a shit proprietary one that couldn't accurately copy veracrypt-encrypted hard drives. Even posts explicitly mentioning veracrypt would recommend this shit program, then people would have to respond saying it didn't work. If I looked up backup solutions using corruption/recovery-related keywords I probably would've found the right programs. But I didn't have the mind to doubt my decision making after the corruption happened to me, even though I was so lightheaded I could vomit. So I made an otherwise easy fix into the worst case scenario. I feel outside of my own person. I don't know how I can confront these being my own actions, and my own reality. I hope this never happens to anyone else.
>>17086 Thanks! Following your report, I set a proxy and deleted the Range header, and now the nijie downloader works like a charm. Subscriptions work without problems too.
Dev, is Hydrus fully compatible with Python 3.10? Manjaro is saying something about manually updating your AUR applications to Python 3.10 once their next round of updates drops, so I was wondering if it's safe to do this with Hydrus.
Hey, Happy New Year! I am back from my holiday. I had a good time, and I'm looking forward to getting back to things. I'll try to catch up with this thread today and maybe into tomorrow.

>>17072 Yeah, not yet I am afraid. At the moment hydrus assumes it can determine the mime/filetype of a file from content alone. In future I will break this and make it so hydrus can store arbitrary data with an arbitrary file extension that it will remember.

>>17073 AVIF is one of the new image types, right? Do you have a couple of examples you can post/point me to? I want to support this stuff as soon as PIL, OpenCV, or another nice easy python image library does, and if I can get some examples lined up before then, that would help.

>>17076 I am afraid I do not know anything about this vulnerability, so I can't talk too cleverly, but a brief search says it is a Java developer logging system? So devs can see what you are doing with their program? I think I can say we have no problems here since I am A) in python, and B) there's no spyware in hydrus that phones home. I do not know and cannot know anything about what you do with the software unless you tell me yourself.
>>17074 >>17075 Thank you for these full comments. I am always short on development time, so I am afraid most of my responses to longer lists of feature requests are 'thank you, it is on the todo', but given that, I'll give a quick answer to each:

>Add/invert tags based on search
Yeah, I want this. And as you say, I'd like 'sync tags based on search', where it will continually add tags based on metadata and keep up with changes. I'll be writing a 'metadata logic' object this year that will take us closer to this tech.

>flag files
That's an interesting idea. We've had trouble with 'tagme' spam getting parsed from boorus, but this would be a hydrus solution to that same problem.

>db backup lock
This is actually advanced, but you can do it now with the Client API: https://hydrusnetwork.github.io/hydrus/help/client_api.html#manage_database_lock_on It is not pretty yet!

>content lock
Nice idea for 'password to show', and the read-only boot idea--that's long been a thought, but I need to rewrite a whole heap of UI code first.

>Interface
Yep, I still need to work on this. We have a shit ton of legacy code (much from wx, pre-Qt) and I've never been a good UI designer. I'll keep working, some parts are slowly improving. But I can't promise much, my brain doesn't fit well into this work.

>Removing/Adding tag
Thanks, I'll add an option for remove confirmation on local tag services, and for pending rescinds.

>Unnecessary junk
Yeah, this is another long term plan. Much of it is only like 10KB of python code, so the bloat vs loading ffmpeg isn't a big deal. But I get your point about having a clean codebase and user workflow. As I put more time into the Client API, I want to start working on some sort of addon system too and externalise what I can.

>Question about rebuilding hydrus
Yeah, you need all your client*.db database files and media files (fxx folders in client_files). The install is disposable, and thumbs can be regenerated. I recommend backing up everything anyway. In limited circumstances you can recover if your client.caches.db or client.mappings.db files are missing/broken, particularly if you only sync to the PTR, but I strongly strongly recommend against planning for this. Just backup everything, you'll save a headache and many hours of CPU work in the long run. Lots more info on this topic here: https://hydrusnetwork.github.io/hydrus/help/database_migration.html

>Tag spaces as individual tags
That's a really interesting idea. My experience with siblings and parents has been that they seemed simple going in, but it has been logical hell all the way down. I have ideas for more complicated replacement algebra, but there's still a bunch of work to do on the basics before I want to stretch again. The next step I want in this system is namespace siblings, so you can rename all 'creator:' tags to 'artist:' and so on. We'll see how the larger algebra I apply here works out IRL.
>>17077 >>17091 I have heard this is fixed in 467, let me know if it isn't for you! >>17078 >>17087 Thanks, cool stuff. >>17086 >>17166 Shit, thank you for this report. I'll play around with this and see what I am doing wrong with Range here.
>>17093 >>17125 Thank you, sorry for the trouble. Thank you for the full trace. I have fixed this one for 468. There may be more out there. This is related to >>17167 by the way, at least as I understand. Arch went up to Python 3.10, and somehow the PyQt5 they rolled out with it now checks type, so where I am casually sending 3.0 (float) and it used to be ok, it now raises an exception and needs just (3) int. I just need to fix all the places it happens. If you are familiar with python and know how to switch hydrus to PySide2, that is not affected by this. But with luck 468 should be better for PyQt5 in any case. Qt6 is apparently good, btw. I expect to play around with it in the coming months and roll out a test build. >>17126 The interface for it is debug tier, but check out options->gui and the 'frame locations' section. Dive in and fix the wrong flags. >>17127 If you aren't comfortable with discord then this is a no-go, but most of the parser creators hang out there if you want to connect, link is https://discord.gg/wPHPCUZ
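To show the flavour of fix involved (this is only a sketch of the pattern described above, not the actual hydrus change, and the function name is made up): the stricter PyQt5 rejects a float where an int overload is expected, so the QImage call from the traceback needs an explicit cast on the stride argument.

[code]
# sketch of the pattern only; not hydrus's real code
from PyQt5 import QtGui as QG

def qimage_from_buffer(data, width, height, bytes_per_line, qt_image_format):
    # newer PyQt5 refuses a float stride like 2800.0, so cast it to int
    return QG.QImage(data, width, height, int(bytes_per_line), qt_image_format)
[/code]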
>>17146 I can't promise you 100%, I'm sure someone very clever could do bad stuff in some way, but I feel pretty good about saying there is almost certainly no malicious stuff on import. The content of these pngs is JSON, and it loads in my custom serialised object system. Assuming there is no way to attack that (e.g. actually attacking the python json library loads method https://docs.python.org/3/library/json.html#json.loads and executing code), then it all gets converted to my objects that'll run like anything else. One attack I can think of is altering a downloader so it converts/redirects the URLs it finds to another server that tracks what you are downloading. There's no raw code in any of my serialised objects though so I don't think you should seriously worry about someone installing viruses or anything from this. I'm hesitant to add support for raw code for exactly your reasons (although I am told there are good sandbox tools for this tech these days, so I will look into it one day). On the page where you import a parser or something, if you got the parser from someone you don't trust, you might like to just give it a browse after you load it to see what objects it has. If it is inexplicably large and has a whole heap of dodgy transformation regex, then it may not be cool. Click cancel on that dialog, you'll never see it again. >>17147 Working ok here. If you are getting CloudFlare (or similar) captcha checks in your browser and you have a VPN, try hopping to a different region. Those captcha 'I'm under attack' modes are often regional (and temporary). >>17148 Ah, sorry, I think this is because of that custom domain. The hydrus downloader hooks everything together using 'url classes' that are domain based, so while that may be working on the tumblr engine, hydrus doesn't know that (yet). If you feel brave, you can try duplicating the 'tumblr file page' url class under network->downloader components->manage url classes and rejiggering the dupe to that domain.
(2.75 KB 512x111 k009.png)

>>17148 >>17172 Ok I did it since it was more tricky, can't promise this works but I dashed it out. Drop that png on the list in 'manage url classes' and then link it to the tumblr api parser in 'manage url class links'. Fingers crossed, it works.
>>17160 Thanks, I'll check this, and I'll add a unit test to make sure it doesn't break again. Sure, I will add some timestamps to the file_metadata call. This is being reinvented as I move to multiple local file services, but I think I can do it the new way now. And yeah you can't search by file_id/hash_id yet, but I will add this to system:hash in future I think!
>>17144 >>17163 I am afraid I have not been following the full technical details of this conversation. But if you have client.caches.db, client.mappings.db, and client.master.db, but no client.db, then you have very limited recovery options, but you should be able to roughly recover, with manual database work: - the files you had - what tags they had Essentially I think you could recover a 'hydrus tag archive' that you could then import in a fresh client via tags->migrate tags. Unfortunately client.db itself stores most core data and urls and ratings. I can help you recover this info, let me know. You can also email me at hydrus.admin@gmail.com or hit me on discord if you want to do one-on-one. We'd basically be poking around the remaining databases with SQL and constructing a manual HTA.
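For anyone in a similar spot, a safe first step before any manual SQL poking is to open the surviving database files read-only and see which tables and row counts are still intact. This is only a sketch; the directory path is hypothetical and nothing is written.

[code]
# read-only peek at what survives in the remaining hydrus db files
import sqlite3

DB_DIR = '/path/to/recovered/db'  # hypothetical location of the recovered files

for name in ('client.master.db', 'client.mappings.db', 'client.caches.db'):
    con = sqlite3.connect(f'file:{DB_DIR}/{name}?mode=ro', uri=True)
    try:
        print(f'--- {name} ---')
        # integrity_check may itself raise DatabaseError if the file is badly malformed
        print('integrity:', con.execute('PRAGMA integrity_check;').fetchone()[0])
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name;")]
        for table in tables:
            count = con.execute(f'SELECT COUNT(*) FROM "{table}";').fetchone()[0]
            print(f'{table}: {count} rows')
    finally:
        con.close()
[/code]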
>>17168 Yeah, AVIF is one of those newer image formats. The Alliance for Open Media has a fair number of test files here: https://github.com/AOMediaCodec/av1-avif/tree/master/testFiles Also, while I'm not super familiar with the state of Python libraries, I did take a look around to see what options exist, and it looks like there's a library that provides bindings for libavif in Python: https://pypi.org/project/avif/ It also looks like it supports a library called Pillow that appears to be a successor to PIL. One concern with that avif library is that it doesn't look like it's been updated since February, so it may not work with Python 3.10.
>>17175 Hello. Thanks for clarifying what can be done. It hurts. But I don't know if I can bother doing anything about what I have left. Knowing the files I had and the tags they had seems like a lot, but you make it seem like there was also a lot in the client.db that I wasn't conscious of being part of what made my hydrus experience. I don't know. I'm still archiving, but only using gallery-dl. Unlike hydrus, it doesn't retry downloaded URLs to check if it was reuploaded, in order to give me the new version. So, it's strictly worse. But I don't know. I feel like a husk of a person at this point. The conversation prior was exclusively about trying to recover my corrupted data after I decrypted the hard drive with my only copy of it. The conclusion was it can't be restored by any automatic means, if any at all.
(205.68 KB 780x749 zsfd.jpg)

(118.57 KB 1347x644 15520.png)

>>17168
>In future I will break this and make it so hydrus can store arbitrary data with an arbitrary file extension that it will remember.
>dubs
YAY! This is the feature I'm most looking forward to, as I have many hundreds of HTML, SRT, and files with many other extensions, which I currently get into Hydrus by importing a screenshot of them.
(664.16 KB 989x888 15656.png)

>>17170 >I have heard this is fixed in 467, let me know if it isn't for you! Yes! It is fixed for me in the v467 Linux client. Disclaimer: Post >>17077 stating the bug, and >>17122 declaring the issue solved are mine. Thanks.
Would the filesystem (NTFS vs exFAT) where the hydrus files (not the database) are stored affect performance?
>>17171 Thanks for the response, but it didn't really answer my question (I'm >>17167). Is it OK to rebuild Hydrus with Python 3.10? Does it already use it? Will this happen automatically with version 468? As a disclaimer, I'm a Linux noob, so I don't really understand how all this works.
Is there a way we can get the ability to clean out the search log the way we can clear out the file log? Obviously you'd only ever want to clear out successes, but I only have 8gb of ram, so I don't like keeping around urls I'm not going to be making any more use of.
I had a good week. I fixed bugs--including the recent annoying scanbar border clicking issue--rewrote some bad code, and added a couple of advanced file operations to search and the client api. The release should be as normal tomorrow.

>>17181 >>17167 Ah, sorry! Yeah, it should be fine, as far as I know. I haven't tried it myself, so there may be a couple of niggling issues, but I should think it's fine. The hydrus AUR package had a problem semi-related to this, but it seems to be fixed on that end now. And I have fixed my end too for tomorrow's release. I don't make the AUR build so I can't talk too cleverly about it though. I don't know how the 'manual update to 3.10' works here for AUR. If it is a button to click somewhere, my official recommendation, if you are not a super expert at Linux, is to wait a few days after tomorrow's release. If there aren't more user-reported Arch/AUR tracebacks for some other thing I need to fix, you are good to update. If the 'manual update' is more complicated than that, I am afraid I cannot help too much. My guess is it is a system update button to click or the AUR package will do it automatically. We're on 3.8 right now for my official frozen builds, and I expect to move up to 3.9 this year, after some brief testing to ensure github cloud is all ok with it and to see if it causes dll conflicts with an existing install etc.... I'll probably do it at the same time as when we go to the new Qt version.
https://www.youtube.com/watch?v=UGnbiBDmuBI

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v468/Hydrus.Network.468.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v468/Hydrus.Network.468.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v468/Hydrus.Network.468.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v468/Hydrus.Network.468.-.Linux.-.Executable.tar.gz

I had a good first week back. A mix of bug fixes and little improvements.

couple of highlights, otherwise all misc this week

Thanks for the quick issue reports over the holiday. The scanbar had an annoying thing where the new single pixel border was making it awkward to drag when in borderless fullscreen--you'd move your mouse to the bottom of the screen, but then you'd click the border and not the scanbar, and the whole video would move--this I have now fixed. Also an issue with new imports' 'pixel duplicates' data not being saved correctly--I have fixed the problem and scheduled all affected files to regen their pixel duplicate data.

I also put some time into multiple local file services. It was more background work, this time to do with system predicate fetching, so there isn't much neat stuff to show off unless you are an advanced user. Almost all the background work is done now, though, so I hope to start spinning up more complex search UI and actually adding new file services pretty soon. One neat thing, now that I have new file filtering tools, I was able to expand 'system:file service' to search for deleted and petitioned files.

full list

- misc:
- fixed an issue where the one pixel border on the new 'control bar' on the media viewer was annoyingly catching mouse events at the bottom of the screen when in borderless fullscreen mode (and hence dragging the video, not scanning video position). the animation scanbar now owns its own border and processes mouse events on it properly
- fixed a typo bug in the new pixel hash system that meant new imports were not being added to the system correctly. on update, all files affected will be fixed of bad data and scheduled for a pixel hash regen. sorry for the trouble, and thank you for the quick reports here
- added a 'fixed font size example' qss file to the install. I have passed this file to others as an example of a quick way to make the font (and essentially ui scale) larger. it has some help comments inside and is now always available. the default example font size is lmao
- fixed another type checking problem for (mostly Arch/AUR) PyQt5 users (issue #1033)
- wrote a new display mappings maintenance routine for the database menu that repopulates the display mappings cache for missing rows. this task will be radically faster than a full regen for some problems, but it only solves those problems
- on boot, the program now explicitly checks if any of the database files are set as read-only and if so will dump out with an appropriate error
- rewrote my various 'file size problem' exception hierarchy to clearly split 'the file import options disallow this big gif' vs 'this file is zero size/this file is malformed'. we've had several problems here recently, but file import options rule-breaking should set 'ignore' again, and import objects should be better about ignore vs fail state from now on
- added more error handling for broken image files. some will report cleaner errors, some will now import
- the new parsing system that discards source urls if they share a domain with a primary import url is now stricter--now discarding only if they share the same url class. the domain check was messing up saving post urls when they were parsed from an api url (issue #1036)
- the network engine no longer sends a Range header if it is expecting to pull html/json, just files. this fixes fetching pages from nijie.info (and several other server engines, it seems), which has some unusual access rules regarding Range and Accept-Encoding
- fixed a problem with no_daemons and the docker package server scripts (issue #1039)
- if the server engine (serverside or client api) is running a request during program shutdown, it now politely says 'Application is shutting down!' with a 503 rather than going bananas and dumping to log with an uncaught 500
- fixed some bad client db update error handling code
- .
- multiple local file services (system predicate edition):
- system:file service now supports 'deleted' and 'petitioned' status
- advanced 'all known files' search pages now show more system predicates
- when inbox and archive are hidden because one has 0 count, and the search space is simple, system everything now says what they are, e.g. "system:everything (24) (all in inbox)"
- file repos' 'system:local/not local' now sort at the top of the system predicate list, like inbox/archive
- .
- client api:
- the GET /get_files/file_metadata call now returns the file modified date and imported/deleted timestamps for each file service the file is currently in or deleted from. check the help for an example!
- fixed client api file search with random sort (file_sort_type = 4)
- client api version is now 24
- .
- boring multiple local file services work:
- the system predicates listed in a search page dropdown now support the new 'multiple location search context' object, which means in future I will be able to switch over to 'file domain is union of (A, deleted from B, and C)' and all the numbers will add up appropriately with ranged 'x-y' counts and deal with combinations of file repo and local service and current/deleted neatly
- when fetching system preds in 'all known files', the system:everything 'num files' count will be stated if available in the cache
- for the new system:file service search, refactored db level file filtering to support all status types
- cleaned up how system preds are generated
- .
- boring refactoring work:
- moved GUGs from network domain code to their own file
- moved URL Class from network domain code to its own file
- moved the pure functions from network domain code to their own file
- cleared up some file maintenance enum variable names
- sped up random file sort for large result sets
- misc client network code cleanup and type hints, and rejiggered cleaner imports after the refactoring

next week

More multiple local file services work. I'll convert autocomplete tag results to the new system like I did system predicates fetching this week, and then we should be pretty close to allowing real file searches across multiple and deleted file domains.
>4 thousand pixel duplicates
Can I just tell hydrus to automatically set the smallest of each pair as better and delete the other?
New to using additional downloaders, but when it comes to exhentai, would a downloader for it let us search for any tags and download all content that matches those tags? Asking because I found two downloaders called "exhentai galleries tag search" and "exhentai.org gallery lookup (full url)" respectively on Github, but after working with the Hydrus Companion browser extension to grab and import the Exhentai cookies, searching for anything, whether it be tag or url on those downloaders, returns nothing for the tags and ignores all the pages of a url. Did I need to do anything else after adding cookies? Couldn't see an exhentai login script so not sure if there's a place for that but the cookies alone don't cut it here. Any ideas?
>>17175 Can you just tell me outright what I have to do to find out what files are missing? You say I can use "tags->migrate tags", but you also say you'd be using "SQL" and "constructing a manual HTA". I don't know anything about technology; anything that isn't following instructions, I cannot do.

>>17163 Anon who walked me through recovery methods, if you're still here, can you tell me if there's any difference between the ddrescue image I created of my "bad sector" HDD, and the actual HDD? Besides the ddrescue image being slightly too big to fit on the "bad sector" HDD, I guess. I ask because I encrypted the hard drive for a reason. I don't care about the porn being viewed, but my browser cookies and the fact that I'm logged into the browser sync means anyone getting their hands on it would be a huge vulnerability for me. I would like to format it if possible, but I was wondering if, with infinite budget, maybe forensics would be able to undo how veracrypt decryption scrambled the corrupted data, but only on the original hard drive, and not the image. Even if they could, I know it would never reach the point of testdisk being able to restore its original location and everything; only photorec would give me a file with a garbage filename. But still. I say "bad sector" in quotes because, even though you avoided saying it, I think I understand that there is no "bad sector" on the HDD, but rather, it just made an error when reading data, which could have been corrected without issue. I actually experienced the same happening in the past, only, I didn't notice it until I restarted my PC (which I very rarely did). Chkdsk fixed the error. Then I ran a "bad sector" scan with the same program I mentioned way earlier, and it found nothing. So the reality you spared me from confronting was that it was a healthy hard drive, only, it made an error. It can be safely used, provided you make regular backups to correct its susceptibility to corrupting data.
>>17187 I'm not sure if I get the question correctly, but the image contains everything your drive contained. The 2TB image will not fit on the 2TB drive, because you need extra metadata to store on the disk. If the disk image is 2TB and your disk is 2TB, you need some space on the disk to say "this file is 2TB large, starts at sector 1, ends at sector 311383187 and is named disk.img, it is not compressed, etc". It is a 1:1 copy of what is contained on the real drive the moment you took the image. Formatting a drive will not delete anything on it; you could use testdisk/photorec on the formatted drive (or an image of it) and recover almost all the data. You have to physically overwrite the entire drive with random stuff so there is nothing to recover anymore. I believe veracrypt does this when you try to encrypt a disk (it should be a checkbox). With disk encryption (veracrypt), you will only recover garbage, since you need to have access to the key for the data on-disk to make sense.
>>17188 I understand those parts. I think you even answered what I was trying to ask, by your explaining the properties of everything involved. I was wondering if the original 2TB hard drive had any bias to it that the ddrescue image of it doesn't. Which is to say, when observing the image versus observing the hard drive, whether having the actual hard drive would betray any further context to where the data used to be stored. I guess not. I think you even elaborated that veracrypt might've completely turned my corrupted data into garbage via my "decrypting" it. So there would be nothing to "restore".
>>17189
>I was wondering if the original 2TB hard drive had any bias to it that the ddrescue image of it doesn't. Which is to say, when observing the image versus observing the hard drive, whether having the actual hard drive would betray any further context to where the data used to be stored.
You need to wipe both the original disk and the disk that contained the image, if that is what you were trying to ask. There is nothing on the original disk that you could restore that is not in the image. Some glowing institutions may have ways to restore data that was previously overwritten, but your disk contained ciphertext, so if you overwrite the original disk again, it should be good. If you are worried about glowies, you shouldn't be using windows anyway, so I assume you are fine with that. You should also start encrypting drives as soon as you get them, so you don't have to worry about what is/was stored on them at all.
>>17190 I understand. Thank you. I wasn't completely educated on three-letter initialism bullshit and such, but I also wasn't trying to be vulnerable to them. I tried my best with Windows 7, at least as far as I knew. Someone would eventually tell me I had to run a specific program to do Windows 7 the proper way, and that my merely running a .bat to uninstall specific updates didn't do it right. But I tried to do well on 7, for what it's worth. Since my OS is randomly damaged, I have to at least reinstall my OS anyway. But I'm not going to reinstall 7. I literally only had hydrus + drivers on this hard drive, so when I move on from this damaged operating system, I'll try linux. But, again, thanks. I will probably try to compress the ddrescue image and store the .7z or whatever file on the "bad sector" hard drive. It's stupid, but, it's otherwise just going to be formatted (which is to say, overwritten 7 times or whatever via some usb boot thing) and sitting there unused until I ever get a desktop setup that can use it as a backup. But not a primary backup, obviously, since it produced corrupted data in the first place. Thanks again for all the help. I still go over it in my mind every day, what led to this happening, and how it could have been avoided. It's hard dealing with this reality. I really in my understanding tried my best. It's really hard. But I'm still here, and the world keeps spinning.
(84.15 KB 1146x1148 bait.jpg)

>>17191
>I wasn't completely educated on three-letter initialism bullshit and such
>uppity as fuck
Is there any way to have hydrus check whether any of my files are malformed / damaged? I have been running into these errors after migrating my files to another drive with free file sync and would like to fix any files that may have somehow been copied problematically (some files work on the previous drive, but not on this new one).
>>17193
>malformed / damaged?
A sign that the second drive has bad sectors and its lifespan is nearing the end. Time to buy a new one, fast.
>>17193 file maintenance under database menu
Just want to say thank you for this awesome piece of software, and ask: when do you plan to add a video duplicate detector?
Here's a suggestion: gallery lists. Works like a simple downloader in that it downloads 1 thing at a time sequentially, but it organizes them like the gallery downloader. Some sites like sad panda and sankaku have anti-scraping methods that lock you out straight away if you're downloading multiple things at once, but 2 at a time is fine. So you could add a bunch of tags or panda links to a gallery, but only one is unpaused, and once it's done it unpauses the next, and so on.
>>17176 Thank you, this is exactly what I was looking for. I'll explore that Pillow integration too. If it is as simple as a bit of pip, I'll happily turn this on (optionally) next week. >>17180 Nah, you should be fine unless you have ten million files or something. I can't speak to specifics since I'm not an expert in it, but I think exFAT is tuned for external flash USB drives in some way? Maybe it is slower in some high performance metrics at the cost of disconnect safety or something, but for hydrus it doesn't matter to human perception if files are delayed a couple milliseconds. Thumbnails it is a little more important since I'll be fetching like 50 at a time, and database it is very important. >>17182 The subscription system does clear out its search log. I forget but I think it starts to delete old stuff at around 100 entries. If you are worried for your downloader pages, I think you are fine, even on 8GB. Each row here is small compared to other data, particularly file imports (my rough feeling is each file log object is about ten times the size of a single gallery log object in memory), so if I were to try to drive down memory use, I would start somewhere else first. My advice to keep downloaders clean is just to clear them out when they are done. Although, if you can tell me about a scenario you are doing that does have very high gallery url count, I'll be more open to putting time into this. Are you doing mass md5 searches or similar? In a side topic, I keep meaning to integrate some Qt-safe memory profiling into hydrus one of these days. When I have that tech finally in, we'll be able to talk with more confidence and make better plans about what is really eating memory.
>>17185 That's the hope. I am going to write the first version of an (optional) system of automatic duplicate decision resolution. Ever since I started dupes, the real problem has not been finding dupes but the human time spent processing them. I will start off with a simple hardcoded ruleset you can turn on, along the lines of 'delete png pixel dupes of jpegs?', and then iterate on that concept and generalise the rules into something you can customise, and then start to integrate more metrics like 'images are 99.98% similar according to (algorithm)'. Ideally, in a few years you'll only be looking at a handful of complicated dupes, and everything easy will be dealt with automatically.

>>17186 I'm sorry to say I am not familiar with that downloader (I don't use the site), so I don't know the details. As far as I know, if you use HC to sync your cookies you are good to go. Sounds like the downloader is due an update. While search fails, what happens if you drag and drop an exhentai 'post' URL (like a specific manga gallery) to your hydrus, does it parse and download that ok? If that's ok, then I expect the search downloader is out of date. If that also fails, then it may be something like a CloudFlare block (if you are getting 503/captcha related errors), or a significant sitewide change or login issue if you are just getting 'ignored'/'nothing in that document' problems.
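To illustrate the kind of hardcoded rule described above, here is a minimal, hypothetical sketch in Python--not hydrus code, and the field names are made up--of what an automatic 'delete png pixel dupes of jpegs' decision might look like:

# Hypothetical sketch only: FileInfo and this rule are illustrative,
# not hydrus's actual duplicate-resolution API.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FileInfo:
    mime: str   # e.g. 'image/jpeg', 'image/png'
    size: int   # bytes

def resolve_pixel_duplicate(a: FileInfo, b: FileInfo) -> Optional[Tuple[FileInfo, FileInfo]]:
    """For a pair already known to be pixel-for-pixel identical, return (keep, delete),
    or None to leave the decision to a human."""
    if {a.mime, b.mime} == {'image/jpeg', 'image/png'}:
        # the png carries no extra visual information over the identical jpeg,
        # so keep the jpeg and delete the (almost always larger) png
        keep, delete = (a, b) if a.mime == 'image/jpeg' else (b, a)
        return keep, delete
    return None

A customisable version could then just be an ordered list of rules like this, tried in turn until one returns a decision.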
>>17187 Sure, although this may ultimately get complicated and need some back and forth. It depends on your version, and more complicated situations won't be a simple fixed path. First off, find the sqlite3 executable in the basic install's install_dir/db folder. Put that in the same directory as your database files. Then run it and copy/paste these lines:

.open client.caches.db
.out my_hashes.txt
select HEX( hash ) from local_hashes_cache;
.exit

Check the new my_hashes.txt file. Fingers crossed, it has all the hashes of all the files you had. If you know how to do some scripting, you'll be able to use that list to compare with another list (e.g. the same process on a currently 'good' client) to figure out what is missing, but if you don't know how to do that, let me know more details about what is where and exactly what you need to know from that and we can go through it. If you get an error about 'local_hashes_cache doesn't exist', then it will be more complicated, but we still have some ways to try. Let me know.
>>17200 Sorry, my mistake. Don't use ".out my_hashes.txt", use ".once my_hashes.txt". Check out ".help" in the terminal to see all the commands.
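Putting that correction together with the previous post, the full sequence to paste into the sqlite3 shell is:

.open client.caches.db
.once my_hashes.txt
select HEX( hash ) from local_hashes_cache;
.exit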
>>17198 > Are you doing mass md5 searches or similar? Yes, I was using the send url option on iqdb-tagger, which sends md5 searches for Gelbooru. I also sometimes run pretty big import jobs while I sleep and I'm sometimes worried the url count will balloon to the point of crashing Hydrus, but that's a different problem. (I know that's not the intended usage, but I get a decent ratio of treasure to trash with the queries I run.)
>>17196 Thanks, I am glad you like it. I don't know when video dupes will be good, but I did build my original system to support it in the end. We have an idea of how to do it. Me and another couple of users did some tests last year on that idea and we had good results. So basically what I have to do is:

Turn that idea into something real and fast
Update the dupe filter to handle videos (I'm sure it needs more UI actions to be user-friendly)
Add some more UI and checks for things like 'this is a 2 second gif of this 12 second webm'

So it just comes down to when I can fit that medium-big job in. Before I do video dupes, I also want to spend time on updating the duplicate filter overall. The code behind it sucks right now, so I want to clear some of that out as I do pixel dupes. Then I'll be in a better position on this.

>>17197 Thanks, this is an interesting idea. My bandwidth system (network->data->review usage and rules) is over-engineered but does allow some of this fine control. If you want to only hit a particular domain once every seven seconds, it can do that. But as you say, some APIs have tokenised downloads that time out after 30 seconds, so it would be really nice to have more control over 'do this thing to completion, then start on this thing', so I really need the downloader system in general to start queuing items more intelligently. I am keeping this in mind. Another couple of users have told me about similar ideas. I'm going to be completely overhauling download scheduling this year, fingers crossed, so I hope I can get some improvements in then.

>>17202 Thanks. I had forgotten we had some more automated ways of populating these. I'll see if I can just write a quick 'clear this shit out' menu action for next week. Let me know how it works for you.
>>17203 I always just use the clear successful imports menu for file imports so I imagine something similar for search pages will do perfectly. Thanks for the consideration.
>>17195 Thanks for the help, I'll run an 'if file is missing/incorrect, then move file out and if has url try to redownload' job. Not entirely sure where the client will move the files to, but I'll find out soon enough.

>>17194 I really hope this isn't the case, since the drive is barely 6 months old. I suspect it might have something to do with doing a freefilesync while the client is working (as well as only using the compare filename / date modification options), but I'll definitely make more regular backups, thanks for the heads up. So far I think I've managed to fix the problem by running a compare file content free file sync from my old drive to the new drive; there were around 5GB / a few thousand files that managed to be affected somehow.
>>17200 >>17201 I did it and it worked. Also, I noticed the "help my db is broke" file in the same folder. I skimmed through it a bit, and noticed it mentioned "chkdsk" (which I was scared to use when I needed it), and that the main thing to backup is just the four files I only learned about after it was too late for me. For the many years I was using hydrus I thought I couldn't afford a backup, cause I thought the only way was backing up all the files in it. I guess that doesn't make sense. But I wish I knew of this before. Not that that has anything to do with anything anymore. But I would've actually read a "help my db is broke" file before anything bad happened, unlike all the countless times people just out of context told me to backup. I felt I could never reach a backup that required copying my entire 2TB hard drive. But the possibility of a hard drive failing is always a possibility, so I would've read such a thing before I immediately needed it. Instead when my hydrus couldn't boot, I felt I had no recourse, then my irrational confidence in my decision making made me turn a recoverable problem into an irrecoverable one. But anyway. I'm still trying to lift my fingers, despite this feeling. But, I can't do any scripting. I only ever installed via default settings, and I didn't move anything yet, so my hydrus media is in "C:\Hydrus Network\db\client_files". I just want to know which files are missing from my hydrus media, to try to retrieve them from the internet again. If I can know the tags the missing media had, it would help. But I think that's all I imagine I'm hoping to accomplish.
Hello! I have the human readable dates setting turned off, but in some places I feel like a human readable date should be used regardless of the setting, like in the download manager logs for example:
>file recognised: Imported at 2018-10-11 15:02:01, which was 2018-10-11 15:02:01 before this check.
The second date tells me nothing new currently. Also, is there a reason the download parser system is what it is? I'd love to have the option to just import a python class that accepts a url and responds to hydrus with a direct link to a file and a list of tags, instead of doing a billion string conversions to achieve simple logic and selection/manipulation. Thanks!
Is there a way to download all posts linked to a parent post, or all of a user's favorites on sankaku complex? Two random examples:
for a gallery, typing "parent:29667083" in the search bar
for favorites, typing "fav:evar" in the search bar
>>17205
>I really hope this isn't the case, since the drive is barely 6 months old
From personal experience I positively know that it takes just a small accident, like knocking over the external drive, for damaged files and 0-byte files to show up; and their number increases as time passes by. The only solution I know is to replace it as soon as possible and to discard the troublesome drive.
I had a mixed week. The tag autocomplete rewrite was more work than I expected. While I am happy with what I did, most of my time was spent on boring background and cleanup, so there is little on the front end to show for it. Beyond that, I fixed the Linux/macOS write permissions checking problem and did a couple small things in the Client API and downloader search log. I will recommend the release just to people interested in those issues. The release should be as normal tomorrow.
(54.15 KB 423x674 ClipboardImage.png)

By losing data, does this mean session data? I reloaded hydrus and found out that some changes I made with regards to what pages are open were reverted to an earlier state. I hope there's nothing else too terrible going on. I have a downloader with a weight of almost 2 million, which I'll try to cut down later.
Is there a process that "undeletes" files? I've got a bunch of mistakenly deleted files, and I don't see an obvious way to get around Hydrus telling me the file was previously deleted. It still recognizes the hash, even if I manually add the file without a downloader or anything.
Oh, and to be clear, the files are already physically deleted. Didn't realize the mistake at first, and it was a while ago. If I look for the old match in the Hydrus client, I see the generic thumbnail with the old tags, and no option to restore or forget the file.
>>17212 >>17213 Are you trying to reimport them? If you are doing a file import, there is a setting that allows you to reimport previously deleted files. There should be one for downloaders too, but I am not very sure about that
>>17214 Yeah, I was trying to reimport from scratch. I did find the option to turn off the deletion check under "file import options" for Gallery downloads. Thanks
>>17211 Yeah, just session data. Your files and mappings are all safe, but anything front-end like a downloader page could lose shit in this situation and maybe end up just losing a page on next load. Hit the down arrow on the 'file log' of the big downloader page's queries and delete completed items. You won't be able to bring them back on that page, but it cuts down the size of the page significantly.
https://www.youtube.com/watch?v=EStYlmgyOgE

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v469/Hydrus.Network.469.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v469/Hydrus.Network.469.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v469/Hydrus.Network.469.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v469/Hydrus.Network.469.-.Linux.-.Executable.tar.gz

I had a mixed week. I got some good work done, but only a few of the things are user-facing. This is a simple release.

highlights

So, I was successful in getting tag autocomplete search to work on multiple and deleted file domains. All the code that previously searched for tags on 1 space now searches n, and any of those n can also be a file domain's deleted files too. I'm actually really happy with how it went, but it was much more work than I thought, and more complicated. I also discovered some other work I will have to do before I can properly allow searches on these domains, so rather than release a borked feature, I tied off my work and am making no big front-end changes. A couple of weird sibling lookup bugs and search inefficiencies are fixed by accident, but that's it for tag search.

I might not have put out a release this week, but I messed up something important last week with the 'is database read-only?' check. On Linux and macOS, the test was checking too many permission bits and causing false positives, not letting users boot until they set at least 644 on their database files. The guy who puts the AUR package together helped me fix it all, so if you had any trouble with 468, please give this a go and let me know if you have any more problems.

Otherwise, the Client API can now give hashes on file search requests, and I fixed an issue positioning the media viewer video in some cases.

full list

- misc:
- the 'search log' button and the window panel now let you delete log entries. you can delete by completion status from the menu or specifically by row in the panel (just like the file log)
- fixed the new 'file is writable' checks for Linux/macOS, which were testing permissions overbroadly and messing with users with user-only permissions set. the code now hands off specific user/group negotiation to the OS. thanks to the maintainer of the AUR package for helping me out here (issue #1042)
- the various places where a file's permission bits are set are also cleaned up--hydrus now makes a distinction between double-checking a file is set user-writable before deleting/overwriting vs making a file's permission bits (which were potentially messed up in the past) 'nice' for human use after export. in the latter case, which still defaults to 644 on linux/macOS, the user's umask is now applied, so it should be 600 if you prefer that
- fixed a bug where the media viewer could have trouble initialising video when the player window instantiation was delayed (e.g. with embed button)
- .
- client api:
- added 'return_hashes' boolean parameter to GET /get_files/search_files, which makes the command return hashes instead of file ids. this is documented in the help and has a new unit test (a quick sketch follows this post)
- client api version is now 25
- .
- multiple local file services work:
- I rewrote a lot of code this week, but it proved more complex than I expected. I also discovered I'll have to switch the pages and canvases over too before I can nicely switch the top level UI over to allow multiple search. rather than release a borked feature, I decided not to rush the final phase, so this remains boring for now! the good news is that it works well when I hack it in, so I just need to keep pushing
- rewrote the caller side of tag autocomplete lookup to work on the new multiple file search domain
- rewrote the main database level tag lookup code to work on the new multiple file search domain
- certain types of complicated tag autocomplete lookup, particularly on all known tags and any client with lots of siblings, will be faster now
- an unusual and complicated too-expansive sibling lookup on autocomplete lookups on 'all known tags' is now fixed
- .
- boring cleanup and refactoring:
- predicate counts are now managed by a new object. predicates also support 0 minimum count for x-y count ranges, which is now possible when fetching count results from non-cross-referenced file domains (for now this means searching deleted files)
- cleaned up a ton of predicate instantiation code
- updated autocomplete, predicate, and pred count unit tests to handle the new objects and bug fixes
- wrote new classes to cover convenient multiple file search domain at the database level and updated a bunch of tag autocomplete search code to use it
- misc cleanup and refactoring for file domain search code
- purged more single file service inspection code from file search systems
- refactored most duplicate files storage code (about 70KB) to a new client db module

next week

I will update pages and the media viewer canvas to support multiple local file domains so we can actually display these new search results properly and have things like 'should we remove this file that was just deleted?' logic work correctly. After that, the only task left for search is to design an optional advanced control for the tag autocomplete dropdown to handle multiple/deleted file domain selection. Then, finally, fingers crossed, you'll be able to fully search multiple domains at once and we can move on to actually creating some, ha ha ha.
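Following up on the return_hashes item in the client api section above, here is a rough sketch of calling it from Python. This is not an official example--the access key is a placeholder, and the parameter encoding is my reading of the Client API help, so double-check it against your API version:

import json
import requests

HYDRUS = "http://127.0.0.1:45869"   # default Client API port
HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_ACCESS_KEY"}

# 'tags' is a JSON-encoded list; requests will percent-encode it for us
params = {
    "tags": json.dumps(["skirt", "blue eyes"]),
    "return_hashes": "true",
}
r = requests.get(f"{HYDRUS}/get_files/search_files", headers=HEADERS, params=params)
r.raise_for_status()
print(r.json().get("hashes", []))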
Trying to open a new search page from the right click menu on a tag option in your current search query leads to "'FileSearchContext' object has no attribute 'GetFileServiceKey'" on 469.
(41.86 KB 382x521 img.png)

is there a way to sort images by tag name? I have some images labeled #2000s, #1990s, #1980s, etc. I was wondering if there was a way to view them in ascending/descending order by tagged year (this is different from the date imported/modified).
>>17219 I think you can only do that if the tags you're interested in are in a namespace. You'd go to: sort by -> namespaces -> custom -> enter the namespace as per the instructions. I'd suggest sticking all your ####s tags in something like "decade" and optionally have individual years parent their respective decades (e.g. year:2008 -> decade:2000s). That way, you can sort by namespace with "decade-year" which will sort first by decade and then by year (if it's present; files without a year will still be grouped with files sharing their decade).
>>17219 As >>17220 said, you need namespaces. So your tags need to be converted as, for example: date:1890s date:1900s date:1910s date:1920s ... and so on. It is not a big deal, just do a search for "1890s", then tag those files with "date:1890s", and finally proceed to delete the obsolete "1890s" tag. Done.
>>17221 I forgot to mention, in your >>17219 screenshot, the search is "1930s OR 1940s OR 1950s OR ..." Using namespaces, the search is as simple as "date:*".
Shortcut to set all selected as potential duplicates when?
(16.66 KB client log.txt)

Having issues updating from 464 to 468 and 469. Client.caches.db bloats to several times its size, and the next day I get malformed database errors when deleting files; it downloads and tags new ones just fine. As per the error message I checked the disk, which had a few errors the first time, but the other two times it was fine. Then the integrity check: a couple of errors in client.caches.db, but nothing a clone doesn't seem to fix, yet within a few hours the same problem appeared. Chkdsk returned nothing, so I rolled back to my backup of 464, hoping it was just a problem with the version. A week with no issues later I tried to update to 469; it seemed to work fine at first, but again the next day, malformed database on delete, nothing on chkdsk but there are some problems in client.caches.db. How certain is the warning that this cannot be caused by software? It is in a veracrypt volume to hide from prying eyes, but nothing else on the drive seems to be having issues. Anyway, I'll try updating version by version next.
Alright, got some more info. Updating to 465 failed. The issue does seem to have something to do with veracrypt, since remounting the volume causes it; it seems to run fine before that. I've been using this combo for years... Deleting the caches db and letting it regenerate did not work; it corrupted after remounting. Grabbing a fresh hydrus 469 and putting a couple of images in worked fine, even after remounting. Using my backup device, also running veracrypt, did not work either, so it's not the physical hard drive. Guess next up I'll just drag it onto a regular hard drive; it's gonna take a while but should show whether it's related to veracrypt or not.
Is there a way to tell a gallery to recheck files for new tags?
I have been playing around with this for many hours and just wanted to say thanks for making it - it is great! I still have a lot to learn about how to use it, and there are features I haven't even touched on yet. Do you ever have plans for creating a system to store files in a certain order relative to a parent file? For like a comic, or images that have a specific start and order to them? I would love that. But either way, just wanted to say thanks!
>>17227 P.S. I know the guides say it isn't currently designed for this, I was just wondering if it's on the table.
>>17228 It's not exactly the same, but you can use tags for that. Simply tag the files with title:name and then page:1 and so forth. Then you can sort by page number and group by title.
>>17205 It will move incorrect files (i.e. malformed due to hard drive problems) that have no URL to a new 'incorrect files' directory in your base 'db' directory, which you can open from file->open->database directory.

>>17206 Thank you for letting me know. I will update my help text on backups to talk about this more.

>>17207 Ah, thank you, yeah, I'll fix that with an override. I will give the rest of the program a skim, but I will likely miss some, so please let me know when you encounter others that seem wrong. I basically reinvented the wheel for the parser system. It is a fault of mine, I have done it elsewhere too. My main hesitation about letting people throw python scripts around was just that I didn't want malicious stuff going around and/or the bigger general mistakes that can happen when arbitrary code is flying around. Not to mention secondary issues like: I can upgrade the lego bricks in my sandbox programmatically, but arbitrary code would need to be manually updated every time I rolled out a new change to the file import calls or whatever. Now we have the Client API, I recommend that for any custom solution you need to write. I still like my parsing system overall and am dedicated to making it nicer to edit and share in future. If you have had a particularly awkward editing situation, please let me know how I could make it better.
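For reference, here is a minimal, unofficial sketch of the 'custom solution via the Client API' idea--you write whatever logic you like in Python and just hand hydrus the URLs you want imported. The access key is a placeholder, and it assumes the API is enabled on its default port with 'import URLs' permission:

import requests

HYDRUS = "http://127.0.0.1:45869"   # default Client API port
HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_ACCESS_KEY"}

def send_url_to_hydrus(url: str) -> dict:
    # hydrus runs its own downloader/parser on whatever URL it receives
    r = requests.post(f"{HYDRUS}/add_urls/add_url", headers=HEADERS, json={"url": url})
    r.raise_for_status()
    return r.json()

send_url_to_hydrus("https://example.com/some/post/url")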
>>17208 I am not super familiar with sank, but that sounds good. Do those queries return the results you expect on the site in a browser? If so, it should plug into the hydrus gallery downloader the same way. Just try pasting those searches, colon included, into the hydrus search. However, sankaku has been broken on and off on hydrus. They always have bandwidth problems, so I think they engage cloudflare-style blocks from time to time. One other thing is sank hides spicy content unless you are logged in, so you might want to use Hydrus Companion to copy your browser's login (and cloudflare) cookies to hydrus. That parent:29667083 query 'works' for me here except I get nothing but the 'sign up for plus' link since my test client here isn't logged in.
>>17218 Damn, thank you! This is a stupid problem from the rewrites, something I missed. I will fix it! >>17223 Thanks, I will add this. I have shortcuts for other duplicate actions, I think this should be easy to extend. >>17226 This is a tricky question. Short answer is yes you can do it, it is a bit wasteful, but basically just run the same query again with 'tag import options' set to 'force page fetch even...' on both URLs and hashes. Longer answer is this is basically the reason I wrote the PTR. If you want to make a system that re-checks, you then have to think which files will you recheck, which sites, and how often? If you have 100,000 files, should I hit up every booru I know about for each of them every month? Even if it were every three months, it would be easy to run up millions of total html hits for every hydrus user, over and over, just to get perhaps a few hundred new tags. The PTR is tuned to distribute tags efficiently and incrementally. It has proved so popular and successful that it now has its own problems (with almost 1.4 billion mappings, it needs about 55GB of SSD storage space now). But it means if another user parses that html three months after you downloaded, you can get the tags he found without having to do much work. Overall, I expect I will eventually create an automated system to recheck certain files. Particularly with things like md5 lookups (some boorus let you do hash based searches, which opens up some neat technical options). I'll write a maintenance system that has speed throttles to make sure we don't accidentally do too much work and be rude to servers. And for the PTR, in future I will write several filters that let users just sync with parts of it, so they don't have to spend a load of CPU and HDD to sync with it if they don't want to.
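As a rough illustration of the md5 lookup idea mentioned above (not hydrus code): many boorus accept an md5 search term, so a maintenance job could hash a local file and build a search URL from it. The gelbooru-style 'md5:' query here is an assumption about that particular site; adjust it for whatever source you use:

import hashlib

def md5_hex(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

md5 = md5_hex("example.jpg")
# a hypothetical booru tag search by hash
print(f"https://gelbooru.com/index.php?page=post&s=list&tags=md5:{md5}")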
>>17224 Hey, I am sorry you have had trouble. Barring me being a dumbass in some way, I am pretty certain that malformed can't be caused by normal SQLite. You can look up longer technical explanations on their site if you are interested, but in general it is one of those things where it is probably possible but so rare that other weirder explanations, like a bug in your OS's disk driver or something, are much more likely. And since hard drive hardware failure is so much more common in comparison to anything else, it is nearly always the first thing to look at. I've never seen it caused by anything but weirdass hard drive hardware/communications in one way or another. Furthermore, if you are not getting software-level errors while this work is going on, I think that eliminates most problem scenarios. If there were a huge error that had to roll back 100MB of a half-committed transaction, there would be a bunch of opportunities for data writing to go wrong, but if the client is just doing work like normal and then hits a malformed error out of nowhere, that points to a random external problem. Since you are on veracrypt, that's my top suspicion. I'm sure you have it set up right, but I know some users who had malformed problems when they had veracrypt set to auto dismount. Could there be anything like that, or a write buffer flushing option somewhere, that could be causing a rough disconnect? Since you have a clean db but then a lot of work can cause a malformed db, I suspect something about the way it is writing data when under stress is unreliable. Normally though, an encrypted container is super fine and lots of users run hydrus like this, no worries. But I know veracrypt has some funny extra options. Other not dissimilar problems can be auto-backup software, or cloud storage software (like google cloudshit whatever), which regularly try to read data from the database while it is in operation. To help eliminate possible causes, I think I would suggest running the database temporarily out of the veracrypt container, and then running it on a different drive. If it runs ok outside the container, you know it is a veracrypt setting; if it runs badly anywhere on that drive, you know it is the drive. Let me know how you get on.
>>17224 >>17233 >>17225 Shit sorry this is why I should read ahead more. Good luck, let me know how you get on. Write buffer/dismount setting somewhere is my guess, by your symptoms here.
>>17227 >>17228 I am glad you like it! Let me know if you run into any trouble, and once you are familiar with things, I'd love to know what you found easy and difficult to learn. Feedback from new users is always useful. I have a long term plan to support file ordering metadata. There's a system in the client for eliminating duplicates. That has a side system tacked on right now for collecting 'alternates', which are basically files that are similar but not dupes (work in progress, costume changes, clean/filthy versions, etc...). Hopefully this year I will do a big overhaul of that alternates system and allow qualitative labelling on file relationships and indexing of related groups. This tech will allow ordering of shorter 4koma style collections and WIP progress and eventually scale up to allow full comics, fingers crossed. CBR/CBZ support should also come in the next couple of years and will have related tech on the media viewer side, improving how you would navigate these file relationships, maybe even things like bookmarks. That said, as I insist in the help, my current support for comics is pretty bad, so stick with your ComicRack or whatever for now, they'll likely always be able to do better than me.
>>17230
>Thank you for letting me know. I will update my help text on backups to talk about this more.
But it can't be proposed under merely "backup" unless it's specifically buffered with the fact that what you consider to be your personal hydrus environment is just four files. It's just the fact that, for me, truly, backing up my entire 2TB HDD, which only had my hydrus database, was impossible. Since I thought the only backup I could make required a second HDD of the same size, I felt I could never have reached a backup. So every time someone just told me to "backup", it meant nothing to me. No matter what is on the other side of a "backup" link, I would never have bothered reading it, because I assumed the only backup possible was backing up my entire 2TB HDD, which was impossible for me. You would have to amend the title of the resource from "backups" to being "backups and limited-scope backups", or something, for me to feel there was anything I could do in my situation. And the actual reason I lost my data was because my decision making after my hard drive corrupted my data made me render it irrecoverable. So even a further amendment of "backups, limited-scope backups, and a broken db" would lead me to it if I remembered it once my hard drive partially corrupted my data.

I don't expect anything to change because of my situation. In the end, the reality is most people who hear of backups will make backups. It's enough for them to be told or reminded it's an option. For me it was different. I was using hydrus for years without any backup in any capacity. I don't know how I would've come across it being possible on a limited scope, or what to do when my db broke. I didn't come across it when I needed it.

Also, I am still trying to figure out which files are missing, and the tags they had. Can you elaborate on what I should do now? I did the ".once my_hashes.txt" thing and it worked. And, I am missing my client.db, but I have the other three .db files.
>>17235 >I have a long term plan to support file ordering metadata. There's a system in the client for eliminating duplicates. That has a side system tacked on right now for collecting 'alternates', which are basically files that are similar but not dupes (work in progress, costume changes, clean/filthy versions, etc...). I'm not the one you replied to, but how would I get Hydrus to check for "similar" files in my database?
>>17237 new page -> special page -> duplicate processing
Is there a way to limit results to tags that only show up on one image (i.e. only search for creator tags that exist on only one image)?
(829.78 KB 1330x665 za.png)

(712.40 KB 1157x669 zb.png)

(123.70 KB 733x608 zc.png)

>>17239
>limit results
Sure.
1- Search for a tag. See pic 1.
2- From the tag list (showing the tags for ALL files in the search), double click on the tag of interest. See pic 2.
3- Done. See pic 3.
Some other suggestions.

Steal some of honeyview's features:
>fixed zoom, once toggled it won't change the zoom level. If you zoom in to 50% and change image, the next image will also be zoomed in to 50%
>when files are zoomed in pressing pgdn will scroll down, if it's already on the bottom it'll change to the next image. Extremely useful for manga, large imagesets, or just seeing long pictures without having to move it around with the mouse, also arrow keys let you move around the image.
(these are the honeyview keys, changing them to different keys to keep the current scheme or just leaving it up to the user via shortcuts also works)

for galleries:
>sorting items by % downloaded
>sorting items by number of items downloaded, regardless of total
>sorting items by number of items left to download, regardless of total

>>17232
>Overall, I expect I will eventually create an automated system to recheck certain files. Particularly with things like md5 lookups (some boorus let you do hash based searches, which opens up some neat technical options). I'll write a maintenance system that has speed throttles to make sure we don't accidentally do too much work and be rude to servers.
Since some sites (e.g. boorus and panda) have fixed urls, couldn't you let the user somehow tell hydrus to just trust that the file is the same and only download the tags?
>>17241 Her face doesn't look right
It looks like updating the DB on an unsecured drive worked. I have not tried fiddling with veracrypt settings, but doing it this way turned out to be quicker than I expected. In case anyone has troubles updating to 465+ using veracrypt:

Make a folder named "Hydrus Network" somewhere on an unencrypted drive
Make a new folder in there named db
Copy over everything in 'Hydrus Network/db' except for the 'client files' and 'server files' folders
Download and unzip a new version of your choice to the folder containing the new 'Hydrus Network' folder (I used 469)
Open client.exe
It should mention that it can't find the files
Copy the path of your 'client_files' folder and put it in
It should start updating, wait for it to finish
Verify that the newly updated DB is properly working by doing an integrity check after a pc restart (database -> check and repair -> database integrity; see the snippet just below)
Copy over all the DB files to the actual install
Don't forget to actually update your actual install to the same version
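If it helps anyone following these steps, the same sort of check can also be run from the sqlite3 executable that ships in the install's db folder (this is plain SQLite, not a hydrus-specific command), e.g.:

sqlite3 client.caches.db "PRAGMA integrity_check;"

It prints 'ok' if the file passes, or a list of problems otherwise; the in-client menu mentioned above does the equivalent from inside hydrus.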
I had a great week. I finished the multiple file service rewrites and can finally start extending search. First off, for tomorrow's release, all advanced users will be able to quickly search their deleted files. I also fixed some bugs and improved some file handling, including the test that detects whether a video has an audio stream or not. The release should be as normal tomorrow.
>>17230 Thanks for the parser explanation! Currently I'm just trying to figure out a way to import pixiv ugoira (their animated gif replacement that is just a set of static images and instructions on how to combine the frames) with subscriptions. Remaking the entire system to use the client API seems wasteful because the static images get imported without problems. I've had two ideas for ugoira so far:

1. After each sub fetch, look for new veto'd entries in whatever table in whatever sqlite file holds subscription info, and then feed just those urls into my script to download the ugoira, turn them into proper files and add them via the API. The downside here is I can't just leave hydrus on while I'm busy with other stuff and have it do everything by itself in the background.

2. Make a redirect in the parser, in case the entry is an ugoira, to a custom site, like the twitter video parser does, and have it download the ugoira, convert it into a video, and then just respond with the resulting file bytes to hydrus. The only caveat is I'd need to set the hydrus global connection timeout to something like 200s to allow the download and conversion to happen.

If there's some other way to do it that I'm missing or one of those ways will lead to issues, I'd be happy to hear what your thoughts are.
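For the 'turn them into proper files' step, one common approach (sketched here with assumptions: the 'file'/'delay' field names are what I believe the pixiv ugoira frame metadata uses, and the output settings are just one reasonable choice) is to write an ffconcat list with per-frame durations and hand it to ffmpeg's concat demuxer:

import json
import subprocess

# ugoira_meta.json and the frame files are assumed to be unpacked in the working directory
with open("ugoira_meta.json") as f:
    frames = json.load(f)["frames"]   # e.g. [{"file": "000000.jpg", "delay": 40}, ...] (delay in ms)

with open("frames.ffconcat", "w") as f:
    f.write("ffconcat version 1.0\n")
    for frame in frames:
        f.write(f"file '{frame['file']}'\n")
        f.write(f"duration {frame['delay'] / 1000}\n")
    # the concat demuxer tends to ignore the final duration, so list the last frame again
    f.write(f"file '{frames[-1]['file']}'\n")

subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0", "-i", "frames.ffconcat",
    "-c:v", "libvpx-vp9", "-pix_fmt", "yuv420p", "ugoira.webm",
], check=True)

The resulting webm could then be handed to hydrus via the Client API or just dropped into an import folder.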
https://www.youtube.com/watch?v=_SSPdeoAHsM

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v470b/Hydrus.Network.470b.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v470b/Hydrus.Network.470b.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v470b/Hydrus.Network.470b.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v470b/Hydrus.Network.470b.-.Linux.-.Executable.tar.gz

I had a great week. The big rewrites are finally doing something interesting, and I got some other stuff done as well.

cleverer search

First of all, I finished my file domain rewrites. File search, tag search, and all the UI that displays it now works on n file domains at once rather than just one. Also, any of those n can be the deleted files of a service now. You can't make new local file services yet, obviously, but all search is now ready for it.

Since adding deleted files search was easy, I am adding it this week. Users in advanced mode will now see a list of deleted file domains on the tag autocomplete file domain button menu. While it is rare you ever want to do this, it was never actually possible to search these domains completely before, and definitely not quickly. Give it a go and let me know if you run into any trouble!

The next step here will be to write a new widget, probably some sort of checkbox list, that lets you select multiples of this new list. Then in future, if you had, say, a 'sfw' file service and a 'nsfw' service, you would be able to search either or both at once easily. Then, I'll have a handful more things to do: an expansion to file import options to determine where imports are going, cleverer trash that supports n locations, and migration tools so you can move/copy between services, and then I _think_ I will be able to just let you add new local file services in manage services.

misc

I spent a little time with weird files this week. I added support for audio-only mkvs/webms and improved the test that checks whether a video with an audio track actually just has a silent one. I know I get annoyed when a video seems to have audio but actually doesn't, so this week will queue up all your videos for a recheck and hopefully fix a bunch.

I also fixed the colours of some weird LAB TIFF files. If you have some jank test images in other unusual colourspaces, please send them in, it was fun figuring this one out.

The program should also shut down a little quicker now!

full list

- multiple file services:
- I finished the conversion of all UI search to the new multiple location object. everything from back- to frontend now supports cleverer search. since searching deleted files is simple to add, users in advanced mode will now see 'deleted from...' in a new list in the tag autocomplete dropdown file domain button
- the next step is writing a widget that allows multiple selection, and then all this should work right out of the box, and we'll be an important step closer to allowing multiple local file services
- .
- misc:
- the video parsing routine is better at detecting when a present audio track is actually silent (and hence when it should mark a video as 'no audio'). all video with audio will be requeued for a metadata reparse in the files maintenance system on update
- fixed an error from last week when trying to create a new page from the tags (e.g. middle-clicking them) in the active search list
- added 'audio mkv' format to the client, to represent mkvs without a video track. I think most of the time this is going to be audio track webms from youtube-dl and similar
- added 'file relationships: set files as potential duplicates' command to the 'media actions' shortcut set
- I expanded the 'backing up' section in 'installing and updating' help
- I wrote an 'anti-virus' section for 'installing and updating' help, since I kept writing the same basic spiel about false positives. please feel free to point people there in future to relieve their concerns
- improved some shutdown tests, the client and server should exit faster in some cases (e.g. when a hydrus repository network job is hanging on reconnection attempts, holding up the 'synchronise_repository' daemon shutdown)
- the 'file was xxx at (y timestamp), which was (z time units) before this check' line in file import notes now always puts 'z time units' as that, ignoring the 'always show ISO time' setting, which was just substituting it with 'y timestamp' again. let me know if you spot other bad grammar with this setting on, I'll fix it!
- fingers crossed, images in the LAB colourspace _should_ now normalise to sRGB with the correct whitepoint. thanks to the user who provided example test tiff images here. this now uses the new PIL-based colourspace conversion I used to make ICC profiles work, just on LAB->sRGB. as far as I understand, OpenCV uses a fixed whitepoint of D65, resulting in yellow/warm conversions for some formats, but PIL may be able to figure out if D50 is needed??? if you have some crazy LUV or YPbPr or YIQ image that shows up wrong, please send it in and I'll see what I can do! (there is a small PIL sketch after this post)
- boring rewrites and cleanup while doing file service work:
- many more UI objects now store and do file service logic using a more complicated 'location context', which can store a mix of multiple services and 'deleted from service' data. all the search code that works on this can now propagate to display:
- the management objects behind every page now store a multiple location object, not a single file service id
- all media panels (the thumbnail grid on a page) are now instantiated by a multiple location object, and when they serve a highlighted downloader, they now inherit that from the file import options, which in future will dictate import destinations
- all canvases are now the same, inheriting their new location context from their parents
- all tag lists are the same. mostly they don't care much about file domain, but when you middle-click to create new pages from the autocomplete dropdown list or active search list, it can matter, so they now propagate it along
- the underlying medialist objects are now the same, and various delete logic (e.g. 'should we remove this thumb we just deleted?') is updated to work on complex domains
- some duplicate lookup code now works on location context
- renamed 'location search context' object to 'location context' since it is used all over now and put it in its own file. also wrote it some neater initialisation and meta object code
- mr bones now gives duplicate data based on the union of all non-trash local services sans update files (another case of now supporting n services but n is fixed for the moment at 1, 'my files')
- a bunch of places across the program that used to default to 'my files' or 'all local files' (which is everything on disk, including trash and repository update files) now default to this new union of all non-trash local media services
- when doing page-to-page file drag and drops, the location context is now preserved (previously, the new page would always be 'my files')
- whole heap of other cleanup in these systems
- when a thumbnail cannot be provided (for deleted files or many 'all known files' situations), the thumbnail cache now provides the hydrus icon stand-in instantly, no delayed waterfall
- fixed an unusual situation where the file search could not provide a file in a tagless search when that file had no detailed file info row in the database. this seems to affect a legacy borked row or two in the new deleted file domain searches
- removed some ancient dumper status code from thumbnail objects

next week

I have been concentrating on multiple local file services, so I want to take a step back and do some normal work for a week. I have bugs to catch up on and I think I'd like to do something fun, maybe 'file last viewed time', if I can fit it in.
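Following up on the LAB->sRGB item above, here is roughly what that PIL/ImageCms conversion looks like in isolation. This is a minimal sketch, not hydrus's actual code, and it assumes the TIFF actually opens in Pillow's "LAB" mode in the first place:

from PIL import Image, ImageCms

im = Image.open("lab_example.tiff")   # hypothetical test file

if im.mode == "LAB":
    lab_profile = ImageCms.createProfile("LAB")    # I believe this defaults to a D50 whitepoint
    srgb_profile = ImageCms.createProfile("sRGB")
    im = ImageCms.profileToProfile(im, lab_profile, srgb_profile, outputMode="RGB")

im.save("srgb_example.png")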
Why would a video download show as less in the archive than it does if it is downloaded into file explorer?
>>17248 smaller file size
>>17248 might be a mebibyte vs megabyte situation.
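If it is the mebibyte/megabyte thing, the same byte count simply renders as two different numbers depending on which convention a program uses (illustrative arithmetic only):

size = 100_000_000      # bytes
print(size / 10**6)     # 100.0   -> decimal megabytes
print(size / 2**20)     # ~95.37  -> binary mebibytes, the smaller-looking number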
is it possible to change the color of the little scanbar nub thing for the media viewer? I use dark mode and a dark windows theme thing so it ends up being virtually indistinguishable unless I click on the scanbar
>>17246 Hi Devanon, is there an option to have collections not merge files with multiple instances of a certain namespace into a new collection? For example, say I have 2 chapters of images, tagged with chapter:1 and chapter:2 respectively, and there just so happen to be some images found in both chapters (perhaps with different page numbers). When collecting by "chapter", there will be 3 collection groups: images with "chapter:1" only, images with "chapter:2" only, and images with both "chapter:1" and "chapter:2", which is not conducive to browsing. I would much rather that everything with "chapter:1" is collected, and then everything with "chapter:2", even if this results in duplicate entries.
>>17236 Thanks. I tried to get this idea into the new 'getting started with installing and updating'. I hope we'll catch more people in future.

For your 'my hashes' file, it should have a whole bunch of hex strings like:

f14d26af1f5cedf492e15d47b51ce40f27f02fb0f1de7ad390797c05b6c46892

As here >>17200 , if you do the same SQLite job on a 'good' client, you will then have a list of files on the bad client and a list on a good client. If you know how to do some scripting, you will be able to read in both files, make lists of the hex hashes, and then output the difference, which will be the list of hashes in the good client but not in the bad, or bad but not in the good, whichever it is you want to fetch. I am not 100% sure what your source of 'good' hashes is though, so maybe your 'good' list comes from the client_files directory or whatever it is you are working with.

Once you have a list of your missing files, I am not sure what the next step is you want to take with that list, but let me know if I can help. system:hash can now take multiple hashes in a paste, which may help.

To transfer your tags, if you still want to do that, that will be a slightly more complicated job, but what we will do is create an 'HTA', a Hydrus Tag Archive, which is an external file that allows you to move a lot of tags from one client to another. First off, open the sqlite terminal on your bad client again and try this:

.open client.db
select service_id, name from services;
.exit

Make a note of the service_ids related to the tag services you want to move tags for. I assume it'll be the one named 'my tags', but your situation may be more complicated. If it is just the one service, then if the service_id is x, e.g. 3, the table name is 'current_mappings_x', e.g. 'current_mappings_3'. That will be the tag mappings for that service_id.

Then make a backup of the database. There is a bunch of SQL here and I am not sure it is perfect. Boot a new terminal and do this:

.open my_hta.db
CREATE TABLE hash_type ( hash_type INTEGER );
CREATE TABLE hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES );
CREATE UNIQUE INDEX hashes_hash_index ON hashes ( hash );
CREATE TABLE mappings ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) );
CREATE INDEX mappings_hash_id_index ON mappings ( hash_id );
CREATE TABLE namespaces ( namespace TEXT );
CREATE TABLE tags ( tag_id INTEGER PRIMARY KEY, tag TEXT );
CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );
INSERT INTO hash_type ( hash_type ) VALUES ( 2 );
ATTACH "client.mappings.db" as cm;
INSERT INTO main.mappings SELECT hash_id, tag_id FROM current_mappings_x;
ATTACH "client.master.db" as cma;
INSERT INTO main.hashes SELECT DISTINCT hash_id, hash FROM current_mappings_x CROSS JOIN cma.hashes USING ( hash_id );
INSERT INTO main.tags SELECT DISTINCT tag_id, namespace || ':' || subtag FROM current_mappings_x NATURAL JOIN cma.tags NATURAL JOIN cma.namespaces NATURAL JOIN subtags WHERE namespace != '';
INSERT INTO main.tags SELECT DISTINCT tag_id, subtag FROM current_mappings_x NATURAL JOIN cma.tags NATURAL JOIN cma.namespaces NATURAL JOIN subtags WHERE namespace == '';
INSERT INTO main.namespaces SELECT namespace FROM cma.namespaces;
.exit

It may take a while--hours for some lines if you have a giga db. You have four places to replace current_mappings_x with a number, too. Once done, you will have a my_hta.db file that should be importable as a Mappings Source in any client using tags->migrate tags. Let me know how you get on.
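For the 'read in both files and output the difference' step above, a minimal sketch of what that script could look like in python (the two filenames are placeholders for whatever you called your hash dumps; it assumes one 64-character hex hash per line):

def load_hashes(path):
    # read one hash per line, ignoring anything that is not a 64-character string
    with open(path, 'r', encoding='utf-8') as f:
        return {line.strip().lower() for line in f if len(line.strip()) == 64}

good = load_hashes('good_hashes.txt')   # placeholder filename
bad = load_hashes('bad_hashes.txt')     # placeholder filename

for h in sorted(good - bad):
    print(h)   # hashes the good client has that the bad one does not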
>>17237 >>17238 Also some more help here: https://hydrusnetwork.github.io/hydrus/help/duplicates.html >>17239 >>17240 If you want to do 'show me the files that have unique tags', where unique means tags with count 1, I don't think that's available now. Maybe if you are a madlad, you could try typing 'creator:*', wait, and then press up arrow to get to the bottom of the large results list, then shift+page up to select several pages worth of (1) count tags, and then right-click and say search->open a new OR page.... You'll have to enable crazy search under tags->tag display and search to let the wildcard 'creator:*' work. Note that danger and CPU hell awaits in this realm. Best to try it out on a small tag domain or set of tags first.
>>17241 Thanks. These are great ideas. I particularly like the idea of having page down do scrolling until you reach the bottom of the file. I keep meaning to write some more zoom options like zoom lock, but I agree. And for gallery search, I also keep meaning to do the next expansion of my multi-column list manager to enable hide/show for columns. Then I'll let you add '% done' or whatever column type you want to show and sort by without eating up too much UI space.

>Since some sites (e.g. boorus and panda) have fixed urls, couldn't you let the user somehow tell hydrus to just trust that the file is the same and only download the tags?

Yeah. I phrased that badly. I would use hydrus's 'known URLs' for rechecks. The md5 lookup stuff is for a related but different problem, which is 'I have a bunch of untagged files, can we check gelbooru to see if it has tags for them?'. Same deal, I would be feeding that queue of files into a maintenance system that did tag lookup requests slowly so as not to be rude. That system would also grab URLs, now I think of it, when it found hits, and save them back along with the tags.

>>17243 Glad you are sorted, let me know if you need any more help. One side thing that I don't explain well with the 'find your missing files on boot' repair dialog--once you have everything where you want it, please hit up database->migrate database and make sure your locations are correct. That repair dialog doesn't update your preferences, it just makes a patch on the current db locations, so it may still be storing a missing location as the 'ideal'. Just to keep things clean, you'll want to give the correct 'current' location some weight and remove the old broken 'ideal' location.
>>17245 Yeah, ugoiras are a pain. I've been saying 'one day' for years, but one day I'll have native support. Probably an apng conversion, or webm. Since I learned that most ugoira are jpegs rather than png as I originally believed, I've been leaning more to the 'lossy but high quality' webm side, although I forget how good webm is at variable frame rates. What solution are you going for, for conversion? Have you found one file format better than another for actual IRL ugoiras? Another solution I've thought of is just bundling the frame timing JSON in the Ugoira zip and rendering it in my native video renderer with a custom renderer thing.

For your problem, I think what I'd do is the redirect idea, but redirect to your API. This assumes it is easy for you to boot up a small web server on localhost for your script. Have hydrus hit a URL like http://localhost:12345/ugoira?url=(encoded pixiv ugoira url) instead of the veto. That server then returns 404 or something, but your script gets told about it and downloads the URL and does processing and adds it via the Client API. I think you can even associate the URL too via API.

You can't pull ugoira vetoes from the database unfortunately. Subscriptions are saved as a load of grimy JSON rather than database rows, so it would be too much of a pain in the ass to load and parse. You'll want to have it trigger from the downloader doing work one way or another.
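A minimal sketch of what that localhost listener could look like, assuming the example http://localhost:12345/ugoira?url=... address above (the port, the path, and the hand-off step are all placeholders; the actual download/convert/Client API import is left to your own script):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class UgoiraHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path == '/ugoira':
            ugoira_url = parse_qs(parsed.query).get('url', [''])[0]
            if ugoira_url:
                # hand the URL off to your own download/convert/import queue here
                print('got ugoira url:', ugoira_url)
        # always answer 404 so hydrus itself treats the hit as a dead end
        self.send_response(404)
        self.end_headers()

HTTPServer(('127.0.0.1', 12345), UgoiraHandler).serve_forever()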
>>17248 Can you give an example URL and maybe the actual file sizes you are seeing so I can check it out? It is probably as >>17249 and >>17250 say, just a display issue, but if the difference you see is bigger than like 7.1MB vs 7.2MB, it may be that the downloader is pulling a preview or something. I don't do any conversion on import beyond bmp->png, and files are never changed once imported, so you should be getting exactly what a browser would see from the same URL under the same conditions.

>>17251 At the moment, I fudge all the colours using system panel colours (like 'button colour', 'background panel colour' etc...) so you can't change it beyond changing your UI colours. Thank you for the reminder, I keep meaning to try to colour it with custom QSS stylesheets, then you'll be able to set any colour you want. I need to figure out some new tech here, but this is a good place to try it out.

>>17252 Not yet. I have never been happy about collections, in part for this reason. I always wanted a single, 'perfect' solution but could never figure it out. I agree that the way forward is to add more options on how the actual combination math works. I added the 'collect/leave unmatched' not all that long ago, so how about I add another menu button like that? I have difficulty thinking about the verbiage of the logic, so what exactly would it say? 'Collect according to all tags in a namespace' vs 'Collect according to first tag in a namespace'? That's what's going on, but man it does not sound human. I guess rather than vacillating I should just get some code out there and we'll iterate on it based on how it works.
>>17253 >First off, open the sqlite terminal on your bad client again and try this: Are you sure this command is correct? I am missing my client.db.
>>17253 Also

>Thanks. I tried to get this idea into the new 'getting started with installing and updating'. I hope we'll catch more people in future.

This wouldn't have saved me. I truly could never have reached a backup when it required backing up all 2TB worth of media I had in my hydrus. Only the thought that I could do something in my situation would have saved me. I don't know how else to say it. Remembering that something with that title exists when my hard drive corrupted my data would not have made me want to read it. But if you don't want to phrase anything as a compromise, I don't know. Most people would be able to afford a backup when they're archiving. I don't know.
>>17253 Sorry for responding to you three times and not saying much each time. About resources saving me from my situation, to me there is no difference. I just installed hydrus to a new location, and when I open it, under the "database" tab, when I read it, none of it communicates that my "database" is just four files. When I first opened this hydrus install, I think it told me to back up my data and read the help. When I open the help, very near to the beginning, it says "installing, updating and backing up", with the "backing up" part being in bright red. I don't know how else to explain it. For me, the out of context word "backup" was simply out of reach. It meant nothing to me. It actually made me feel bad seeing it. I could never have reached a reality where I could back up my entire 2TB hydrus folder.

If you click "set up a database backup location" under the "database" tab, the first line tells you that your "database" is just four files (plus your media directory). If you read the "installing, updating and backing up" part of the help, it also makes you conscious of the four .db files, and that you should prioritize backing those up. But this info is exclusively under the out of context word "backup". So even though I've read the text under the "database" tab hundreds of times in the years I've used hydrus, not once have I clicked on the "set up a database backup location" button, which would've taught me about the option to make just a backup of my four .db files. Even just a slight difference in wording would've led to me realizing I could just back up my four .db files, because in my ignorance I conflated "database" with all the media in my hydrus. But only since rendering my data irrecoverable did I learn "database" means four .db files + the media directory. Sometimes "database" only means the four .db files. Were it worded as "backup your .db files", I imagine I would've clicked it, then learned of them, and habitually backed them up. But even without that, the only backup option offered under that tab seems to include your media. If there was an option to back up without backing up your media, I would've done it, simply because it would be possible in my situation.

I don't think anyone else could ever know the dread of it being literally impossible to reach backing up your data. It wasn't a matter of being poor, where if you had the money, you would back up. For me, it was impossible. I could never have had a backup where the only option was another hard drive of equal or greater size. So I could never click on anything involving backups; I could never read anything on the other side of a "backup" link. I could never reach that reality. My only option was something bad happening to me first. You don't have to change your approach because a single person was divorced from backup being a possibility, so they couldn't even educate themselves on how it could be done. But, for me, the words I read needed to communicate to me that there was something I could do, even if I couldn't have a full-scope backup. If it didn't communicate a compromise in a backup being possible, I could never touch it; I could only read its title, feel there's no hope for me, and give up.
>>17253 Sorry for responding to you four times. I really need spoonfeeding. First I installed hydrus to a new directory, and booted hydrus from it. I couldn't figure out how to tell it that I want to use the "client_files" folder that suffered data loss + a missing client.db. So I just replaced the "client_files" folder of the new install with the "client_files" folder that suffered data loss, and booted from the new install directory, with nothing else changed. I think it didn't recognize that I had done that, so I closed hydrus immediately. I wonder if I ruined anything by doing that. So to be clear, what I (think I) want to do is: -boot hydrus, using my "client_files" folder that suffered data loss as the only media source -run the same "SQLite job" from before and that's all so far, to produce a second "my hashes" file that can be compared to determine the files that are missing. But, I am currently stuck on trying to use my old "client_files" folder as a media source in a new client, since I don't know how to do that.
I'm on 467 and "must not be pixel dupes" keeps giving me a bunch of pixel duplicates, was that fixed or should I post some examples? >>17256 >Another solution I've thought of is just bundling the frame timing JSON in the Ugoira zip and rendering it in my native video renderer with a custom renderer thing. As someone who's autistic about quality I'd rather that over conversions, at least as an option. Also, is downloading ugoiras as zip files an option yet?
So what directory do files uploaded to my hydrus server get saved to?
>>17258 Damn, sorry, I forgot your exact situation. No problem, we'll just have to infer the correct table name.

.open client.mappings.db
attach "client.master.db" as cm;
select name from sqlite_master where name like 'current_mappings%' and type = 'table';

(this will hopefully give you a short list. for each one, let's read some data):

select namespace, subtag from current_mappings_x natural join tags natural join namespaces natural join subtags limit 100;
.exit

So try that out for the different mappings tables, and one should have a bunch of results that look like the tag service you want to preserve (I am guessing 'my tags'). Boost the limit up to 1000 if you want to see more of a sample.

>>17261 Don't worry, you haven't broken anything. Hydrus never deletes anything from client_files unless you tell it to.

This may not fit you perfectly, but the 'traditional' way of getting a client to handle surplus files in client_files is running database->db maintenance->clear orphan files. This will let you choose a folder to export to, and it will then scan every file in your client_files--any that it does not have a record for, it will move to that folder. Once the job is done, you can then import that folder of orphans back into the client (or generate hash lists, whatever it is you want).

If it helps your thinking here, hydrus does not use client_files for most of its searches or calculations, and the database is technically ignorant of it most of the time. (It would be too expensive to keep rescanning its files). Therefore, if you mean to kind of 'install' a client_files as a new file list on a client, I do not think you can do it quickly. The way you get files into a client is through regular file import, so if you have a brand new empty client and want to get 2 million files into it, you will be importing them. The above 'check orphans' routine is a way to clear the files out so you can reimport them, but it may not be the most efficient way to do what you want. It is normally used on a client with some file records in its database (i.e. system:everything > 0) to split up what is and isn't an orphan, but on a new client, any file in client_files is an orphan, so if you set off the big scanning routine, you are kind of wasting time. You might just want to move client_files out again manually, make another fresh empty install, and then just literally import the client_files folder structure you just moved out, in batches, to the new empty client.

BUT if you at this point just want a list of the hashes that are in that client_files folder, then all the filenames are hashes. You can pull them with some scripting, or, if you are not familiar with python or anything, you can probably wangle it just about with the windows command line like this:

shift+right-click on the folder, open powershell cmd (this brings up the old terminal)
dir /B /L > hashes.txt (this puts all the filenames into that new text file)

But they would have file extensions, so you'd need a script to pull those off.
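If python is an option for you, a minimal sketch of that filename-to-hash dump might look like this (the client_files path and the output filename are placeholders; it strips the extensions and skips the .thumbnail files):

import os

client_files = r'C:\Hydrus Network\db\client_files'  # placeholder path, point this at your own folder

with open('hashes.txt', 'w', encoding='utf-8') as out:
    for dirpath, dirnames, filenames in os.walk(client_files):
        for name in filenames:
            if name.endswith('.thumbnail'):
                continue  # thumbnails share the hash filename, so skip them to avoid duplicates
            hash_part = name.split('.')[0]
            if len(hash_part) == 64:  # sha256 hex filenames are 64 characters long
                out.write(hash_part + '\n')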
>>17262 I think I did fix something to do with pixel hashes, yes. And that search problem I think was a typo. Please update and let me know if you still have trouble. Giving a quick look at the changelogs above, I think I scheduled some new work as well, so it may take some time for good pixel hashes to eventually come in, particularly for new files.

Yeah, and I agree about having perfect preservation options for Ugoira. That's why I am hesitant about just munging them to webm like danbooru do. You can import an Ugoira as a zip to hydrus now, but the file format actually has a 'sidecar' of (I think?) JSON data for frame timings. I think some Ugoiras put that JSON in the zip, but many/most do not; it is served by Javascript on Pixiv or whatever, and it means any downloader we figure out will either have to have additional database metadata for frame timings that replicates that JSON, or I just archive the .json data in the zip itself, breaking my rule of not editing files to be imported, and just read that JSON any time I load the file. Hydrus just can't support two files being one file (another example would be an mp4 and an srt subtitles file), so I don't think I can download the JSON as a separate file any time soon. It is unfortunately a mega pain in the ass format, and if I can wangle perfect data preservation with good variable frame timings in apng/webp/lossless webm, that might just be the answer I settle on. I can always generate a zip and some frame timing JSON on export if people ever want to recreate the original Ugoira.

>>17263 In the server's hydrus install, there should be install_dir/db/server_files. Similar structure to client_files, except files and thumbs are stored beside each other, files have no extension, and thumbnails have the '.thumbnail' extension.
How do I increase the size of the thumbnails in the client
>>17248 >>17257 https://gelbooru.com/index.php?page=post&s=view&id=6755004&tags=arm-763 downloading manually shows 18.1MB in file explorer, using the tool it shows 17.1 within the program at the bottom left. I would think it would be bigger since it includes tags
(69.33 KB 1084x645 Scr13215.png)

>>17266 File/Options/Thumbnails Then you have to play a bit with the values.
>>17256 Thank you very much for the idea, I think that's exactly what I'll go with. Barring needing to handle the edge case when hydrus closes/crashes before the client api can add the file, I see no downsides to this solution. Thanks! >What solution are you going for, for conversion? I'll probably just take the lazy route and use the pixivutil2 script with its default ffmpeg options for each individual item download. It's what I'm using right now to automatically download ugoira en masse from artists I'm subscribed to, which then get added to hydrus with a custom script using the api. I do have the intention of experimenting more to find out an optimized ffmpeg conversion command that allows for maximum quality with a more reasonable filesize since the average output filesize is around 15MB for 3 seconds due to making a "lossless" conversion (the top percentiles are even quadruple that size for the same length). >Have you found one file format better than another for actual IRL ugoiras? Haven't experimented much so I just go with what is more convenient for me. I prefer to convert to webm since it's a format I feel is already expected by many to be used for small length videos and I have it configured on my PC to be viewed as a format that automatically gets looped during playback. I considered gif but many image viewers really can't handle a gif of over 20MB very well (trying to view a heavy gif in hydrus from a network drive is also quite a slow experience due to how they are loaded and parsed).
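If you ever want the conversion itself to honour the per-frame delays instead of a fixed framerate, one possible approach (sketched here under the assumption that your frame metadata is a JSON list of {"file": ..., "delay": milliseconds} entries, which is roughly what pixiv serves; the filenames are placeholders for your own dump layout) is to turn that metadata into an ffmpeg concat list:

import json

# 'frames.json' and 'concat.txt' are assumptions about how your ugoira dump is laid out
with open('frames.json', 'r', encoding='utf-8') as f:
    frames = json.load(f)['frames']

with open('concat.txt', 'w', encoding='utf-8') as out:
    for frame in frames:
        out.write("file '{}'\n".format(frame['file']))
        out.write('duration {}\n'.format(frame['delay'] / 1000))
    # the concat demuxer ignores the final duration unless the last file is listed again
    out.write("file '{}'\n".format(frames[-1]['file']))

You would then feed concat.txt to whatever encoder you prefer, e.g. something along the lines of ffmpeg -f concat -i concat.txt -c:v libvpx-vp9 -crf 20 -b:v 0 out.webm (treat the exact flags as a starting point rather than an optimised recipe).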
>>17257 Not sure if related to this case but gelbooru has this funny thing going on with videos where it has two files for an entry. Like in the link from >>17267 for example, the "Original image" link (which hydrus downloads) points to a webm, but the file that's embedded into the video player on the page is an mp4 file, and the file in the api details for the entry is the mp4 file. Even better is that the md5 hash from the api does not match either of those files but only the original source file (that we cannot access). Gelbooru has always converted the video files that got uploaded to it into webms, but I think after some time they also started to convert videos into mp4 so that apple plebs could watch them as well. Not sure which one of these is the derivative or the original, but this does end up fucking things up for the PTR, since it seems like only the webm gelbooru files get their tags associated with them and not the mp4 ones :(
(26.54 KB 609x697 Untitled.png)

>>17264 Insert mandatory bitching here about this being one of the worst things that could've possibly happened to me.

I just did this. I have a .hta file, and a hash list of "client_files" itself. I'm surprised producing the .hta file didn't take the "hours" you warned me it could if I had a "giga" db. I was using hydrus for years, and had 2TB in files. I don't know. It only took a few seconds for me. Maybe a minute or two at most.

I couldn't just do the powershell thing. On windows 7 (don't know about any others), I couldn't right click > "open powershell here". I had to google how to navigate by just opening powershell bare ("Set-Location 'C:\Hydrus Network\db\client_files'"). Then the "dir" command only output the folder names (and the "hashes.txt" the command itself generated). So I had to google how to make "dir" output everything in all subdirectories and I added "/S", which made it work. Half the filesize is just .thumbnail files. The folders are listed first, then the media in alphabetical order, then the thumbnails. It looks like it's just the folders being in alphabetical order that put all the .thumbnail files right at the bottom.

I didn't realize I needed to reimport my entire "client_files" folder in order to check the tags they had. This is death. My replacement HDD has been showing an error in "HD Tune Pro" since I got it. I was unsure if it could be attributed to my Windows 7 install just having 1GB of random data corruption that I rendered irrecoverable, yet I keep using the OS. But my write speeds are garbage. I already tried shutting down my laptop, removing the battery, and moving and re-inserting the hard drive, yet I've still gotten one more error since I last checked. When I try writing data to this HDD, it freezes for hours. Even when I was just moving data from this HDD to an external hard drive, it had virtually no read speeds. Even though I have 50 gigs free on this HDD, I write to my secondary HDD that has less than 20 gigs free. I don't think I can afford to import my 2TB hydrus "client_files" folder on this HDD. Again, I don't know if it's the HDD's fault, or the OS being damaged. But trying anyway will basically freeze my laptop for months, all the while the read/write speeds will be 0 or as close to it as possible.
Hello. I know Hydrus keeps track of deleted files. Is there a way to access that data and re-download those urls and files? I'm using Hydrus 470. Thanks!
>>17271 >I don't know if it's the HDD's fault, or the OS being damaged. A CRC Error is 100% related to hardware. Your disk is dying. It may be the disk itself, or in very rare cases the motherboard malfunctioning. Either way, you need an urgent replacement.
>>17273 Thanks for clarifying. This is fucked up. It's brand new. I hope I can still return it. This is obviously an enormous pain in the ass, cause it was replacing a hard drive that I had considered to be dying.
>>16965 Many thanks for Hydrus my niggers. It only has one fault which is: being written in a gay faggot dick ass language. But that doesn't really matter. Thank you faggots.
>>17274 Be aware that if your rig is a desktop, there is a remote possibility that changing the IDE flat cable might solve the problem. Who knows. From experience I can assure you that 99.99% of new failing disks are caused by hits or strong vibrations while running. Faggots don't realize that the device is mechanical in nature and extremely sensitive to sudden accelerations, and once the internal arm scratch the surface of the plate... bye bye data.
>>17276 >scratch *scratches
(708.44 KB 2893x4092 93009347_p0.jpg)

Hey guise, is there a good comparison between hydrus and shimmie? Is there any reason I should use shimmie over hydrus?
>>17276 It's a laptop hard drive. I got it from amazon, so maybe it was damaged in shipping. Even though it's a laptop, I never move it.
>>17276 >IDE flat cable What decade is this?
>>17280 You would be surprised of how many old rigs are still in service and working with external USB hard drives for storage only.
>>17278 Isn't Shimmie an image board? Hydrus is the most advanced downloader I've ever come across. The "Comicrack" of booru downloaders.
I had a great week doing some fun stuff along with catching up on older issues. I managed to add tracking for 'last viewed time' for files, along with search and sort capability just like for import and modified times. There's also the start of custom colours for the video scanbar using the new style system. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=HSdNxvFCZj4

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v471/Hydrus.Network.471.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v471/Hydrus.Network.471.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v471/Hydrus.Network.471.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v471/Hydrus.Network.471.-.Linux.-.Executable.tar.gz

I had a great week. The program can now track the last time you viewed a file.

last viewed time

By default, file viewing statistics are on. You can control how they work and turn them off under options->file viewing statistics and, if you like, clear all your records under the database menu.

If you have them on, hydrus now records the last time you saw a file in the media viewer and preview viewer. A 'last viewed' here counts with the same rules as recording view time (typically you need to look at the file for a few seconds), so if you just scroll through ten files in a second, those won't be recorded as views.

You can see the last viewed time in the thumbnail right-click menu with the other view time numbers. You can also search and sort it, like with import or modified time. These three 'time' sorts are now grouped into a new 'time' entry in the sort menu, and for searching under a new 'system:time' system predicate.

Let me know how this works for you! The search and sort uses the media viewer number for now, but if you are a preview or combined person and this does not work for you, let me know how you would like your situation supported.

other highlights

I added some new ways to import the downloader-pngs you can drop onto network->downloaders->import downloaders and other places. The program is now ok with different versions of this data, so you can now drag and drop an image from your browser onto Lain and it should still work. You can also do 'copy image' and click the new paste button. Should just be a bit more convenient when you only need to use the file once. I hope to add more export methods too (e.g. export to clipboard bitmap, which you can then paste into a site/discord), so it should be possible soon for downloader makers to share their work direct to a person without either of them ever touching a file.

For advanced users: The colours of the scanbar I put below videos are now editable in QSS! This is a simple test for now, as this was more complicated than you'd think. You should now be able to set the base, border, and nub colour. Check out the example in default_hydrus.qss and give it a go in your own custom QSS files. I expect to move all the other hardcoded colours except for probably tag namespaces and maybe some custom thumbnail borders to QSS in future.

full list

- times:
- if you have file viewing stats turned on (by default it is), the client will now track the 'last viewed time' of your files, both in preview and media viewers. a record is only made assuming they pass the viewtime checks under _options->file viewing statistics_ (so if you scroll through really quick but have it set to only record after five seconds of viewing, it will not save that as the last viewed time). this last viewed time is shown on the right-click menu with the normal file viewing statistics
- sorting by 'import time' and 'modified time' are moved to a new 'time' subgroup in the sort button menu
- also added to 'time' is 'last viewed time'. note that this has not been tracked until now, so you will have to look at a bunch of things for a few seconds each to get some data to sort with
- to go with the 'x time' pattern, 'time imported' is renamed to 'import time' across the program. both should work for system predicate parsing
- system:'import time' and 'modified time' are now bundled into a new 'system:time' stub in the system predicates list. the window launched from here is an experimental new paged panel. I am not sure I really like it, but let's see how it works IRL
- 'system:last view time' is added to search the new field! give it a go once you have some data
- also note that the search and sort of last viewed time works on the 'media viewer' number. those users who use preview or combined numbers for stuff, let me know if and how you would like that to work here--sort/search for both media and preview, try to combine based on the logic in the options, or something else?
- .
- loading serialised pngs:
- the client can now load serialised downloader-pngs if they are a perfect RGB conversion of an original greyscale export.
- the pngs don't technically have to be pngs anymore! if you drag and drop an image from firefox, the temporary bitmap exported and attached to the DnD _should_ work!
- the lain easy downloader import now has a clipboard paste button. it can take regular json text, and now, bitmap data!
- the 'import->from clipboard' button action in many multiple column lists across the program (e.g. manage parsers) (but not every list, a couple are working on older code) also now accepts bitmap data on the clipboard
- the various load errors here are also improved
- .
- custom widget colors:
- (advanced users only for now)
- after banging my head against it, I finally figured out an ok way to send colors from a QSS style file to python code. this means I can convert my custom widgets to inherit colours from the current QSS. I expect to migrate pretty much everything currently fixed over to this, except tag colours and maybe some thumbnail border stuff, and retire the old darkmode
- if you are a QSS lad, please check out the new entries at the bottom of default_hydrus.qss and play around with them in your own QSSes. please do not send me any updates to be folded in to the install yet as I still have a bunch of other colours to add. this week is just a test--please let me know how it works for you
- .
- misc:
- mouse release events no longer trigger a command in the shortcuts system if the release happens more than about 20 pixels from the original mouse down. this is tricky, so if you are into clever shortcuts, let me know how it works for you
- the file maintenance manager (which has been getting a lot of work recently with icc profiles, pixel dupes, some thumb regen, and new audio channel checks) now saves its work and publishes updates faster to the UI, at least once every ten seconds
- the sort entries in the page sort control are now always sorted according to their full (type, name) string, and the mouse-wheel-to-navigate is now fixed to always mirror this
- improved some 'delete file reason' handling. currently, a file deletion reason should only be applied when a file is entering trash. there was a bug that force-physical-deleting files from trash would overwrite their original deletion reason. this is now fixed. the advanced delete files dialog now disables the whole reason panel better when needed, never sends a file reason down to the database when there should be no reason, disables the panel if all the files are in the trash, and at the database level when file deletion reasons are being set, all files are filtered for status beforehand to ensure none are accidentally set by other means. I am about to make trash more intelligent as part of multiple local file services, so I expect to revisit this soon
- the new ICC Profile conversion no longer occurs on I or F mode files. there are weird 32/64 bit monochrome files, and mode/ICC conversion goes whack with current PIL code
- replaced the critical hamming test in the duplicate files system with a different bit-counting strategy that works about 9% faster. hamming test is used in all duplicate file searching, so this should help out a tiny bit in a lot of places
- .
- boring cleanup:
- cleaned up how media viewer canvas type is stored and tested in many places
- all across the program, file viewing statistics are now tracked by type rather than a hardcoded double of preview & media viewer. it will take a moment to update the database to reflect this, this week
- cleaned up a ton of file viewing stats code
- cleared out the last twenty or so uses of the old 'execute many select' database access routine in favour of the new lower-overhead and more query-optimisable temporary integer tables method

next week

Back to multiple local file services. A widget to select multiple services in file search, and I'll start on a trash that can handle deletes from and undeletes back to multiple locations.
(44.51 KB 506x649 boot error.PNG)

>>17284 This version won't boot for me. It looks like it's failing to convert my database because it can't find the table "main.current_files." Reinstalling 470b seems to work.
(54.31 KB 954x369 hydrus.png)

V470 and v471 fail to open Help links in the browser. None of the links under the Help menu work. I mean, the browser won't even know a new URL has been requested. If I switch the default browser to a different one, the problem is still there. ---- v470 and v471 tar.gz executable packages.
This file crashes hydrus for some reason. It doesn't even write anything back to the log file.
Also, webp doesn't animate properly. That might just be because of how much of a clusterfuck the format is though.
(145.54 KB 770x700 Peek 2022-01-28 04-33.gif)

>>17288 It works fine in my Linux client.
I have my media files, db, and thumbnails in separate locations. Am I able to just backup the db, and not touch the media files and thumbnails? The media files won't ever fuck up from an update or something right? It's going to be the db I presume. So I should be able to reinstall hydrus, give it my db backup, and just point it to the media files, and it should be able to rebuild everything right? I have like 2tb of hydrus media and growing, if I have to back that up it's such a waste of space for no reason.
I created an AFTbooru https://booru.allthefallen.moe/ downloader/parser/tag importer for Hydrus Network. Now it can correctly work with individual images, galleries, watch posts for updates, etc. Do you want me to share it here? Is it a good idea to include it in the distribution?
>>17291 You could only backup the database, but then if your media files got fucked you would be as well. If you don't follow the 3-2-1 rule of backups you have absolutely no right to complain about data loss. Also consider pruning your archive as well. I find that lots of files that aren't that good end up in the archive after a while, deleting them saves a good chunk of space. Doesn't work if you're autistic about archiving fucking everything though.
>>17190 Anon who walked me through data corruption/ ddrescue/ testdisk/ SystemRescueCd, and more, if you're still here, can you tell me how to restore a ddrescue .img file onto a hard drive?

I've been using ddrescue to make weekly backups of my secondary hard drive, which was fine so far. But my replacement hard drive I'm using as my boot drive needs to be returned, since it has "Interface CRC Errors" (>>17271) according to "HD Tune Pro" (I've since spoken to the seller I bought it from, and they had me try "HDDScan", which also showed the error, except it was worded as "UltraDMA CRC Error"). CrystalDiskInfo doesn't actually tell me the hard drive has such an error, so I'm glad the seller didn't tell me to use that program to check.

During the downtime of my returning my replacement hard drive + getting a new one, I will be using my old hard drive that corrupted my data in the first place. I don't want to, but, I don't know. When I made a ddrescue image of it, it didn't say it had any bad sectors, so I think it's just susceptible to corrupting data.

I'm currently using ddrescue to create an image of my replacement boot drive, to put it on my old hard drive that corrupted my data in the first place. It's a waste, because with my hydrus client.db being lost forever, the only changes I've made on the HDD since then have been under a small handful of programs that can easily be isolated and backed up. I'm only really imaging the hard drive for the convenience of not having to install a new OS. But I'm still on windows 7, and it suffered 1GB of random data loss. Ideally I'd just be installing linux instead, and then just copying over the settings/profiles for the handful of programs I use (plus my hydrus eventually if I can be bothered). But I know nothing about linux. I don't even know which one to choose.

Even if I knew how to restore my ddrescue image to a hard drive, it might not even work, because in trying to find out how to do so, I read that hard drives of the same size might not have the same number of sectors. So if my replacement boot drive has more sectors than my old hard drive that corrupted my data, my ddrescue image of my replacement boot drive can't fit on my old hard drive. So then installing linux would be my only option.

All this context doesn't matter. If you're still here, can you tell me how to restore a ddrescue .img file onto a hard drive? Thanks. Sorry to keep asking you for help weeks apart.
>>17293 >Doesn't work if you're autistic about archiving fucking everything though. That's called data mining. :p
(224.04 KB 1280x640 2 what if i told you.jpg)

(898.04 KB 480x480 1 becoming fully redpilled.mp4)

>>17294
>Ideally I'd just be installing linux instead
>But I know nothing about linux.
Then don't make the switch right away, you will run into newbie troubles. First of all, you need an acclimation period of a few months before you gain enough confidence, and it is not because it is difficult, but because there are so many different distros, so many different packages, so many choices, that you will get confused and frustrated after a while and return to the same Windows prison you came from. To my knowledge, only those users that recognized the enemy and their sleazy ways can transition successfully. And it does not matter how much ease of use or eye candy the corporations entice you with, because once redpilled there's no turning back.
>>17296 Thanks for the fair warning. It's hard to find the motivation to learn linux, even though my current windows 7 install suffered random 1gb data loss, and my OS needs to be reinstalled. The last time I bitched about realizing porn didn't make me happy, someone called me autistic or something. But still, I wish I could have realized my lack of ambition without first having suffered data loss. I felt scared when my hard drive got corrupted to the point of not properly functioning. I thought if I remembered how the porn made me feel, it'd give me the confidence to save it. Instead, it made me mistake my fear for confidence. So I made the data irrecoverable, due to acting from fear in my tech ignorance. I don't bring up the realizing-porn-doesn't-make-me-happy thing again just to get bullied again. The point is this really sucks. I wish I was on an operating system that didn't suffer random 1gb data loss. I wish I didn't have to drag my no-ambition self through learning a new OS. The present sucks for me. The future sucks for me. This shit is grim. I at least wish I knew I could backup my four .db files in hydrus. I thought my only option was backing up all my media, which I could never afford. I only lost my client.db, but it's still so bad. Everything sucks for me. At least I have a path forward. But I still wanted to bitch I guess.
I'm not getting any pixel dupes when I search for non-dupes on the latest version, but I got this pair to show up when I was searching for pixel dupes only.
>>17294 I would still like to learn how to put this image on a hard drive, but I think I'm out of time. I still have to secure erase the disk, and I don't know how long that will take or exactly how to do it. From a quick google search though, I can boot from a usb that has "DBAN" to do it, which I'm hoping will only take two hours to overwrite a 2TB drive using "PRNG" with one pass (from a quick google search, I read a single comment saying that "PRNG" is "theoretically" better than just a single pass of wiping with zeros, and is just as fast).

Also I just googled "cygwin dd" and found that it appears you can just straight run dd from inside cygwin, so I can just google for a dd command that would work inside linux to use within cygwin. I wish I thought of that before I went to bed. I found this article: https://www.makeuseof.com/tag/easily-clone-restore-linux-disk-image-dd/ Which has this command:

dd if=path/to/your-backup.img of=/dev/sdX

I assume it will work fine. The only problem I have left is time.
>>17294 >>17299 Hey, it's me again: the anon who made his corrupted data irrecoverable in my tech-ignorant panic, and lost my client.db and 942 files in my hydrus as a result (I never mentioned that I learned I lost 942 files until now, but I checked the amount of lines missing from one of the two hashes things hydrus dev taught me to produce, and that was the result).

My replacement hard drive that I was trying to return due to increasing "Interface CRC errors" couldn't boot after shutting it down. I actually bothered to check my old hard drives in the hopes of one being an old copy of my boot drive (unfortunately, it turns out I don't have such a thing). But I was able to boot into a Windows 10 drive that doesn't belong to me but I had on hand, and my replacement hard drive showed up as completely blank despite being unencrypted with (what until I shut it down was) a working* Windows 7 OS on it, as well as my data.

*working despite having suffered 1gb of random data loss due to my rendering the corrupted data on it irrecoverable

I am currently typing this with my thumbs on mobile. I am currently running chkdsk on my old HDD that corrupted my data, to boot from and use it instead until I get a new replacement hard drive. Chkdsk tried to fix the replacement hard drive first (which is to say, only when booting from my old hard drive did I receive the prompt for chkdsk; when booting from the replacement hard drive itself, it only returned like a 000f error and asked for a repair CD), but chkdsk said the master boot loader or something is corrupted, and it failed to repair it.

I didn't pay attention to chkdsk the first time I ran it after rendering my corrupted data irrecoverable, but it lists the filetypes of all the files it's deleting the "index entries" for, due to their being irrecoverable. I spotted some .mp4 files being deleted. I still don't know the tags for the files I'm missing from my hydrus, nor even do I know how to output the differences in the two hash files I produced to even know what's missing. But losing videos is most likely to be data I will never see again in my lifetime. Thankfully a large chunk are twitter videos that are rubbish. But some are huge videos I manually saved from porn sites, some of which I know are otherwise lost forever. Obviously any media of any filetype is potentially lost forever if you lose your only copy and have to try redownloading it again potentially years later. But video especially, it's basically hopeless.

I don't know. Again, I just wish I knew I could've backed up my four .db files to still have a bootable hydrus. Again, I just never read anything behind a title of just "backups", because I thought the only option was backing up all my media, which I could never afford. Something bad had to happen to my hard drive first; that was the only option for me. If I at least had backups of my four .db files, I wouldn't have been so scared and acted from fear in my tech-ignorance, which made the data irrecoverable. My main fear was not so much losing any random files, but losing the tags for everything. I was afraid of only having an unsorted hoard that could never even be trusted to be a complete archive of anything, due to its suffering random data loss. I wish it were phrased as "backup your tags", or something. Something limited scope. Anything under just "backup" I assumed was the same as the countless times people out of context told me to backup, such that it was backing up everything. I could never reach that reality.
I also wish I had the thought to educate myself beforehand on what to do after a bad thing happened to me. But no one ever talks about that, even when they share their data loss horror stories: they always end on the note of out of context "backup". So when my data got corrupted, even though I could've saved it had I educated myself at the time, I instead freaked out and rendered it irrecoverable. I wish so many things were different. Sorry if this is a long post that's rambly and formatted like trash; I typed it with my thumbs and can't do any major reformatting.
>>17294 Hey! Wasn't around the last couple of weeks, for personal reasons, so if there is anything I have missed, please link or mention it.

What >>17296 said is absolutely true. Don't try to run linux right now - at all. It's not like windows, you won't be happy with it without trying it and getting used to its way of doing things first. Even on linux - there are lots of things that can and will go wrong, and if you are not prepared for them, you will lose data again. Also, there is the added "bonus" that running some command can kill all of your data irrecoverably - and lots of people try to trick you into running them. Especially in the learning phase, you will nuke your stuff - Linux gives you the tools to make something great, but these very same tools let you fuck up greater than you could even imagine on windows.

DBAN will not do a secure erase, but will do a good job killing all data on a drive regardless. It will take a long time to complete however.

dd command is good (make sure it's /dev/sd<your drive letter>, not /dev/sdz1 or something like that - check it with lsblk if running systemrescuecd!), but please add these - bs improves performance, conv makes sure everything is on disk and status prints progress:

dd if=path/to/your-backup.img of=/dev/sdX bs=1M status=progress conv=fsync

>>17300 What it said is probably a broken master boot record (MBR) - the first part of your drive that contains information about where all of the partitions are. If this happened with two different drives now, get rid of the mainboard and RAM, clearly something else is broken in your computer... Also, yes, backup everything. Because otherwise, you will try to restore your backup and notice something is missing and is now gone forever.
>>17301 Hi, thanks for replying. I was going to try replying from my old hard drive, but I need to shut it down to boot DBAN from usb to wipe the replacement drive anyway, so I'm still typing this using my thumbs on mobile. I don't know what you mean by:

>DBAN will not do a secure erase, but will do a good job killing all data on a drive regardless. It will take a long time to complete however.

But I hope it will kill the data as you say, which is enough, since I will be returning the hard drive. Also I have more time than I originally thought, so only having the option to do one pass is no longer a constraint.

The only things that happened to my situation since we last spoke were hydrus dev teaching me how to turn my hydrus tags into a .hta file to import, and how to produce two .txt files with the hashes of my files pre and post data loss to find the differences (which I don't know how to do yet). I don't know the tags of the missing files yet. Also my replacement hard drive needs to be returned due to increasing "Interface CRC Errors", plus on reboot, it started showing up as a drive with nothing on it in windows. Like, chkdsk finished on my old hard drive, and after booting from it, my replacement drive shows up in explorer as if it were encrypted, except it is in fact unencrypted (also encrypted drives have different prompts in different places, and don't trigger chkdsk on boot. The replacement drive just has something wrong with it). After chkdsk finished I got to see the error it showed for the replacement drive again, and it said it was missing the "master file table" I believe.

Also thanks for amending and clarifying the dd command to put an image onto a hard drive, but I don't know if I trust the image enough to do that after all, since upon shutting down, the very hard drive I had just imaged can't boot or even be browsed in explorer when not booted from. In my tech ignorance I assume the image might behave the same way. Maybe data got corrupted past the original corruption that I didn't even know about.

>What it said is probably a broken master boot record (MBR) - the first part of your drive that contains information about where all of the partitions are. If this happened with two different drives now, get rid of the mainboard and RAM, clearly something else is broken in your computer...
>Also, yes, backup everything. Because otherwise, you will try to restore your backup and notice something is missing and is now gone forever.

The first hard drive that corrupted my data doesn't show any errors in "HD Tune Pro", even right now. But the replacement drive had two "Interface CRC Errors" the first time I checked, and has since gotten a third. Then of course today, upon shutting down, the replacement drive became "raw" is what I assume the term for it is, but I am just blindly guessing. The issues aren't exactly the same, but if I can check my ram and mobo health, of course I should. Also thanks for the warning about linux. I didn't know it would have been that irresponsible of me to completely depend on it as my only OS as someone with zero experience with it.
>>17301 I'll run DBAN overnight on my replacement hard drive to be returned, instead of doing so right now. I'm typing this from my old hard drive that corrupted my data, that I will use until I get another replacement, this time one that doesn't show errors (or stop working upon shutdown). The reason I'm replying to you again is to say that for some reason, I can't reinstall veracrypt without first installing KB3033929 and KB4474419. It says "SHA-2 support missing from Windows". I was able to install veracrypt before after putting an image of this very hard drive onto another hard drive. I don't know if a new veracrypt update started requiring that since then, or what. But it's inconvenient to me, I guess.
>>17303 I just googled the two KBs veracrypt says I'm missing, and it says the first is already installed, and when I try to run the second, it says it encountered an error with a code, and has reached the end of the file. Maybe I can use testdisk to copy the veracrypt install from my replacement hard drive that turned "raw" (I'm blindly guessing is the term for it) on shutdown. I don't know why I can't install veracrypt on this HDD, when a few weeks ago using an image of it I could.
>>17304 I used testdisk to copy the veracrypt "program files" folder to my old HDD (which I'm currently using), and trying to run it says "Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item." There's even a "VeraCrypt Setup.exe" in the same folder that shows the same error. I don't know how this can happen when, again, the image I made of this HDD weeks ago could reinstall veracrypt no problem. Rendering my corrupted data irrecoverable was one of the worst things that ever happened to me. I hate this. I wish it was just all over already.
>>17269 Thanks. If you are interested, the actual params here seem to be a mix of this structure: https://github.com/Nandaka/PixivUtil2/blob/master/PixivConfig.py#L173 I think gif (and ugoira) spoiled us with variable frame timings. Modern formats just don't seem to be too interested in it. Apng is supposed to support it though, and is also lossless: https://wiki.mozilla.org/APNG_Specification#.60fcTL.60:_The_Frame_Control_Chunk But I am not sure if there is any way to inject those timings via ffmpeg. The last time I looked at this, I think I was unsuccessful. >>17270 Thanks, I had no idea the 'original image' is not the true original. Fingers crossed, in a few years I'll have duplicate video detection and limited auto-dupe-resolution working, so we'll be able to spread tags across common dupes of videos and also auto-negotiate our way to the highest quality vid.
>>17305 I'm thinking your Mainboard/RAM is dying, which is why you get some weird errors like this. Maybe it's not really the drive, but your Mainboard/SATA cable that is messing up those reads/writes to your HDD? Would explain why both the old and the new drive failed. Also, dd_rescue is not really a backup in the traditional sense, since you don't really need to save empty space on a hard drive. Makes sense that you don't trust the image, so maybe try to get that SATA cable/Mainboard replaced first? If you can run DBAN somewhere else, maybe you should try to run http://www.memtest.org/ overnight on your computer to exclude the RAM as a problem source?

>>17302 Secure erase is a procedure in the SATA standard - you can give a drive the command to "wipe itself", which then causes the drive to... well, delete itself. DBAN will just overwrite your drive with random garbage, which should be good enough and, depending on paranoia, even better than the secure erase command, albeit a bit slower. It will not issue an erase command to the hard drive, since implementation is not always a given on older drives and it can be buggy too.

If you can, I would suggest you stop using the computer with the potentially bad mainboard entirely - maybe see if you can get a replacement machine somewhere else (you had one last time, right)? If you must, I would urge you to at least stop using your drives with it - maybe get a linux live cd and browse the internet with that until you get a new computer/mainboard/cable?
>>17288 can confirm. i'm on 465 on windows.
>>17272 I am glad you asked, because (as of a couple weeks ago) now you can! Make sure help->advanced mode is on. Then open a new search page and click on the search tag input box. In the dropdown, click where it says 'my files', and you'll have a menu. There should be an entry for 'deleted from my files', which is my new deleted file search domain. You can search it like any other place (only real difference is the autocomplete tag counts will be in an unknown range from like (0-17)), and it will give you 'empty' file thumbnails since you no longer have the files. You can mass-select and right-click those thumbnails, once you find ones you want, and then you can known urls->copy->gelbooru post urls or whatever it is you want to download, which you can then paste into a new 'urls downloader' page. One thing you will want to do on that downloader page is click 'file import options' and make sure 'exclude previously deleted files' is off. Let me know if you have any trouble. That deleted-files search is new code and may have a couple bugs for weirder searches.
>>17275 Thanks, I am glad you like it! >>17278 Hydrus is kind of like shimmie (or any other booru) in that it tracks files and gives them tags and ratings and things but it is a PC application run on your local computer that only you use. If you want to host a website and have many users contribute, then shimmie is great. If you want to catalogue your personal collection quickly and efficiently and with lots of power tools, then hydrus is great. >>17289 Thank you for this example. I am waiting on PIL or ffmpeg to support animated webp decoding, and then I can add it. Last time I looked, ffmpeg can encode animated webp, but not decode (lmao). I'm all ready otherwise.
(14.16 KB 327x297 Untitled.png)

I would love a menu that lists all your pages, like the menu in chrome that lists your tabs. Makes it much easier to look through all your pages. I know that the page of pages exists, but I still think this would be useful.
>>17286 Thank you for this report. I am very sorry for the trouble. I do not know what is going on here. This sounds odd, but have you ever done any manual statistics work with the database, for instance setting up a VIEW or VIRTUAL TABLE or using another program to access or do metrics on the database? I've never seen the 'error in view test' error before. As you can see in the error, I am renaming a table in the update code, and it is complaining about a table that hasn't existed in several months. It feels to me like there is a legacy VIRTUAL TABLE or something in your database that is linking file_viewing_stats to current_files, and when I try to rename a component of the virtual table, it is sperging out because a different component no longer exists. (Make a backup before doing any of this!) Please go to install_dir/db and run the sqlite3 executable. Then copy/paste these lines one by one:
.open client.db
.once my_tables.txt
select name, type, sql from sqlite_master;
.exit
You can pastebin it and link it here if you like, there shouldn't be anything too private in there. Or you can just look yourself. Do ctrl+f for 'virtual table'--is there anything? How about ctrl+f for 'current_files' (not "current_files_x")? Is 'current_files' in the sql anywhere? If there is, then note down the name of the virtual table. Let's call it 'vtable_1'. Then do this:
.open client.db
drop table vtable_1;
.exit
It might be 'drop view' instead, for a VIEW. And try updating again. Fingers crossed, that's your problem sorted. I don't think I have ever made a VIEW or VIRTUAL TABLE in hydrus code, so I have to guess a different program put this in here somehow. Maybe a view called 'test'? Let me know how you get on. This is a weird problem, and I am not confident my solution here is what is actually going on. Also, if it is a virtual table, I don't mean to pry, but I'm curious what happened.
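If you would rather not type the dot-commands, here is the same check as a quick python sketch, using the sqlite3 module that comes with python. It only reads sqlite_master, so it is safe, but treat it as a sketch rather than something I have run against a real client.db, and still make that backup first:
import sqlite3

conn = sqlite3.connect( 'client.db' )

for ( name, object_type, sql ) in conn.execute( 'SELECT name, type, sql FROM sqlite_master;' ):
    sql = sql or ''
    # flag anything that is a view or a virtual table - then eyeball its sql for 'current_files'
    if object_type == 'view' or 'VIRTUAL TABLE' in sql.upper():
        print( object_type, name )
        print( sql )

conn.close()
If that prints anything, the name on the first line of each pair is what you would be dropping.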
>>17287 Thank you for this report. When you say if you switch the default browser, is that on hydrus's end, or your OS's? I've had big problems trying to tell OSes to just 'open this link with default browser', so if you hit options->external programs, I have a line there to put in a manual browser launch path. This tends to fix most of these problems. Let me know if that doesn't do it. >>17288 >>17308 Thank you for this report. That apng imports ok, and renders ok in my native player (which means ffmpeg is ok with it), but it seems to crash mpv! (windows build uses libmpv api version 1.109, although I forget which actual version that is) Other apngs like picrel are ok in mpv. I don't have a great solution to this, but next time we roll in an mpv update, hopefully it fixes things. I'll make a job to give the newer versions a look.
>>17291 Yeah, you can just do the db files. The updates I roll out don't touch client_files etc..., and if they ever did, I'd warn about it for several weeks and then say loudly in the update and so on that I was about to do a big reformat. I strongly encourage you to maintain a full backup of everything you own. Not just hydrus, but your documents and digital family photos and everything else. I once lost a drive that had about 75,000 files in it, it fucking sucked big time. However if you lost part or all of your client_files but have your db files, then you can technically 99% reliably recover anything that you previously downloaded, since they will have a URL in the database and I have a maintenance routine to redownload such missing files. Anything without a URL will be basically lost though, since you'll only have its hash. I guess it depends on how much you value your time. Doubling your storage so you backup your whole system can be about $50-100 for something like a 4TB WD passport usb drive, and then it is ten minutes a week for a FreeFileSync run. Losing all your shit results in days of stress putting it back together and then months of downloading to recover everything. It has a much higher total cost than that, but it may never happen. I am happy to spend the money now, as you would with insurance, and it gives peace of mind too, but if you are strapped for cash, then yeah just backup the db files. If you have a couple hundred bucks for 'oh fuck, oh god, no, fuck' insurance, I strongly encourage you to sort out a proper 100% backup. We are all digital people, don't fuck yourself for no reason. I am biased though, since I have the scars of this going wrong and I also deal with plenty of users who didn't have a good backup and are now facing the prospect of trying to recover.
>>17292 This is the main clearinghouse: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/ https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/ATF%20(AllTheFallen) If you would like to upload your update there, that'd be great. You can also post it here and I'm sure it will end up there one way or another. Thank you for your work! I think ATF is a little spicy for me to include in the defaults. >>17298 Thank you for this report. I am afraid I do not get this problem. When I say 'must be pixel dupes', I do not see that pair, and when I have 'can be' or 'must not be', I do see that pair (as you would expect). If you hit database->file maintenance->manage scheduled jobs, how many 'calculate file pixel hash' jobs do you have? Can you do a database test for me? Close the client, go to install_dir/db folder, and run the sqlite3 executable. Then copy/paste these lines one by one:
.open client.db
attach "client.master.db" as cm;
select hash_id, pixel_hash_id from pixel_hash_map natural join hashes where hash = X'5c6e20aa40daff0ec0d374dc5934f33caf99047c8525a233a1acfb1c4a3acbb5';
select hash_id, pixel_hash_id from pixel_hash_map natural join hashes where hash = X'796a90c8bdfdc221d1ccea98e2bb969489b99e63b338f9f1c0961e3f2a94224f';
select hash_id, pixel_hash_id from pixel_hash_map natural join hashes where hash = X'0712999731d8e008b5f977ede1ca4f9080ae738bb947af1b05d01aecbe531651';
.exit
Those SELECTs may give no result or one result in the form "x|y". If they give results, none of the numbers should be the same, either within or between rows. Let me know what you get.
>>17311 Thanks. I think this is doable. How about if I make a list on any right-click menu on the tab bar--it has a submenu called something like 'pages', and then under it is the page title+num files summary (like you get in the undo menu) for every page on that level of page of pages? So if you click on greyspace on the top level, you see all pages in left-to-right order, but if you click on a specific page of pages tab or on its lower level greyspace, you just get that page of pages' pages? If you click on any entry in the menu, it'll take you to that page, let's say.
>>17312 >have you ever done any manual statistics work with the database, for instance setting up a VIEW or VIRTUAL TABLE or using another program to access or do metrics on the database? I have. I made a couple views a long time ago to graph total files over time that selected from current_files. I dropped them and now 471 booted and updated the db. Thank you.
(63.45 KB 1920x502 document list.PNG)

>>17316 That sounds sweet. Definitely will make it easier to jump around in a big page of pages instead of having to scroll along the tab bar or use the little arrows at the side. I actually just had another idea - what if it was like the document list in notepad++, where you can enable it and it's always there at the side? This might be harder to do though.
(106.90 KB 1920x1080 4851.jpg)

>>17301 >Linux gives you the tools to make something great, but these very same tools let you fuck up greater than you could even imagine on windows. This is true. It might be the equivalent of giving a machine gun to a monkey.
>>17307 You weren't wrong about DBAN being slow. I removed my primary hard drive after booting DBAN from usb and seeing horrifying no-confirm all-connected-device-wipe warnings, and did default settings, which was 3 passes with a crazy remaining time of 300 hours. So I held power on my laptop to force shutdown, and restarted on the PRNG wipe option with 1 pass. Still 300 hour estimate. The write speeds are 5,000 kb/s. 46 minutes in, it's 0.75% completed. 290 hours remaining. I will have no option but to return the drive tomorrow having only done like 10% progress on this wipe. It also took literally 24 hours to create a ddrescue image of it, when my original old hard drive that corrupted my data in the first place was also a 2TB 5400 rpm drive, but only took 10 hours. Also I remember reading (skimming) the wikipedia page for ddrescue and seeing it clarify that the ddrescue with an underscore in the name is different from the ddrescue without. The one in cygwin was ddrescue with no space, or underscore, or anything. I know you know that already, since you were the one to put me onto ddrescue under cygwin, but still.
>>17307 DBAN had much less progress than I expected. 9h34m in, 4.59% progress. 9,730 kb/s, 162 hours remaining. I hope I can boot from my primary drive and do a "secure erase" instead that will hopefully be done in an hour or two. But one thing that occurred to me too late, since I already re-bought the same drive, is that it's a 7mm drive, when all my past HDDs have been 9.5mm. Is it possible for that to be the reason for the "Interface CRC Errors", and the entire hard drive eventually showing up as "raw" (I'm blindly guessing is the term for it) on shutdown? As well as the read/write speeds being 5-10 mb/s in DBAN. If so, hopefully it happens again soon so I can make the same complaint with proof, return it, and buy a 9.5mm drive instead. I will run memtest too, but wiping the drive and returning it is more immediate for me.
>>17321 secure erase might be faster, can't say for certain without knowing the drive internals, but if it's an SMR drive, speed can dip like that. I don't use secure erase personally, and at work, we just stick them into an old storage server and secure erase them in batches of 24, keep them there for a week - so I can't tell you how long it actually takes. No clue what you mean by raw, unless you mean "unformatted" or something along those lines? That should mean that the partition table is gone - really bad news, but if only the partition table is gone, that can be recovered using testdisk. To be honest, I get the feeling something is wrong with your mainboard/SATA controller, but that is just impossible to verify without testing. Maybe you can pick up a cheap desktop computer and see if the errors still happen on that as well? Alternatively, stick your replacement drive into your computer and run some program that writes a predictable pattern to disk, reads it back and reports errors. If it is a bad controller or cable, it should spit out something wrong fairly quickly (at most a couple of days). I don't have any software recommendations for windows, but I'm sure some software you already tried can do this. Can you talk to the seller to tell them that you need to format the drive first? Maybe say that there is still important data on the drive, that you tried to copy it, and that it is taking a bit longer than expected due to the issues the drive gives you? Surely they will understand and extend the deadline until the drive has been wiped.
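To make the 'predictable pattern' idea concrete, here is a rough python sketch of the kind of test I mean - assuming cygwin's python (or a linux live usb) and a made-up /dev/sdX path. It writes a known pattern over the start of the drive and reads it back, so it destroys whatever is on the target device; only ever point it at the blank replacement drive, and triple-check the path:
import os

# hypothetical device path - point this at the blank replacement drive and NOTHING else,
# everything on it will be overwritten
DEVICE = '/dev/sdX'

BLOCK = bytes( range( 256 ) ) * 4096   # 1 MiB of a repeating, predictable pattern
TEST_MIB = 1024                        # how many MiB to test from the start of the drive

fd = os.open( DEVICE, os.O_RDWR )
try:
    for i in range( TEST_MIB ):
        os.pwrite( fd, BLOCK, i * len( BLOCK ) )
    os.fsync( fd )
    bad = 0
    for i in range( TEST_MIB ):
        if os.pread( fd, len( BLOCK ), i * len( BLOCK ) ) != BLOCK:
            bad += 1
            print( 'mismatch in MiB block', i )
    print( 'done:', bad, 'mismatching blocks out of', TEST_MIB )
finally:
    os.close( fd )
Bear in mind that without O_DIRECT the read-back can come out of the page cache rather than off the platters, so a purpose-built tool like badblocks -w from a linux live usb is the better way to do this for real - the sketch is just to show the principle.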
>>17322 >Alternatively, stick your replacement drive into your computer and run some program that writes a predictable pattern to disk, reads it back and reports errors. I mean the drive you get after this - one that *should be* working perfectly. >>17319 >This is true. It might be the equivalent of to give a machine gun to a monkey. I wouldn't say that. I would compare it to some kind of surgery - you have very precise tools, with which you can cure a patient or kill them instead. You need training and experience to master those tools, and if you don't have enough practise, things might go south rather quickly. However, you really need to have these tools, otherwise you can not do your job correctly. While there are some people out there with lots of talent, all of them still need experience. The nice thing about computers is that they are easily "resettable" - so if you have a good backup strategy, you are golden no matter how badly you fuck up. And, of course, it's way easier to use a computer than cutting people open...
>>17322 >No clue what you mean by raw, unless you mean "unformatted" or something along those lines? That should mean that the partition table is gone - really bad news, but if only the partition table is gone, that can be recovered using testdisk. It showed up in windows explorer as if it were encrypted, I think. I've never actually right clicked an encrypted drive in explorer before mounting it in veracrypt, but on this replacement drive that turned this way on shutdown, explorer claimed it had 0 bytes of data on it. >Can you talk to the seller to tell them that you need to format the drive first? Maybe say that there is still important data on the drive, that you tried to copy it, and that it is taking a bit longer than expected due to the issues the drive gives you? Surely they will understand and extend the deadline until the drive has been wiped. It was actually an amazon purchase, and I was lucky I could even return it by the end of the month (the deadline), because I bought it the day my old hard drive corrupted my data (November 17th). If it were a strict 60 days return timeframe, it would have already been over by the time I thought to return it. I couldn't figure out how to secure erase the drive anyway. I watched like three different youtube videos in 2x speed that had "secure erase" or similar in the title, and they all recommended DBAN or seemingly freemium windows programs. It would make me think "secure erase" isn't a distinct thing had I not seen it explicitly mentioned on wikipedia under I think the "nwipe" wikipedia page, which I found under the DBAN wikipedia page. It had a single line saying something like "4 out of 8 manufacturers were found to have not implemented secure erase properly" or something. DBAN is currently 9.34% done with the same PRNG pass. 17,993 kb/s (like the highest I've ever seen it at). 83 hours remaining. Again, I just didn't even know how to secure erase even if I wanted to try, so I thought I would just maximize the single-digit percent of DBAN until the hour I return it today. I guess I will try a little harder right now for my own sake. I can try running memtest tomorrow. I don't have access to another computer though, to rule out "mainboard/SATA controller" problems. I don't know.
>>17323 Kind of outdated. A modern kernel already does way too much for you, so 'precise' is the wrong word to use today. There was a time when you could trigger segfaults by just setting your screen resolution one pixel too big, which would overflow the framebuffer, leading to pagefaults, leading to completely random parts of memory being overwritten. If you try to overflow the framebuffer on a modern kernel, like with cat /dev/random > /dev/fb0, it will just fill up the screen with static and spit out an error saying device full before the overflow happens. The segfault doesn't happen even when it should, because you never really communicate directly with hardware anymore. Modern kernels have moved from taking instructions to taking suggestions, pretty much the same mentality as a compiler: "don't tell me what to do, just tell me what you're trying to do and I'll figure it out, cause we both know you'll just fuck it up"
(13.02 KB 595x376 Untitled.png)

>>17301 Anon, sorry for my being completely incompetent, but will you please spoonfeed me why this didn't work? I imagine it has to do with there being no yellow text to show you're in the output directory like last time >>17003, but I didn't want to do things I know nothing about without being told to do so. I also just went back and saw that when you were walking me through using ddrescue to produce the image, there was a space between the "/dev/sdx" part and the output name. So I would think to do that for the input in this case. But, again, I don't just wanna add spaces to commands on my own when I don't know what I'm doing at all.
>>17325 Depends on how you look at it, I think. Sure, stuff like direct access to memory is no longer possible on most distributions (hell, on CentOS you have to modify SELinux permissions just to have apache listen on something other than port 443 or 80). Or you no longer have to calculate your partition sizes manually, or run losetup before mounting an ISO file, since mount now does all of that stuff for you. But, at least for me, right now is a bit too "managed" for my tastes, but still acceptable. I don't want systemd-networkd managing my interfaces (/etc/sysconfig/network-scripts worked very well!) or nonsense like that, but this seems to be the way everything is headed. Gone are the days where your "unit file" is just a set of shell scripts you could easily modify if you so chose to. Now, everything has an API, so you can abstract all of the pesky configuration away. And all of those management interfaces really don't give you the same amount of control as you had before. Hell, there is a tool for managing your GRUB configuration called grubby, because RedHat thought that the grub environment block is the right place to store boot configuration. Want to have a good old-fashioned grub.cfg with your own shell script in it? Good luck editing these entries without grubby! However - compared to windows, stuff like "easy" access to GPT and MBR records or reading metadata about disks is still quite valuable. I for one never needed to access anything like the framebuffer directly. But yeah, I also see that lots of stuff gets removed for "security" and "user friendliness". I guess it's only a matter of time before some of the good stuff gets axed, as well, unfortunately. >>17326 There is no "/dev/sdc2/drive.img" - you have to "mount" /dev/sdc2 if you want to access the files on there. If you will, /dev/sdc2 is just a really big "file" containing all the 1s and 0s of the second partition of /dev/sdc. If you have your drive image in explorer, it should be in /cygdrive/<drive-letter> (Cygwin emulates your windows drives there). That's why these commands are dangerous, you don't deal with files, but with ones and zeros directly. I haven't read up on anything you did, but I hope you have a backup of all of the disk you are attempting to overwrite right now (/dev/sda)! Yellow text is the directory you are currently in. I was in /cygdrive/e in the screenshot, so in E: in my windows. So you will probably want to do cd /cygdrive/<letter>/ and ls (it should show you a drive.img file) and then you can do dd if=drive.img of=/dev/sda bs=1M status=progress conv=fsync. But really make sure that the drive is correct (e.g. unplug it, check /proc/partitions, plug it in again and see which entry just got added!). The drive may change letters (you re-plug it and it's no longer /dev/sda, but instead /dev/sde, for example), but make sure you get the right drive! And if it did change name, use the current one. I'll be gone now, so please, if you're unsure, leave everything as is.
(41.76 KB 1065x656 1.png)

(46.26 KB 1007x623 2.png)

(28.12 KB 875x552 3.png)

>>17313 >Thank you for this report. When you say if you switch the default browser, is that on hydrus's end, or your OS's? In my OS. I meant LibreWolf, Palemoon, Waterfox, Falkon. >I've had big problems trying to tell OSes to just 'open this link with default browser', so if you hit options->external programs, I have a line there to put in a manual browser launch path. This tends to fix most of these problems. Let me know if that doesn't do it. Okay, I just did it but it fails with all browsers I tried. See pics.
>>17327 Thank you anon. I haven't done anything yet, but "sda" in my screenshot is my second replacement hard drive. It just came out of the package and has nothing on it; it doesn't even show up in windows explorer. I can't really unplug and replug the drive at my leisure, because I only have a laptop that can fit a secondary hard drive. I have to shut it down to take out or put in a drive. But, I somewhat understand (I didn't immediately understand what you meant by: >If you will, /dev/sdc2 is just a really big "file" containing all the 1s and 0s of the second partition of /dev/sdc. >That's why these commands are dangerous, you don't deal with files, but with ones and zeros directly. But using "ls" to check the .img file is there seems to be enough. The output drive doesn't even have a windows drive letter in my screenshot. I don't think I can fuck it up now with your help. Thanks again. Take care.
>>17329 Responding to myself to say that the dd command just finished, but the disk still doesn't show up in windows explorer, and in "disk management" (default windows program), it says the entire disk is "unallocated". When I look at it in testdisk, it sees the contents of the image on it. But there is no way I will be able to boot from it in this state. So far it's behaving the exact same way as the dying hard drive I had to return. This is really annoying. Again, for some reason I can't install veracrypt on my old hard drive that corrupted my data, but I was able to install it on an image created of that hard drive, when put on another hard drive. This image that shows up in windows as "unallocated" had the veracrypt install. I already tried copying the veracrypt program files folder and putting it on the old hard drive that can't install it for some reason, and it doesn't work. When it rains it pours I guess. Murphy's law. It wasn't enough for my hard drive to have corrupted my data, which I rendered irrecoverable in my tech-ignorant panic. I now have an even more damaged operating system than before. The less damaged operating system I was using before is now showing up as "unallocated", because the replacement hard drive failed spectacularly in this way.
>>17330 Wow that blows. Sorry dude.
>>17331 Thanks for the empathy. If there's anything to learn from my data loss horror story, it's that you should always make backups of your data while it's still intact. Even if you're in my situation, where 100% backups are impossible, so you can't even cope with learning how to backup, because it's a reality you can't reach, you should at least read the out of context "backup" link on the off chance that there is a limited-scope backup you can do, to preserve your settings/profile, even if the majority of the data cannot be saved in your situation. Even worse down the line, if you can't even do the limited-scope backup (or even if you can, or can even do the full-scope backup), at least educate yourself on how to operate in the event of something bad happening to your data. Understand how to image your hard drive, as the FIRST STEP. In my fear and panic I made the mistake of thinking I had to decrypt my hard drive first for an accurate image, because when I had imaged it using "Macrium Reflect" (windows freemium program) once in the past, my veracrypt password didn't work, so at the time I had decrypted it, cloned it, then encrypted it again. When I did that after my hard drive corrupted my data, I rendered all the corrupted data irrecoverable. To say it again tersely:
1. backup your data
2. if you have no hope to backup all your data, at least force yourself to read the out of context "backup" resource anyway, to check if there is a limited-scope backup you can do, to preserve your settings/profile, even if you can't backup the broad-scope data
3. educate yourself on how to operate when something bad happens to your data, so you can just go through the (already educated) motions without thinking, instead of having to trust you will be of sound mind in the moment, because you might not be
I have heard a lot of data loss horror stories. But, every single one of them ended in the one tip of out of context "backup". That didn't save me when a full-scope backup was an impossible reality for me. That didn't teach me to check if limited-scope backups were possible. That didn't teach me to educate myself on how to operate in the event of data being compromised. I hope this happening to me can at least prevent it happening to someone else.
>>17332 >Understand how to image your hard drive, as the FIRST STEP. I want to slightly amend this. If I had used a (NON-DESTRUCTIVE) HDD health check program to check for bad sectors, and found none, and just restarted my computer and ran "chkdsk", my corrupted data would have been fixed 100%. Sometimes you can do that. But, for me, because I could reach a backup only after something bad happened to me, I didn't try recovering as the first step, and instead tried copying as the first step. Only, I made the mistake of thinking I had to do one step before copying, which rendered the corrupted data irrecoverable. I just wanted an accurate image of the hard drive, because the only method I knew at the time wasn't accurate, and I was panicking too much to even cope with looking it up. The thing is, the program I used to image my hard drive is actually mentioned dozens of times all over the veracrypt forums, and I even found threads mentioning veracrypt on the forums of the program itself. I even searched generic imaging articles, and they all recommended this one fucking program. People would complain that their veracrypt password didn't work, but I never saw anyone post a solution. So I just figured an accurate copy was impossible, and never thought of it again until my data corrupted. Then you know the rest. It's just an awful situation all the way down. I can wish someone's words gave me hope when I couldn't cope with reading anything behind an out of context "backup" link, because I knew that was impossible for me. But ultimately, I wish I had never been vulnerable. I tried my best, but I still failed myself.
>>17333 More than most people in this world, we are all here bound together, in perfectionism. We want to pass every test the first time. But this isn't always possible. We fail, we fuck up, we make dumb mistakes trying to prevent bigger ones. It happens every day, and unlike most people we can't just shrug and move on. This need to beat yourself up is understandable. It's how we help drive lessons home. To ensure this never happens again. All I'm saying is you're hurting, and I get it. But shit happens bro. Don't get too caught up in maybes. You did the best you could and this time it wasn't good enough. Fuck it bro. Get it next time for sure. Meanwhile take a break. Take a breather. Calm down.
>>17232 >Thank you for this report. I am afraid I do not get this problem. When I say 'must be pixel dupes', I do not see that pair, and when I have 'can be' or 'must not be', I do see that pair (as you would expect). If you hit database->file maintenance->manage scheduled jobs, how many 'calculate file pixel hash' jobs do you have? 200k, that's probably why it fucked up.
>>17314 >However if you lost part or all of your client_files but have your db files, then you can technically 99% reliably recover anything that you previously downloaded, since they will have a URL in the database and I have a maintenance routine to redownload such missing files. Wait, I'm confused, how exactly does this work? I thought the db was just, loosely speaking, symlinks to the client files organized in a way hydrus can understand. Are you saying that anything it finds a source url for, it will grab? If so, then yeah, that's a lot, but not everything. >I strongly encourage you to maintain a full backup of everything you own. Not just hydrus, but your documents and digital family photos and everything else. I once lost a drive that had about 75,000 files in it, it fucking sucked big time. Yeah unfortunately it sucks trying to backup 20+tb of data on my server. That's why it's on my server, using ZFS. It's SOME redundancy, but yes I know it's not a backup, redundancy is redundancy not a backup. It's just insanely expensive. I have my most important stuff backed up. I plan to eventually do it. But I'm already running out of space, nevermind backup storage lol. It will probably be a couple hundred to upgrade this server, then probably like 1500 or so to build a clone for a full backup which I'd like to do. If you don't have one though, I highly suggest a server. You can build one cheap if you don't need much storage. It will give you redundancy and stuff.
>>17330 Well, you really seem to be unlucky. I would suggest you just reinstall (or try another OS, maybe windows 8.1)? You can probably get the data off the image fairly easily, but get a "stable" OS first. Then, mount the image in systemrescuecd and copy your files over to the new OS drive later. And, you know... maybe try to get a desktop, so that more than two disks fit in? >>17332 Data only exists if it is backed up, remember that and live by it. There are no "limited scope backups". If the roles were reversed and you had only your DB, you would download all files again and I guarantee that at least 10% will end in 404s. And then, you have a URL, probably can remember the file it was representing and feel terrible forever losing it. Data and metadata are equally important, but, if forced to choose, you can replace the metadata with enough brute force by investing time. Raw files are gone forever. >>17336 I was confused by the docs as well, since it refers to files and db-files as "database" (or I might be retarded since ESL). Strictly speaking, you have your data that is managed by hydrus (images and videos) in one location, and the hydrus metadata in 4 DB files. One of those 4 is a cache db and can be nuked (I'm not devanon, take this with a grain of salt!), the others store URLs, tags and tag relationship information, session data, downloaders, ... Oh, and then there are thumbnails, but those can be regenerated. Since hydrus remembers URLs, you can tell hydrus to check all files, if they are missing and hydrus still remembers the URL, it will re-download the file. If I can suggest some software for your server-sync, look into zrepl - https://zrepl.github.io/ - Mirrors ZFS file systems, can do multiple datasets at once, runs as daemon, can keep non-replicated snapshots, good stuff. Also, I just use ZFS on my computer in raidz1. I have an extra disk spare, so raidz1 should be good enough. And then, use zrepl to send the changes to a server with raidz2. Of course, make sure that your computer is specced with enough RAM and a CPU with enough threads (for compression) - ARC is really a blessing.
>>17337 >Data only exists if it is backed up, remember that and live by it. There are no "limited scope backups". If the roles were reversed and you had only your DB, you would download all files again and I guarantee that at least 10% will end in 404s. And then, you have a URL, probably can remember the file it was representing and feel terrible forever losing it. Data and metadata are equally important, but, if forced to choose, you can replace the metadata with enough brute force by investing time. Raw files are gone forever. I wasn't saying to be satisfied with only having your four .db files backed up; I was saying if I knew I could have done that, I would have. Backing up all my media was impossible for me, so I couldn't even cope with reading anything past the out of context "backup" resources in hydrus. My main fear was losing all my tags. If I had them backed up, I would have at least been far less scared. I might have even been able to cope with educating myself first, instead of acting from tech-ignorant panic that would render the data irrecoverable forever. I do think backing up everything is always the best option. But I was saying that my situation could only have been prevented by giving me hope when I knew backing up all my data was impossible. If not "hope" in the form of educating me of the fact that I could back up my four .db files to save my hydrus environment, then at least informing me on how to operate in the event of my data being compromised, so I can just do the actions I already know, instead of trusting that I will be of sound mind enough to educate myself first in the moment.
Two questions: a) is there a simple-ish way to share files over LAN and b) if there is, would it work using a VPN like hamachi or similar? My ISP doesn't allow port forwarding for some arcane and asinine reason so using the local booru is not an option if I'm not mistaken. I have next to no experience with anything networking related so I'm kinda in over my head. I just want to share my budding (no pun intended) collection (~1k files, like 2 gigs) with my friend to start his own.
>>17264 Hydrus dev, maybe you've given me the means to know the tags of my missing files already (I am the anon whose hard drive partially failed, and I lost my client.db + random "client_files" media), but I might be a bit slow. I don't know what to do at this point. With your help I produced a "my_hashes.txt" that has all the hashes of all the "client_files" media that USED TO be in my hydrus. Then, again with your help, I used the windows "dir" command to produce a "hashes.txt" of the CURRENT "client_files" media in my hydrus (which is to say, post-data loss). I don't know how to cull the filenames and file path from the "dir" command output, nor do I know how to compare both lists and output only the difference. Then past knowing the difference (which would be my missing media I want to retry), I don't know how to find out what tags they had. I forgot to mention too that with your help I produced a ".hta" file that I can import into a new hydrus install to have all my old tags. But in my tech-ignorance, I don't think that has anything to do with finding out the tags of my missing media. So I might not need to boot hydrus at all to retry my missing media, since I am missing my client.db anyway.
>>17340 >I don't know how to cull the filenames and file path from the "dir" command output >I don't know how to cull the filenames >filenames I meant file extensions/filetypes or whatever
>>17340 >I forgot to mention too that with your help I produced a ".hta" file that I can import into a new hydrus install to have all my old tags. >".hta" file I see I kept saying in previous posts it's a .hta extension, but I see now it's actually "my_hta.db"
Dev, regarding Qt 6, will you be going to the latest LTS version (6.2.x) and staying there, or will you always be upgrading to the latest version each release? For example, 6.3 releases in March, but it isn't LTS. Also, besides general UI things, do you use Qt for anything else, like webpage navigation?
>>17292 >>17315 Can someone make one for wikieat.club and waifuist.pro?
can you guys run in yunohost?
I had a good week. I fixed a heap of bugs, added a couple new ways to scale thumbnails, and finished the basics of multiple domain file search. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=tGgmv1yEI-U
windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v472b/Hydrus.Network.472b.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v472b/Hydrus.Network.472b.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v472b/Hydrus.Network.472b.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v472b/Hydrus.Network.472b.-.Linux.-.Executable.tar.gz
I had a good week. There are some neat new UI features and several bug fixes.
new stuff
Although it isn't super useful yet, searching in multiple file domains is now complete. If you click the file domain button on a search page's tag autocomplete dropdown, there is now a 'multiple locations' entry. This lets you select a union of several domains. New users will probably only see 'my files' and 'trash', but advanced users will see more, and deleted domains too. Please feel free to play with this. Autocomplete counts should have accurate ranges and the union file searches, limited though they currently are, do work. We are a step closer to fully supporting multiple 'my files' services.
Under options->thumbnails, there is now a dropdown for 'thumbnail scaling'. The default remains 'scale down only', but you can now 'scale to fit' (which scales small things up) or 'scale to fill', which crops and scales up so the thumbnail fills the whole thumbnail space. Give it a go--the animation as one format changes to another is accidentally one of the coolest things I have done.
If you right-click on the page tab bar, there is now a 'pages' submenu, listing all the pages at or below that level, that lets you quickly navigate to them. This is a prototype, basically a copy of the same thing you'll see in a web browser, so let me know what you think--I expect to put more time into it.
bug fixes
I fixed a stupid issue where search pages were refreshing on session load. I regret this slipped through (again!), so I made an explicit test to catch it in future. Sorry for the trouble!
I also fixed an issue with 'do work on shutdown' cancelling repository processing and a couple of other jobs as soon as they started. The 'fast shutdown' logic was working a little too well!
Images with an alpha channel are now tested on load--if the alpha channel is completely opaque, it is stripped. This saves a little memory and CPU, and it means they will be correctly detected as pixel duplicates of their non-alpha-channel'd equivalents.
full list
- highlights:
- the file domain button of every autocomplete input now has a 'multiple locations' entry. this launches a checkboxlist of all possible search locations and allows you to search more than one domain at once. it works, too! in future, when we can have multiple 'my files' services, you'll be able to choose here unions of what to search. users in advanced mode will see repository updates, all local files, all known files, and the new deleted file domains on this list. I removed the deleted file domains from the front menu because I expect them to be rarely used
- in _options->thumbnails_, there is now a 'thumbnail scaling' dropdown. you can set it so thumbs only ever scale down (which remains the default), scale to fit (i.e. very small images are also scaled up), or scale to fill.
the 'animation' as thumbnails refit and delayed-regen themselves to 'scale to fill' is accidentally one of the coolest things I have done
- removed the old 'EXPERIMENTAL: thumbnail fill' option. the new mode works essentially the same, but faster and higher quality
- in the page tab menu, there is a new submenu 'pages', which shows all the pages at or below the current level. if you right-click on a page of pages tab, it will just show for that page of pages. click any of the entries, you will select that page. it is a web browser-like quick navigation menu, let me know what you think!
- rejiggered the page tab menu a little, reordering groups a bit with nicer separators and putting 'select' navigation on the menu even if you click in greyspace
- fixed a problem in page tab menu logic where if you right-clicked on greyspace, it would render the menu for the bottommost page of pages row rather than the one actually clicked
- last week's update where a mouse release event will no longer fire in the shortcuts system if the mouse moved a decent distance between press and release should now work in the media viewer canvas when dragging is set to anchor the mouse in place. some advanced users may wish to try setting archive/delete to work on mouse release and use left click to drag
- .
- bug fixes:
- fixed pages force-refreshing file queries on session load. this has never been intentional, but it slipped through again and was happening for a month or two now. I have added an explicit test to my routine to make sure this doesn't happen again, sorry for the trouble!
- fixed a problem in the recent fast shutdown code that was accidentally also shutting down some maintenance work like repository processing as soon as it started, even if 'exit and force work' was chosen
- all images with a completely opaque alpha channel will now have that alpha channel dropped for the new pixel hash calculation, meaning they will now match with regular non-alpha images with the same colour pixel data. in fact, all images with an opaque alpha now have that channel dropped on load, which will save a little memory and CPU any time they are handled (issue #770)
- if the 'durable' temporary database exists on boot, it is now deleted and a fresh one created rather than trying to re-use the old one (which would not have any useful information anyway), and a note is made to log. one user recently had a problem where an existing corrupt temp dir was stopping boot, which this fixes
- .
- misc:
- updated the windows build to use sqlite 3.37.2, the sqlite3 in the db dir is also updated
- the deleted files system now neatly cleans up old file deletion reasons on file import and file undelete
- cleared out some old thumbnail generation code, including deleting an old and now obsolete optimisation where too-large thumbs were scaled down to make new thumbs rather than revisiting source. since our thumb situation is more complicated now, this is gone in favour of nicer quality thumbs and simpler code
- fixed up some upcoming database maintenance code in my new modules
- updated and cleaned the code in the old wx-converted checkboxlist and replaced some awkward old access routines
- cleared out some old HTA archive code
next week
Next step in multiple local file services is a clever trash that can handle deletes from and undeletes back to multiple file services. This'll be yet another thing updated to work for n services while keeping n = 1, so not a super sexy update for now, but it may be the only tricky thing left.
What image library do you use?
(2.85 KB 273x64 165.PNG)

For a while now, I've had a page of pages called "artists" that just contains a bunch of search pages each for a single creator tag. However, I've recently hit pic related. I realize I could just add each tag to my favorites, but I like having each one be a different page because I can easily jump between them. Is there some better way to do what I'm doing? Also, I noticed that I can actually change the number of pages at which it warns me in the options. So what does it even mean by "program stability is affected"?
>>17347 >If you right-click on the page tab bar, there is now a 'pages' submenu This is really helpful, when you have so many tabs that they don't fit on your screen it can get quite slow and cumbersome to scroll with the little arrows. I noticed recently that if one tab's title gets updated (like if it's a downloader in progress) the scrolling just resets, which is super annoying. This new menu will help. >Under options->thumbnails, there is now a dropdown for 'thumbnail scaling' It would be neat if there was an option to make the thumbnails themselves scale in size to fit the entire width of the thumbnail area, getting rid of the white space.
Any good Gelbooru / Danbooru like clients for hydrus yet? That's what it's really missing. It would be nice to have something like a Gelbooru interface into the Hydrus database. I've searched around, but haven't found anything. I'm surprised no one has made one yet. I don't even care about it being able to share over the internet. I just wanted it for personal browsing. I doubt if my internet connection could take going public with my near 1 million pics :p
>>17351 Have you considered just using a local install of danbooru instead of hydrus? Cuz there's some features that danbooru has that hydrus doesn't and vice versa. It's a major pain in the ass to set up, but last I tried it worked. Whether that's worth it is really up to you; for me it wasn't.
Am I a retard or is e621 downloading broken? I imported cookies with Hydrus Companion after logging in and made sure they were there in Hydrus, but I still get instant 403s.
>>17353 have you tried re-importing the e621 downloaders
>>17349 As someone with 700+ pages, nothing ever broke but the program gets (understandably) laggy at times. The program will warn you again at 500 pages though.
(5.35 KB 296x76 ClipboardImage.png)

>>17347 Hi Devanon, thanks for your hard work. I've noticed a certain problem that occurs sporadically in this version as well as the previous one. When I try to drag images from one page to another page (both of which are under a page of pages) to merge the two pages, what would happen is that the files that I dragged over would appear in an entirely new page next to the page of pages. So in the pic above, I dragged 422 files from the page to the right of 'gel' to 'gel'. The files were removed from the page, and appeared next to the page of pages. Dragging files from one page to another seems to work fine on the main row (not under any page of pages)
>>17347 New version fucked dragging files into subpages. If you drag it into a page it works as usual, but trying to put them into a subpage just sends them to a new page on the main level instead.
I love that in a gallery I can select multiple galleries and right click -> copy queries to start a bundled query, but I wish bundled queries would allow me to right click -> copy query as well, getting back the original queries in the same format; I don't know if this is a bug or not, but doing that copies "n queries", where "n" is the number of queries previously pasted (so it's not useful). I don't know if this is a bug or if the queries are just not recoverable at that point, but I just wanted to make sure you knew. Thanks for reading!
I wanted to report what I think is a bug with downloader parsers: if a parser (this is also true for subsidiary parsers IIRC) has both at least a subsidiary parser and one or multiple content parser(s), and the subsidiary parser returns nothing for a page, then the content parsers will not return anything either even if content would be parsed if tried manually (tested with the "load test data" function). An example of this behavior can be found with the parser I posted in >>17121 ; this is a forum parser and I'm parsing per-post, then per-image. I had the next page parser (as a sub-gallery URL) in a top-level content parser, but it returned nothing if it encountered a page that contained no image (as the sub-parser would return nothing). A workaround I found was to put that next page parser in its own sub-parser; it's ugly but it works. Hopefully this report makes sense. Another super small bug I found in the parsers list (but may be present in other lists as well): by default, the list is ordered by name, case-sensitive, but if you open then close a parser, it'll refresh the list (probably to take name changes into account) and reorder by name but this time case-insensitive.
>>17317 Great, I am glad, thanks for letting me know! >>17318 Yeah, this is a neat idea. Hydrus has always been bad at dynamic UI like this, but this is something to think about for the future. >>17328 Damn, I'm sorry. Can you please make sure you have an executable path and %path% set in that options page for web browser launch path and then turn on help->debug->report modes->subprocess report mode and then try opening one of these links? It should pop up a bunch of information (also writing it to your log), which you should not post here, but the top line should say what command it is trying to launch. Does that seem correct, or is there an obvious problem? How about later, when it refers to stdout and stderr? Any messages in there?
>>17335 I think let those jobs continue to clear out and we'll see if that changes things. I just had another report today from someone of false positives. The thing is, the errors I know about in this system (including what that regen is fixing) should have only caused false negatives (i.e. they are pixel dupes but don't show up as such). I am going to spend a bit of time on this to investigate what is going on. I can't quite explain it. Please let me know if you discover anything new. >>17339 If you have next to no experience, trying this might just be too much of a pain in the neck for now. I have complicated ways to share files (basically running your own file repository, which is not user friendly), and I have a plan to make a simple way (getting clients to talk to each other directly using the Client API), which is nowhere near ready yet. If you just want to share some stuff with your friend now, I think your best strat is to just give him access to an ftp server or dropbox that you export the files to, or a giant .txt listing exported URLs. If it is only a couple gigs, zip it up with 7zip and a password and stick it on Mega for him.
>>17339 Oh sorry, to follow up on the Hamachi question specifically, yeah I think anything hydrus related would work over Hamachi, everything is just IP addresses and ports on my end, but the setup on that is in my experience always a nightmare. Adding on setting up a hydrus file repository, which is also a pain, I'd say don't even get started. Just zip and Mega it to your friend, you'll save yourself a ton of hassle. >>17340 Ok, no worries. Let's do some python. I assume you don't have it, so download and install this: https://www.python.org/downloads/release/python-3910/ Scroll down, and if you are windows, you want 'windows 64 bit installer'. Install it, if it asks if you want python in your PATH you say yes, then you have python. Make a new file called hashes.py, open it in a text editor, and paste this in. You'll need to change the obvious bits for your filenames:
import os

with open( 'my_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    old_hashes = set( f.read().splitlines() )

with open( 'current_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    current_hashes = { l.split('.')[0] for l in f.read().splitlines() }

missing_hashes = sorted( old_hashes - current_hashes )

with open( 'missing_hashes.txt', 'w', encoding = 'utf-8' ) as f:
    f.write( os.linesep.join( missing_hashes ) )
Read it too, see if you can get an idea of what I am doing. Make sure your 'my_hashes.txt' and the 'current_hashes.txt' are in the same directory as this .py file and then double-click the .py. It may throw an error if I messed up somewhere, but with luck it will create a new file 'missing_hashes.txt', that has the hashes that were in the old structure but no longer in the new. At this point, have a look at how many files we are talking. If it is 20, then we can do the next step one by one. If it is 2,000, we'll want to write a script again to handle it. Might as well do a test with one of them though. So open up the sqlite3 executable again, in the same dir as your HTA.db, and try this:
.open hta.db
SELECT tag FROM tags NATURAL JOIN mappings NATURAL JOIN hashes WHERE hash = X'abcdef';
.exit
But instead of abcdef, put a whole hash from that text file. Fingers crossed, if all data was preserved and we migrated everything correctly, it should give you a list of tags for that file. Let me know how you get on. If you plan to put these tags in another program, you may need a script to export them from the SQLite database, which I can help you with.
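And in case it does turn out to be thousands, here is a minimal sketch of that export script, assuming the hta.db schema is exactly the three tables in the SELECT above, with the hashes stored as raw bytes (which the X'...' form suggests). I have not tested it against a real archive, so treat it as a starting point:
import sqlite3

conn = sqlite3.connect( 'hta.db' )

with open( 'missing_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    missing_hashes = [ line.strip() for line in f if line.strip() ]

with open( 'missing_tags.txt', 'w', encoding = 'utf-8' ) as f:
    for hex_hash in missing_hashes:
        # the hashes are stored as raw bytes, so convert the hex string before querying
        rows = conn.execute( 'SELECT tag FROM tags NATURAL JOIN mappings NATURAL JOIN hashes WHERE hash = ?;', ( bytes.fromhex( hex_hash ), ) ).fetchall()
        tags = sorted( row[0] for row in rows )
        f.write( hex_hash + ': ' + ', '.join( tags ) + '\n' )

conn.close()
That would give you one line per missing file, 'hash: tag, tag, tag', which should be enough to pair back up with whatever you redownload.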
>>17362 >Just zip and Mega it to your friend I'd just add that solutions like croc ( https://github.com/schollz/croc ) may be more privacy-friendly (and probably faster, too) than Mega; you'd probably still want to compress everything beforehand though.
>>17343 I haven't decided yet. A user who knows a lot more about Qt than me is watching it for me. He has a test version of the program running in 6 and he says it is basically an ok transition. He has reported several bugs to them that will make our life easier if fixed, so we are waiting to see if they do that, and then I am going to put out a test build for people to try. I am going to try and have both 5 and 6 builds for a little while. I expect this to happen in the next few months. As for how fast I will update the version, I will try not to go bleeding edge. It always causes more trouble than benefit. I'm pretty sure 6 is not going to work in Windows 7 either, just as a matter of course. I'm easy though. You know more about this than me, so any advice and other feedback you have as we roll this out would be appreciated. I don't use Qt for any network stuff. Outside of UI, I use it to draw the serialised-downloader-png files, I think that is it. For the downloaders, if you are interested, I mostly rely on 'requests' and 'beautifulsoup'. >>17345 I am afraid I don't know. If that host lets you run an executable, maybe. If it can do docker packages, then probably. If you are new to hydrus though, I strongly recommend you not try to run any hydrus servers yet. Get a feel for the client first, and if you want to try a server, run it on your home network first to learn how it all works and make sure it does what you want. >>17348 PIL (actually a fork called Pillow), OpenCV, and FFMPEG for a couple weird things like apng (and all video). I recommend them all, they each do cool things (PIL does format-specific precision, OpenCV is fast, FFMPEG is obviously amazing for any video/audio needs).
>>17349 >>17355 This old 165 limit is mostly an artifact of when we had wx instead of Qt as the UI library. There actually was a stability problem back then around this number. options->gui pages lets you change the number it moans at. Set it to 2,000 and it won't moan, including the old 500 limit. I'll rewrite the text around this too. The problem now is the laggyness and program load time etc..., so I still want to encourage people to have lean sessions if possible. >>17350 >the scrolling just resets which is super annoying Yeah, sorry, this is some Qt thing, I don't think I can fix it. Maybe Qt6 has a fix, we'll see. >It would be neat if there was an option... Yeah, try the 'fill' setting under that new dropdown. It'll crop the source to make the thumb fill either by height or width, leaving no white space. Let me know if that is different to what you would like. >>17351 Here's the current options, mostly still under development: https://hydrusnetwork.github.io/hydrus/help/client_api.html Hydrus Web is neat and works on your phone. Anything using the Client API over internet is a bit of a pain to set up unless you are comfortable with hosting servers and know what a port forward is etc... >>17353 Works ok here, on the default e621 downloader. If you clear your e621 cookies under network->data->review session cookies and try again, does it work then? Maybe the downloader doesn't work for a logged in user? I know e621 has failed for people before just due to server trouble, but that usually gives 503 or whatever, not 403.
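To give a rough idea of what driving the Client API from a script looks like (re the booru-style browsing question), here is a minimal sketch of a search-and-fetch round trip in python. It assumes you have turned the Client API service on in the client and made an access key for it, and the endpoint and parameter names here are from memory of that documentation page, so double-check them there:
import json
import requests

# assumptions: Client API enabled, default port, and your own access key pasted in
API = 'http://127.0.0.1:45869'
HEADERS = { 'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE' }

# search for files by tags
params = { 'tags': json.dumps( [ 'blue eyes', 'long hair' ] ) }
r = requests.get( API + '/get_files/search_files', params = params, headers = HEADERS )
file_ids = r.json()[ 'file_ids' ]
print( len( file_ids ), 'results' )

# pull the first thumbnail, if there is one (it may come back as a jpeg or a png)
if file_ids:
    r = requests.get( API + '/get_files/thumbnail', params = { 'file_id': file_ids[ 0 ] }, headers = HEADERS )
    with open( 'thumbnail', 'wb' ) as f:
        f.write( r.content )
Hydrus Web and similar projects are doing essentially this under the hood, just with a proper UI on top.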
>>17356 >>17357 Thank you, sorry! I fucked this by accident with the new 'pages' submenu. I had to move some focus stuff around, somehow it broke file drag and drop but not page tab drag and drop. I'll make sure to fix it for next week. >>17358 Ah, yeah, sorry. That's a hack and a problem in the current network engine--I can't actually pull a query text from a Gallery URL, so when I do the 'copy queries', I am really copying the 'name' of the objects involved, which I set as the query text when you make them. And that 'paste multiple' uses a different label. I hope to have proper support for this when I make the next version of the downloader engine. Most of these objects need an overhaul. For now I find it is a good trick to just have a sticky note or Joplin or something working alongside your client to keep track of larger pastes you do. I do this to keep track of pending subscriptions on different sites. >>17359 Thank you for these two reports. I understand the first one. I will think about this carefully. Subsidiary parsers are kind of a hack to get 'n posts per page' parsing, so I think the logic is saying 'if n = 0, then no content'. It seems like some content, like your 'next page url', is appropriate at any stage. I'm not totally sure, but I will give it some time and thought.
>>17365 >7318 Thanks! I'll look at it.
(1.09 MB 1409x789 ClipboardImage.png)

Here's an idea for pseudo video duplicates: thumbnail comparison. Re-encoded videos will, presumably, have near-identical thumbnails. That would help with boorus that frequently upload different resolutions and formats of the exact same video. Maybe have a maximum number of potential duplicates, and if a file goes over the threshold it's ignored (e.g. black thumbnails that would match everything).
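(A minimal sketch of the thumbnail-comparison idea above, using Pillow since that is already in the hydrus stack; the file paths are made up and this is not how hydrus's actual duplicate system works.)

from PIL import Image

def average_hash( path, size = 8 ):
    # shrink to a tiny greyscale grid and threshold each pixel against the mean
    img = Image.open( path ).convert( 'L' ).resize( ( size, size ) )
    pixels = list( img.getdata() )
    mean = sum( pixels ) / len( pixels )
    return [ p > mean for p in pixels ]

def hamming_distance( a, b ):
    return sum( x != y for ( x, y ) in zip( a, b ) )

# thumbnails of re-encodes of the same video should land within a few bits of each other
print( hamming_distance( average_hash( 'thumb_a.png' ), average_hash( 'thumb_b.png' ) ) )

An all-black thumbnail hashes to all zeroes and would match every other all-black one, which is exactly the pathological case the threshold idea would guard against.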
I'm using Tor as my proxy for http addresses (lets onions work), but this fucks with IPFS as its API address uses http. Turning off my http proxy lets IPFS work. I tried putting 127.0.0.1 into no_proxy, but then realized that it says you need a custom build of requests for it to work. What exactly does that mean, and how would I go about doing it? It would be nice if Hydrus could detect onion addresses and try to route traffic to/from them through Tor without using proxy settings, not sure how difficult that would be to implement.
If I move my "hydrus/db" folder from my Mac source install to my Linux (same version) source install it will just work right? And all the downloaders and options will be copied?
on mac Mojave, after updating from 438 to 439, when I try to open the app I get this error: LSOpenURLsWithRole() failed with error -10810 for the file /Applications/Hydrus Network.app. Here's the command I'm using to open it:
open Hydrus\ Network.app/ --args -d=/path/to/db -v
Has this come up before/thoughts on how I can resolve this?
(44.73 KB 517x405 pic 1.png)

(131.19 KB 1302x486 pic 2.png)

(113.73 KB 784x456 pic 3.png)

(44.93 KB 484x411 pic 4.png)

>>17360
>Damn, I'm sorry. Can you please make sure you have an executable path and %path% set in that options page for web browser launch path and then turn on help->debug->report modes->subprocess report mode and then try opening one of these links? It should pop up a bunch of information (also writing it to your log), which you should not post here, but the top line should say what command it is trying to launch. Does that seem correct, or is there an obvious problem? How about later, when it refers to stdout and stderr? Any messages in there?
Okay, I did the following:
1- Taking the Waterfox browser as the target (which is currently the OS default browser), I changed the command in "Options->External Programs" to "/usr/share/applications/waterfox-g3.desktop".
2- Then launching any of the links in the Help menu gives me a "Permission Denied" error. See pic 1.
3- So I changed the command in "Options->External Programs" back to "waterfox-g3 %u", as it was before. This gave me exactly the same result as shown in post >>17328 . In other words, it launches the browser but cannot send the URL to the browser. See pic 2.
4- Then I changed the command to "%path% waterfox-g3 %u". See pic 3.
5- But it gave an error. See pic 4.
Feature request: on galleries, a new option on right click: "Retry last page and allow search to continue", useful for forums. Right now I do search log -> scroll all the way down -> right click -> retry page and allow search to continue, so this would be a neat shortcut. This is mostly a workaround because, given the way forum threads are ordered (newest files last), I can't subscribe to them. Thank you! (And yes, I know that you may say "no" because it doesn't help with boorus (where, in the default order, newest files are first); there it would just retry the last page without finding new results, doing nothing.)
>>17361 >>17362 >If you have next to no experience, trying this might just be too much of a pain in the neck for now. From the documentation it seemed like this was the case, I just wanted to make sure it was before I gave up entirely. I was planning on just doing a dropbox after reading but I was kinda curious if there was a way to just use hydrus. Thanks for the help anyhow!
(1.16 MB 800x533 e twac.gif)

>>17372 >3- So I changed the command again in "Options->External Programs" as it was before with the command "waterfox-g3 %u". This gave me exactly the same result as shown in post >>17328 . In other words, it launches the browser but cannot send the URL to the browser. See pic 2. By the way, I forgot to mention it. This command won't report any error!!!
This is a big issue for using Reddit downloaders in Hydrus. It would be very helpful if this was fixed.
>>17376 Meant to reply to >>17038
I had a mixed week--unfortunately, some IRL stuff reduced my work time. I did however clean some code, fix a couple bugs, and integrate a cool new quick-navigation widget that a user wrote. The release should be as normal tomorrow.
>>17370 that will work afaik
>>17362 Hello, second anon you replied to here: the anon who was missing his client.db + 942 media files (you didn't know how many media I was missing yet, since I never told you, sorry. But I manually culled the windows "dir" command output of the thumbnail files at the bottom and the few lines at the very top that only included the folder paths themselves. And the difference in length between the two hash files was 942). Sorry for taking so long to try your command, to help me retry my lost media. Sorry. It was general depression + this having happened to me three months ago that made me feel like lifting my fingers for this was hard, but I just tried it today. Sorry again. The command didn't work. If it helps, the "current_hashes.txt" input is like: c:\hydrus network\db\client_files\fff\ffe5db3c6791b8a4c7ea8215a128cc9602577427e5647e575e2ae642246374b9.jpg and the "my_hashes.txt" input is like: FFE5DB3C6791B8A4C7EA8215A128CC9602577427E5647E575E2AE642246374B9 And the "missing_hashes.txt" output is all caps like the second input above, except "missing_hashes.txt" has a gap between every line, unlike the two input files. The "current_hashes.txt" input has a 254,811,929 length, and 2,359,342 lines, according to the bottom of the notepad++ window. The "my_hashes.txt" input has a 153,418,395 length, and 2,360,284 lines. The "missing_hashes.txt" output has a 158,138,958 length, and 4,720,565 lines. I also tried the hta.db thing (replacing "abcdef" with the all-caps hash from above), but nothing happened. It didn't output a file, or display anything in the window. There wasn't even an error. It just did nothing, as far as I can tell.
(637.70 KB 449x682 apotheosis.png)

Hey Devman, Just wondering if you ever thought of working with a raw filesystem partition/block device? Truly throw off the yoke of the filesystem, if you will. Also I don't know if you had the time to look at it, but just noting that the bug from >>>/hydrus/16565 is still present.
https://www.youtube.com/watch?v=F6OR66gZY54 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v473/Hydrus.Network.473.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v473/Hydrus.Network.473.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v473/Hydrus.Network.473.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v473/Hydrus.Network.473.-.Linux.-.Executable.tar.gz I had a mixed week. Unfortunately some IRL reduced my work time. There's a neat new widget to play with though! command palette A user has written a cool widget that helps you navigate the program by keyboard. I have integrated the first version and am interested in feedback. If you have nothing on Ctrl+P for your 'main window' shortcut set, you should get it mapped to that on update. So, hit Ctrl+P and you'll get a palette where you can type and press up/down and enter to quickly navigate to any of the pages currently open. If you are an advanced mode user, you will also search all of the menubar actions and also the current thumbnail selection menu. This latter part is unfiltered at the moment--you'll just see everything--so be careful. The system needs more polish, including filtering out these more advanced database routines, and proper display for checkbox items 'check' status, and so on. I can do a lot more with this widget, so give it a go and let me know what you think. I think some of the labels can probably be improved, and I am sure some would like to customise it a little. If you don't like Ctrl+P, just hit up file->shortcuts->the main window and re-map it! full list - misc: - fixed the recent problem with drag and dropping thumbnails to a level below the top row of pages. sorry for the trouble! - fixed a bug where the client would not load results sorting by 'import time' when the search file domain was a single deleted file domain - fixed a list display bug in the edit page parser dialog when a subsidiary page parser has two complicated string-match based content parsers - collections now sort by modified time, using the largest known modified time in their collection - added sqlite3.exe console back into the windows build--sorry, it was missing since the github build changeover! - added a note to the help about backing up when tight on space, which I will repeat here: the sqlite database files are very compressible (70GB->17GB on default 7zip settings!), so if you need more space on your backup drive, this is a good way to reclaim it - . - command palette: - a user has written a cool 'command palette' for the program! it brings up a type-and-search interface to navigate to pages or menu entries. - I have integrated his first version and set the default shortcut to Ctrl+P. users who update will get this shortcut if they have nothing else on Ctrl+P on 'main window' set. if you prefer Ctrl+K or anything else, you can change it under _file->shortcuts->the main window_ - regular users will get a page list they can search and select, advanced users will also get the (potentially dangerous) full scan of the menubar and current thumbnail right-click menu. I will be polishing this latter feature in future to filter out big maintenance jobs and show checkbox status and similar, so if you are advanced, please be careful for now - try it out, and let me know how it goes. the underlying widget is neat, and I can change its behaviour and extend it significantly - . 
- (mostly advanced) deleted file improvements: - files that have been deleted from a local file domain are now aware of their file deletion reason. this is visible in the right-click menu of thumb or media canvas - the advanced file deletion dialog now initialises using this stored reason. if all pending deletees have the same existing reason stored, it will display it, and if they are all set but differ, this will be noted and an option to not alter them is also available. this will come up later in niche advanced situations with mutiple file services - reversing a recent change, local file deletion reasons are no longer cleared on undelete or (re)import. they'll now hang around invisibly and initialise any future advanced file deletion dialog - updated the thumbnail and canvas undelete mechanism to handle multiple services. now, if the files are deleted in more than one domain, you will be asked to multiple-select which you wish to undelete for. if there is only one eligible undelete service, the process remains unchanged--you'll just get a yes/no confirmation if the 'confirm trash' option is set - misc multiple local file services code conversion work next week I had some success working on clever trash this week, but there's a bit more to do, and a lot of general cleanup/refactoring. An old 'my files' static reference is still used in about two hundred places, and almost all have to be updated. So I'll grind at that. I also have a whole ton of little work that has piled up. Fingers crossed, my current IRL problems clear up in a few days.
>>17381 >expecting a python kid to know how to do memory mapping higher chance of finding a jew with a foreskin
>>17383 docs.python.org/3/library/mmap.html >Memory-mapped file objects behave like both bytearray and like file objects. CPython fags done most of the work. He doesn't have to research much.
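(For anyone curious, a tiny sketch of what that stdlib module gives you; 'example.bin' is a made-up, non-empty file.)

import mmap

with open( 'example.bin', 'rb' ) as f:
    with mmap.mmap( f.fileno(), 0, access = mmap.ACCESS_READ ) as mm:
        header = mm[ : 4 ]   # slice it like a bytearray
        mm.seek( 4 )
        rest = mm.read( 16 ) # or read it like a file object
        print( header, rest )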
Thank you hydrus dev for making such a great product! I have a couple bugs to report and a few small features to suggest. Bugs: -The preview window doesn't stop if you open a file in a new page from the right click context menu. Make sure you have audio enabled for the preview window and click on a file with audio so that it starts playing in the preview window. Then right click and open in a new page. You'll still be able to hear the preview window from the new page. It won't stop until you go back to the original page. -Refreshing a page within a page of pages resets the selected page to the containing page of pages. Go to a page within a page of pages, and notice that you can use CTRL + TAB / CTRL + SHIFT + TAB to navigate within the page of pages. Press F5. Now when you do CTRL + TAB / CTRL + SHIFT + TAB, you'll navigate on the level of the page of pages instead of within it. Feature requests: -When comparing videos in the duplicate filter, showing if the FPS is different would be very useful. -When you set file relationships from the right click context menu or from shortcuts, it asks "Are you sure you want to do this for the selected files?". I'm always paranoid that I've accidentally selected extra files, so I'd really like if it said how many files you were changing. Like "Are you sure you want to do this for the 3 selected files?" -You can sort files by import time, but you can't sort deleted files by date deleted for some reason. -It would be nice to be able to make totally custom tag sort. For example, instead of "sort by tag + a-z + group namespace", you could do "sort by tag + custom + group namespace". Then you could have creator tags above series tags above character tags, or something like that. Thanks for all your hard work.
>>17384 mmap is just a system call; memory mapping is more than just getting page files. You need to be able to manipulate datatypes to read mapped data with constantly changing field lengths. Without a filesystem you end up with a meaningless byte stream with no datatype or size, while simultaneously being any one and multiple within the same stream. Just reading unicode characters from a bytestream is already aids. The only way to distinguish anything in this stream, let alone individual files, is by assumed endianness and dynamically typecasting your read offset during runtime, which is obviously not possible without a void type and pointer arithmetic unless you're reading a trivial binary file
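(To illustrate the 'assumed layout' point: even in python you end up hand-declaring field sizes and endianness the moment there is no filesystem or format header to tell you what the bytes mean. A toy record, nothing hydrus-specific:)

import struct

# pretend record layout: little-endian uint32 length, then that many utf-8 bytes
blob = struct.pack( '<I', 5 ) + 'hello'.encode( 'utf-8' )

( length, ) = struct.unpack_from( '<I', blob, 0 )
text = blob[ 4 : 4 + length ].decode( 'utf-8' )
print( length, text )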
>>17067 I just tested the Pixiv downloaders to check, and the tag downloader is correctly giving the new urls, but the artist downloader is still giving the old ones. I must not have got the update somehow, but I'm on Hydrus v472.
(18.73 KB 1025x116 123.PNG)

>>17387 go to network > downloader components > manage url class links, find 'pixiv artist gallery page api', and check that it points to the new one
>>17388 I checked and it wasn't. I switched it to the new url parser and tested it again, and now it seems to work correctly. Thanks!
(540.01 KB 2112x1188 Untitled0.png)

(483.27 KB 2113x1187 Untitled.png)

Minor issue with the new pages menu - if you select files (especially a single file), page names get longer, so the menu gets really huge. The first image is the normal menu, the second is when you've selected a file. It covers my entire screen.
For other audio fags, I made a soundgasm and psstaudio post page parser. No user page parsers, though. Maybe another day.
the pixel dupe dropdown menu in the duplicate filter menu page doesn't seem to work at all. I select "must not" and I still get them anyway, then when I select "must be" I get 0 potential pairs. It's somehow not seeing them right, but they show up when I actually start the filter.
When I run a search with a limit of 2000 images that includes one tag and excludes N tags, the more tags I exclude the longer the search takes. How does the time grow as a function of N, is it linear or more than linear or less than linear? If that is not enough information to answer the question, the tag I am including is a "page:*" tag (think pixiv imagesets), the tags I am excluding are artist tags so they can contain anywhere from 1 to 10000 images, and the database has about half a million images.
(15.11 KB 319x294 ClipboardImage.png)

(1.42 KB 174x28 ClipboardImage.png)

>>17382 I've noticed that with this version, when I have an excessive number of pages (>700, from the last time the total page count worked), the overall page count isn't reflected. Also, some pages are stuck on initialising upon starting hydrus. Right clicking -> duplicate page does fix the problem in the duplicated page, though
(9.90 KB 263x263 ClipboardImage.png)

>>17394 oh wait i restarted a couple of times and the page count is back
>>17363 Thanks for this by the way, I never heard of this before but just did a test and it was excellent. >>17368 Thank you. The first version of the duplicate system actually had something like this, it did gifs and videos, but I withdrew it 'temporarily' because back then the video thumb was the first frame and we had too many black frames and similar false positives. I believe fully monochrome images (e.g. a 1x1 pixel) are discarded from the duplicate system even now using code from that time. I have a 'big job' plan to reintroduce videos into the duplicate system using multiple carefully chosen keyframes. Me and a user have done a couple of tests and it works well; with luck it will match bigass gif clips of longer webms too. I don't know when this will all happen, but I know video dupes are a priority for many users. >>17369 Sorry, yeah, the state of per-domain proxying options is pretty bad right now. If you are very comfortable with python and run from source, you can try applying this patch to requests: <https://github.com/hydrusnetwork/hydrus/blob/master/static/build_files/docker/client/requests.patch> The guy who put the Docker build together has similar needs, and afaik that fixes it natively. I am one day going to write a 'domain manager' for the client that'll let you specify a proxy for any domain individually. I may be able to slip in some hardcoded handling for localhost and .onion before then. Is that the sort of thing you are pasting in? Are there any other domains you might want hardcoded handling for?
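(Not hydrus's own behaviour, and not necessarily what that patch does; just a sketch of the per-host proxy keys that vanilla 'requests' already supports, which is the general shape of the problem. The onion hostname and the localhost API address are example values, and the socks5h scheme needs the PySocks extra installed.)

import requests

proxies = {
    # route only this exact host through the local Tor SOCKS port
    'http://exampleonionsitexyz.onion' : 'socks5h://127.0.0.1:9050',
}

# anything not matching a key above, like a localhost IPFS-style API, goes direct
r = requests.get( 'http://127.0.0.1:5001/some/local/api/call', proxies = proxies )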
>>17370 >>17379 Yep, the db folder is completely portable. Everything is stored in there. >>17371 Hey, I think I helped you out on Endchan; the answer, for anyone else, is that unfortunately Mojave seems to be too old to run the newer Big-Sur-built app, so the solution here is to update your OS. Sadly the same may happen to Windows 7 this year as we migrate to Qt6 and Python 3.9 or 3.10. I'll put it off as long as is reasonable and try to have dual release builds while we test things out, but beyond a certain date, older OSes will have to run from source to run hydrus.
>>17372 >>17375 Forgive me if I am misunderstanding something about Linux, but would your %u in this case be the URL, if you were launching from terminal? So 'waterfox-g3 https://safebooru.donmai.us/posts/5124623'? Please try 'waterfox-g3 %path%'. That will tell hydrus to insert the URL in that position when it tries to launch from terminal. Let me know if that works or not. I will improve the text here to explain it better, and I will see if I can improve the error handling when %path% is missing, too. >>17373 Thank you, this is a good idea. I may tuck it behind help->advanced mode for your same concerns.
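(A hypothetical sketch of what a '%path%' launch template boils down to, assuming the browser command is on PATH; the actual hydrus launch code is more involved than this, and the URL is just the example from the post above.)

import shlex
import subprocess

launch_template = 'waterfox-g3 %path%'
url = 'https://safebooru.donmai.us/posts/5124623'

# substitute the URL into the template, then launch without blocking
cmd = [ token.replace( '%path%', url ) for token in shlex.split( launch_template ) ]
subprocess.Popen( cmd )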
>>17380 No problem, thanks for letting me know the error. Let's try this instead:

import os

with open( 'my_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    old_hashes = set( f.read().lower().splitlines() )

with open( 'current_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    current_hashes = { os.path.basename( l ).split('.')[0] for l in f.read().lower().splitlines() }

missing_hashes = sorted( old_hashes - current_hashes )

with open( 'missing_hashes.txt', 'w', encoding = 'utf-8' ) as f:
    f.write( os.linesep.join( missing_hashes ) )

Also try this, it may eliminate the double-newline problem you saw:

import os

with open( 'my_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    old_hashes = set( f.read().lower().splitlines() )

with open( 'current_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    current_hashes = { os.path.basename( l ).split('.')[0] for l in f.read().lower().splitlines() }

missing_hashes = sorted( old_hashes - current_hashes )

with open( 'missing_hashes.txt', 'w', encoding = 'utf-8' ) as f:
    f.write( '\n'.join( missing_hashes ) )

If it throws an error, try replacing that backslash-n with backslash-backslash-n. And for the HTA, that's a shame. I felt good when we generated it, but perhaps I messed some logic up. You said it generated faster than I expected. Let's do a little test:

.open hta.db
SELECT tag, HEX( hash ) FROM tags NATURAL JOIN mappings NATURAL JOIN hashes LIMIT 100;
.exit

That'll just be some example data. Does any of it look sensible to human eyes, or is it wrong? Does it not give any data back at all?
>>17381 I don't think I'm clever/knowledgeable enough about block devices to do that, I'm afraid. And I'm happy with the current situation. If you want obscurity, you can wrap hydrus in an encrypted partition, and the main problem being solved of 'filenames and folders suck for human searching when the number of files gets over 10,000' is dealt with by sha256 hashes. One thing I want to do though is split the current 256 separate 'fxx' file folders into a dynamic number of folders. Users with 5 million plus files are running into load lag, particularly when storing their files on something like a network share, just because 5M/256 is still 20k, and it takes an OS a little moment to go through a list that long. I want to be able to dynamically, in the background, migrate the client to 4096 or whatever number of folders as your client grows. And damn, thank you for the reminder about that bug. I don't remember perfectly, but I think I looked at that and could not figure it out. I am not really happy with my hover windows (they bork out macOS and Linux in weird ways), and have resolved to replace all my jank hovers (media hovers, autocomplete dropdown, and popup toaster) with the same tech I used for the new video scanbar popup. This will relieve a ton of mouse/key focus issues. >>17390 Holy fugg, great report, thank you. I will see what I can do to guarantee page names here (and on the new palette) stay short. >>17392 Sorry for the trouble. There are two systems here and they will disagree on some pixel hashes. I was working on it today with a user and I think I have a fix to queue up. Please hit database->file maintenance->manage and then hurry along any remaining 'calculate pixel hash' work you still have queued up, if any. That won't fix everything here, but it will get you in a good position.
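(An illustration of why going from 256 folders to 4096 is 'just' a prefix-length change on the sha256-based 'fxx' layout mentioned above; the helper name is made up and real migration logic would obviously be more careful than hashing the whole file in one read.)

import hashlib

def shard_folder( path, hex_prefix_len = 2 ):
    # 2 hex characters -> 256 possible 'fxx' folders, 3 -> 4096, and so on
    with open( path, 'rb' ) as f:
        digest = hashlib.sha256( f.read() ).hexdigest()
    return 'f' + digest[ : hex_prefix_len ]

print( shard_folder( 'some_image.jpg' ) )      # one of 256 buckets, e.g. 'f3a'
print( shard_folder( 'some_image.jpg', 3 ) )   # one of 4096 buckets instead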
>>17385 Thanks, I am glad you like it! I am afraid I cannot reproduce that first bug--when the new page appears, the old one shuts off, and if I return to the original page, it starts over from the beginning. I will explore how it could be happening and see if I can figure out a forced fix. Ah damn, I am afraid I cannot reproduce the second one either! If I hit F5, the navigation stays on the bottom-most layer. Navigation only goes up a layer for me if the bottom-most layer has only one tab in the row. Both of these issues deal with core Qt concepts. The first relates to a 'page changed' signal that seems not to be being emitted, and the second deals with page focus navigation. Are you running from source, by any chance? (e.g. the Arch AUR package) If you hit help->about, what version of Qt do you have? Are you on PySide2 or PyQt5? The normal Windows build should have 5.15.2.1 I think, and PySide2. Are you much different from that? Feature requests: -Yeah, fps would be a good comparison in the dupe filter. Videos don't work in the dupe filter yet, but this will one day >>17396 -Great, no problem -Ah yeah, I am in the midst of the delete file domain stuff, so I am still hacking out problems and dealing with a time rewrite overall. Being able to sort by deleted time would be useful. Hope to have archive time being stored soon too. -Yeah, I really want custom namespace order in taglists. I hate having to plough through a load of characters to get to creator tags. I have it on my todo. I'd also like [+] style collapsible tree view options for the taglists so you can just temporarily hide all unnamespaced tags or whatever, but that'll have to be later. The taglists are my custom control so I have to write all changes manually.
>>17393 Hmmm, I am not sure. It should be roughly linear, and actually faster, for every excluded tag you add. At that stage of search, it basically goes:
- get all file ids that have page namespace
- for every exclude tag:
 - for every file id we found so far
  - if it has that tag
   - exclude it from the list of current file ids
So basically O( N ), it'll be roughly "time_to_page_search + N * num_page_files". I can see how it might be slow overall, since it is basically eating away at the large venn diagram center 'circle' of page results with a lot of smaller artist circles. Most results will not have the artist, so on each loop the center circle isn't getting much smaller. Just because of search logic, negated searches of any sort are generally slower than positive. If you would like, please run help->debug->profiling->profile mode and pastebin or email me the log. There's a menu item on that same menu explaining more about how it works. ADVANCED: A cheat here that I have figured out recently is here: https://hydrusnetwork.github.io/hydrus/help/advanced_parents.html#parent_favourites If these artists are the same list every time, making a shorthand 'nsfw artists' parent tag can make handling them easier and also speed up your search. >>17394 >>17395 Thanks, I'll check this out. It doesn't update immediately since this can be an expensive count, I do some sleight of hand in the background and sometimes put the count off. I bet the large count is skipping some check and a resultant update somewhere. It is supposed to update any time a new page is opened or closed though, which is probably what your duplicate page call is triggering.
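(A toy model of the loop described above, just to show why the cost is roughly linear in the number of excluded tags; 'tag_index' is a stand-in for the real database indices, not how hydrus stores anything.)

def run_search( include_tag, exclude_tags, tag_index ):
    # tag_index: dict of tag -> set of file ids
    results = set( tag_index.get( include_tag, set() ) )
    for tag in exclude_tags:
        # one pass per excluded tag; each pass can only shrink the result set
        results -= tag_index.get( tag, set() )
    return results

tag_index = { 'page:1' : { 1, 2, 3, 4 }, 'creator:a' : { 2 }, 'creator:b' : { 3, 99 } }
print( run_search( 'page:1', [ 'creator:a', 'creator:b' ], tag_index ) )   # {1, 4}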
(11.92 KB 677x342 Untitled.png)

>>17380 Responding to myself, but maybe the hta.db thing not working might be due to the fact that it's the same hta.db from my replacement hard drive that spectacularly failed. I used testdisk to copy it (and the hash files) over from a disk image, instead of rereading the instructions on how to make them again. So maybe it got ruined by the hard drive, and if I just made it again, it would work. >>17399 Hi. I tried the new python things to output my missing hashes, which worked as intended. Thank you. It appears I am missing 946 media files I want to retry. But the hta.db thing still didn't do anything. The original command you gave me just instantly didn't do anything, but this new command paused for a second before ultimately also not doing anything, as far as I can tell. The first part of this reply (a follow-up to my own post) was something I started formatting but never submitted; it was saying that the hta.db file was copied over from the disk image of a dying hard drive using testdisk, because said hard drive started appearing as unformatted in windows. Maybe the hta.db file got ruined along the way. The hta.db file is 1,313,603,584 bytes in size according to windows. My client.caches.db is 3,894,136,832 bytes. My client.master.db is 1,909,477,376 bytes. My client.mappings.db is 506,277,888 bytes. I am missing my client.db due to data loss... I guess I can just output the hta.db file again from the above three source files using the steps you provided, but I don't have the motivation right now... I will try it later to see if it will make your new command work.
>>17400 >If you want obscurity, you can wrap hydrus in an encrypted partition This is exactly what I do. Nothing is outside that partition, not even the shortcut to launch the client. Also Hydrus is not allowed to connect to the network under any circumstance.
>>17402
>It should be roughly linear, and actually faster, for every excluded tag you add.
Thanks, that sounds good to me. Here's a profile log. What I did (in a standard "files" tab) was:
- Activate profiling
- Click an image
- Right click on the creator:whoever (1) tag
- Select "Exclude creator:whoever from current search"
- Wait for the tab to update
- Disable profiling
>>17404 completely pointless. nothing you open is encrypted, and malware uses payloads designed to infect the system, not the program; the program is just a vector to get an embedded payload to execute, with the payload itself being downloaded in infinite ways that have nothing to do with the program it's embedding itself in
>>17401 >Both of these issues deal with core Qt concepts. I'm on windows and the about menu says I have Qt 5.15.2 and PySide2 5.15.2.1. I tried deleting everything except the db folder and re-extracting hydrus 472 and it still happens. >Videos don't work in the dupe filter yet Well sure they don't go there automatically, but I just send them there manually.
Could you update (or add an option for) the related tags suggestions to also consider the relationships of tags when making suggestions? So for example:
>treat all tag siblings as being the same tag in the analysis, and only suggest the ideal sibling
>treat tags that are parents of a tag already in the file as if they're in the file too, not suggesting them either and also factoring them into the analysis
Stuff like that. It'd be very helpful, getting rid of a bunch of suggestion noise, and it will probably make the useful suggestions more accurate too by factoring in tags that should be considered but currently aren't.
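(Nothing like hydrus's actual related-tags code, just a toy illustration of the 'collapse siblings before counting co-occurrence' idea in the request above; the sibling map and tags are made up.)

ideal_sibling = { 'jacket_(clothing)' : 'jacket', 'coat' : 'jacket' }   # made-up sibling map

def collapse( tags ):
    return { ideal_sibling.get( t, t ) for t in tags }

def related_tag_counts( all_files_tags, target_tag ):
    # count how often other (collapsed) tags co-occur with the target tag
    counts = {}
    for tags in all_files_tags:
        tags = collapse( tags )
        if target_tag in tags:
            for t in tags - { target_tag }:
                counts[ t ] = counts.get( t, 0 ) + 1
    return counts

files = [ { 'jacket_(clothing)', 'smile' }, { 'coat', 'smile' }, { 'hat' } ]
print( related_tag_counts( files, 'jacket' ) )   # {'smile': 2}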
(247.64 KB 810x720 1585612125563.gif)

>>17383 >>17386 Many doors open when you are working with a pure stream of bytes and can decide on what exactly they mean without having to follow years of system layering cruft. >>17400 >I'm happy with the current situation. If you want... Rather than having a specific idea to address, I had asked because I was pontificating on a toy filesystem project at the time and had a thought about how, even though hydrus does its best to take advantage of not having to expose a folder structure, it is kinda sad that it still has to work around filesystems engineered primarily around the notion of directory trees containing files. Consider it one of those platonic ideal features. >This will relieve a ton of mouse/key focus issues. Here's to hoping that works out; I imagine the alternative would be annoying, since you would have to dive into some messy input gathering if it is a more fundamental problem. >>17404 > Hydrus is not allowed to connect to the network under any circumstance. That sure is gimping hydrus functionality pretty hard. Are you trying to protect yourself from some specific attack vector or is it just autism?
>>17409 >Many doors open when you are working with a pure stream of bytes and can decide on what exactly they mean without having to follow years of system layering cruft. It reminds me of the Assembler guys.
>>17409 >Are you trying to protect yourself from some specific attack vector or is it just autism? Pure autism. All files are imported and tagged by hand as I don't need bulk downloads and I enter around 50 files a week, so it can be done while keeping security tight.
How can I have a setup where an image URL (image as in ...something.com/something.png) is imported and either given a tag or associated with a URL class. Right now I have that "working" by having a URL class and a dummy parser that does nothing, but this causes the client to lag very noticeably proportional to the filesize (probably the parser trying to parse the image data?).
I had a great week working on a whole bunch of little fixes and improvements. The release should be as normal tomorrow.
I noticed that in Hydrus, entering a search with an unnamespaced tag will also bring up any files with a namespaced version of that tag too. How do you search specifically for the unnamespaced tag only and not also the namespaced ones? Also, is there a way to just turn that merging behavior off for actual results? I'm fine with the suggestion box suggesting the namespaced versions, but when I actually enter the search, I only want the unnamespaced one most of the time. I didn't even know that namespaced versions were being included until now.
https://www.youtube.com/watch?v=JYGb9HRCCyg windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v474/Hydrus.Network.474.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v474/Hydrus.Network.474.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v474/Hydrus.Network.474.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v474/Hydrus.Network.474.-.Linux.-.Executable.tar.gz I had a great week working on small fixes and improvements. There's nothing earth-shattering here to highlight, but just a mix of little work. full list - command palette: - the guy who put the command pallete together has fixed a 'show palette' bug some people encountered (issue #1060) - he also added mouse support! - he added support to show checkable menu items too, and I integrated this for the menubar (lightning bolt icon) items - I added a line to the default QSS that I think fixes the odd icon/text background colours some users saw in the command palette - . - misc: - file archive times are now recorded in the background. there's no load/search/sort yet, but this will be added in future - under 'manage shortcuts', there is a new checkbox to rename left- and right-click to primary- and secondary- in the shortcuts UI. if you have a flipped mouse or any other odd situation, try it out - if a file storage location does not have enough free disk space for a file, or if it just has <100MB generally, the client now throws up a popup to say what happened specifically with instructions to shut down and fix now and automatically pauses subscriptions, paged file import queues, and import folders. this test occurs before the attempt to copy the file into place. free space isn't actually checked over and over, it is cached for up to an hour depending on the last free space amount - this 'paused all regular imports' mode is also now fired any time any simple file-add action fails to copy. at this stage, we are talking 'device disconnected' and 'device failed' style errors, so might as well pause everything just to be careful - when the downloader hits a post url that spawns several subsidiary downloads (for instance on pixiv and artstation when you have a multi-file post), the status of that parent post is now 'completed', a new status to represent 'good, but not direct file'. new download queues will then present '3N' and '3 successful' summary counts that actually correspond to number of files rather than number of successful items - pages now give a concise 'summary name' of 'name - num_files - import progress' (it also eli...des for longer names) for menus and the new command palette, which unlike the older status-bar-based strings are always available and will stop clients with many pages becoming multi-wide-column-menu-hell - improved apng parsing. hydrus can now detect that pngs are actually apngs for (hopefully) all types of valid apng. it turns out some weird apngs have some additional header data, but I wrote a new chunk parser that should figure it all out - with luck, users who have window focus issues when closing a child window (e.g. close review services, the main gui does not get focus back), should now see that happen (issue #1063). this may need some more work, so let me know - the session weight count in the 'pages' menu now updates on any add thumbs, remove thumbs, or thumbnail panel swap. 
this _should_ be fast all the time, and buffer nicely if it is ever overwhelmed, but let me know if you have a madlad session and get significant new lag when you watch a downloader bring in new files - a user came up with a clever idea to efficiently target regenerations for the recent fix to pixel duplicate calculations for images with opaque alpha channels, so this week I will queue up some pixel hash regeneration. it does not fix every file with an opaque alpha channel, but it should help out. it also shouldn't take _all_ that long to clear this queue out. lastly, I renamed that file maintenance job from 'calculate file pixel hash' to 'regenerate pixel duplicate data' - the various duplicate system actions on thumbnails now specify the number of files being acted on in the yes/no dialog - fixed a bug when searching in complicated multi-file-service domains on a client that has been on for a long time (some data used here was being reset in regular db maintenance) - fixed a bug where for very unlucky byte sizes, for instance 188213746, the client was flipping between two different output values (e.g. 179MB/180MB) on subsequent calls (issue #1068) - after some user profiles and experimental testing, rebalanced some optimisations in sibling and parent calculation. fingers crossed, some larger sibling groups with worst-case numbers should calculate more efficiently - if sibling/parent calculation hits a heavy bump and takes a really long time to do a job during 'normal' time, the whole system now takes a much longer break (half an hour) before continuing - . - boring stuff: - the delete dialog has basic multiple local file service support ready for that expansion. it no longer refers to the old static 'my files' service identifier. I think it will need some user-friendly more polish once that feature is in - the 'migrate tags' dialog's file service filtering now supports n local file services, and 'all local files' - updated the build scripts to force windows server 2019 (and macos-11). github is rolling out windows 2022 as the new latest, and there's a couple of things to iron out first on our end. this is probably going to happen this year though, along with Qt6 and python 3.9, which will all mean end of life for windows 7 in our built hydrus release - removed the spare platform-specific github workflow scripts from the static folder--I wanted these as a sort of backup, but they never proved useful and needed to be synced on all changes next week More like this, I think, and some general code cleanup.
Is there a way to view files with no last viewed data? Last viewed before x days ago doesn't show them.
>>17414 Booru websites don't really have such a thing as searching by namespace... what tags do you have where the namespaced version and unnamespaced version are different? >>17416 You can sort by time: last viewed time + oldest first.
>>17417 Is there a way to exclude items that match the time last viewed system search, so you get files in a random order that haven't been viewed in the last 7 days?
>>17417 I have a few creator and character tags that are the same as some unnamespaced tag. I thought the whole point of namespaces was so that they could be called the same thing without there being a collision, like in programming. If that's not it then I'm confused about what namespaces are supposed to be used for. For example I have files with "windmill" "character:windmill" and "creator:windmill". In these 3 cases, "windmill" means something different, but searching for just "windmill" returns results for all 3. That doesn't really make sense. Why would I want all 3 when they're not really the same thing? Another one that's really annoying me is "rock" and "music:rock". Am I supposed to have it be "rock" and "music:rock (music)" instead to make sure they're seen differently? At that point, what's the point of the namespace?
>>17419 >Why would I want all 3 when they're not really the same thing? >At that point, what's the point of the namespace? I use namespaces A LOT, and for me they're ultra handy because they let me spot the differences on the fly when searching and narrow down what I want with ultra fine precision.
(117.26 KB 579x734 1.png)

(133.25 KB 871x736 2.png)

(185.97 KB 1084x677 3.png)

(198.74 KB 1337x670 4.png)

(464.87 KB 2732x2048 yvpi6Z.jpg)

>>17398 >Forgive me if I am misunderstanding something about Linux, but would your %u in this case be the URL, if you were launching from terminal? So 'waterfox-g3 https://safebooru.donmai.us/posts/5124623'? >Please try 'waterfox-g3 %path%' SUCCESS!!! Changing the "%u" or "@@u %u @@" (depending on which browser it is) to "%path%" did the trick. A clarification is needed: I used those "%u" and "@@u %u @@" because those are the commands used by the OS to launch the browsers. See pics 1 and 2. So in the "Options/external programs" dialog, I changed the commands as you instructed and now Hydrus launches all the links successfully. See pics 3 and 4. Also launching from the Terminal is successful. Thank you so much anon.
>>17403 Hmm, maybe the HTA got ruined, but normally when that happens, as you've seen, SQLite throws a load of errors when you try to read stuff from it. Looking again at my method in >>17253 , I am sorry to say that it looks 'good' to me. The INSERT line from current_mappings_x should give a whole bunch of stuff no matter what. The subsequent hash and tag lines that populate the definitions also look correct. Try this:

.open hta.db
SELECT COUNT ( * ) FROM mappings;
SELECT COUNT ( * ) FROM tags;
SELECT COUNT ( * ) FROM hashes;
.exit

If each table is 0, then that explains why the operation was so fast--there was nothing to copy. This could be because the table was emptied somehow in damage, or it could just mean the 'x' in 'current_mappings_x' was chosen wrong. Maybe double-check with your source client.mappings.db file that you had x chosen correctly as with the method in >>17264 . If mappings has count>0, but tags or hashes have count 0, then I messed up something in >>17253 EDIT: Ah shit, if the HTA is 1.3GB, it definitely has some stuff in it. I bet I messed something up, so let me know the COUNT numbers you get as above. >>17405 Thanks. Your 15 second killer seems to be at line 6074 of that, if you are interested. The actual (7.8 million rows of) database queries are only 1.3s; about 11.5s is some pre-run optimisation, basically prepping the large venn diagram circle over and over. I will examine this more and do some tests and see if I can re-use the optimisation when we have a bunch of tags to search in a row like this.
>>17407 Damn, that's annoying. I figured you were running from source or something. I am not sure what is going on. Basically when you say 'create a new page', the new page triggers a 'page has changed' event, which tells the original page to blank out the current media, which obviously should silence video. So, why yours is not doing that core Qt thing, I am not sure. Similar with the page key navigation thing. That tab navigation is handled by Qt, I don't think I have any code that interferes with it, so I am not sure why my client is handling it different to yours. This is a long shot, but is there any chance you have a custom Window Manager in your client? Something that changes the appearance of your UI and maybe adds some other hooks, like extra buttons on a window's title bar or remembering previous window position? Maaaaaybe something like that is interfering with core window signals in some way. Otherwise, I just have some odd bug in my code that's firing for you but I can't replicate. I'll have another look. Can you do a couple more tests for me? Saying 'open in new page' when a vid is in preview doesn't shut it off, but how about other page transitions? If you just switch to another page, that works ok? If you have a vid in preview but hit F9->files->my files, does that work ok, or does the vid keep going? If this only ever happens when you right-click->open in a new page, then it must be something specifically with that, rather than spawning new pages in general.
>>17408 Thanks, this is a great idea. I haven't touched related tags in years, but funnily enough I am talking to some other users about improving its stats today as well. I'll see what I can do now I have some nicer sibling and parent lookup calls. >>17409 Yeah, on this file system stuff, I think future filesystems may benefit from purely hash-based lookup for certain files. System dlls, sort of thing, and then potentially stuff like our read-only media. Or have filenames, but back everything that hasn't been modified in n months with hash metadata too. So often when I am copying a bunch of big files from one folder to another across my network, I get a filename conflict and it'd be great if it had already pre-computed the hash of those files and could provide a better 'merge?' confirmation dialog rather than worrying me about overwriting. But I'm probably talking way out of my expertise. >>17412 I am not 100% sure what you want here, but how about a "File URL" URL Class? I'm assuming your URLs here are all on the same domain and in the same sort of format. File URLs don't cause any parsing, they tell the downloader that this file is an actual file. There should be some examples in 'manage url classes'. You can't auto-assign tags to url classes yet, but you could batch it manually with a search that searched for (has the url class, -tag) and then you ctrl+a, f3, enter the tag, hit enter.
>>17414 You can't turn that merging off yet, but I hope to add a cog icon to the search dropdown, something like that, to offer more options for specific search settings. There will also be options for turning off the weird-character removal that basically vanishes/merges parentheses and brackets and stuff in tag search. The bullshit way of forcing this search is probably: samus aran -character:samus aran -series:samus aran, whatever That'll work, but it'd be annoying to put in over and over, so may only be appropriate for a saved favourite search on a specific tag. >>17416 There isn't an elegant way. For a new user, I'd say 'search for files with no viewtime' (system:file viewing statistics), but I expect you are looking for files that are in the gap since I added 'last view time'. I may add this search just for legacy users. >>17418 For files that do have a last viewed time, just go 'system:last viewed>7 days ago' to find things you haven't seen recently. But of course that doesn't include anything that has yet to be seen. I was kind of thinking of adding a legacy fill-in action you could trigger like 'for every file in the database, if it doesn't have a view time, set that view time to ten minutes after it was imported'. Just to give you good dummy data. I could do the same with the upcoming 'archive time' data. What are your feelings on that? >>17417 I thought there would be more when I started hydrus, but they are mostly rare. 'harry potter' and 'batman' are good examples of a similar problem, though, where in hydrus they can be character: or series: For unnamespaced, we had a nightmare some years ago with 'character:shadow' and 'shadow' and 'character:shadow the hedgehog' sibling fucking everything up for a bit. 'worm' and 'series:worm' (and even 'series:worms', now I think of it, for the vidya) would be another example, and I guess on a booru they'd go 'worm_(series)'. >>17421 Great, I am glad we got you sorted. I am sorry about the confusion. I'll fix up that text explanation.
>>17419 >That doesn't really make sense. Why would I want all 3 when they're not really the same thing? most boorus don't have namespaces when searching, but files do have namespaces themselves. it's helpful to be able to differentiate between character and series and general tags. for example, safebooru.org. other boorus don't have namespaces at all. for example, boorus on the booru.org domain. so you might have a file with the tag "samus aran" and another file with the tag "character:samus aran". obviously, in this case, these ARE the same thing. that's why they're merged. >Am I supposed to have it be "rock" and "music:rock (music)" instead to make sure they're seen differently? yes? that's what every booru on the internet has been doing for years. why is this new to you?
(11.04 KB 677x342 Untitled.png)

>>17422 First anon you responded to here. This is what I see, using the same hta.db file. I haven't remade the hta.db yet, to check if that would make your second command from earlier return anything. To clarify, of the two commands from before, the first command was done immediately after producing the original hta.db file. The second command was done after using testdisk to copy that hta.db file from the disk image of an HDD that started appearing as unformatted in windows on reboot. So it's only the second (and third) command that could possibly benefit from remaking the hta.db, unless the hta.db was just produced broken at the start because the HDD it was on was failing the entire time.
>>17423 >If you just switch to another page, that works ok? Yes, the video stops. >If you have a vid in preview but hit F9->files->my files, does that work ok, or does the vid keep going? It keeps going. >This is a long shot, but is there any chance you have a custom Window Manager in your client? I don't know what that is so probably not. But I do have this for Windows Explorer. http://qttabbar.wikidot.com/ Based on the name, I guess it also uses Qt. Could that have anything to do with it?
(1.50 MB 800x450 1 - laughing.mp4)

>>17426 >why is this new to you? Cos he's a newfag.
Is there a way to tell the DB to delete all best/worst relationships between all non-deleted files?
>>17186 For the record, I was having the same issue with that particular downloader, and what happened is that it needed me to actually also import e-hentai cookies into Hydrus through HC, not just exhentai ones
>>17419 >For example I have files with "windmill" "character:windmill" and "creator:windmill". In these 3 cases, "windmill" means something different, but searching for just "windmill" returns results for all 3. That doesn't really make sense. Why would I want all 3 when they're not really the same thing? That's precisely the beauty of the feature: before you hit enter and load the thumbnails, it lets you either load all files with "windmill" or select from the list in the "Search" panel exactly the one you want.
Hey, I finally moved to new programming software this week (WingIDE to PyCharm). It was a little jarring dealing with new UI and shortcuts and a billion new settings, so I mostly stuck to simple code cleanup to get to grips with it. I don't have much exciting in my changelog beyond a tweak to system:hash, so rather than put out a thin build tomorrow, I will do some more work instead and move the release on a week. If you haven't seen it, though, please check out the new help a user put together: https://hydrusnetwork.github.io/hydrus/ It has nice features like search and tables of contents and will be easier to edit in future. 475 will have it too for the local copy and should be out on the 2nd of March.
(55.67 KB 360x396 be6890d.jpg)

>>17433 >If you haven't seen it, though, please check out the new help a user put together: https://hydrusnetwork.github.io/hydrus/ Checked. An impressive improvement with a lot of details better explained.
(1.37 KB 129x131 ClipboardImage.png)

Is there any way to rearrange the ratings position on the top right? I would like to order them in a certain way
>>17433 Thanks man! Your program works GREAT! I've downloaded over a million pics with it now.
>>17433 The new help works without JavaScript so it's fine by me, but I personally liked the style of the old pages.
>>17436 Make sure to create backups. You only need to back up the four .db files in your Hydrus Network\db directory to save everything except the media files themselves.
So the guide says there's no bandwidth limit for downloading the PTR but there is for me, is that just because I have a new account or something? Is there a setting I'm missing? Or is that section of the guide just outdated?
>>17439 *client, not account, which now that I type that out I don't know how that'd even work with a shared account I guess.
>>17439 >>17440 Not aware of any PTR limits, but the Hydrus client has a myriad of bandwidth rules that can be configured. You probably have hit a limit configured in Hydrus. I believe there is a 512MB/day default, but I could be misremembering. Go to Network --> Data --> Review bandwidth use and edit rules. Highlight "hydrus service: public tag repository" and click "edit default bandwidth rules". A new window with usage details will open. Click "set specific rules". Here you can check if Hydrus has a bandwidth limit set for the PTR and add/edit/delete any rules to your liking.
hey, any chance we can add a simple area on the 'how boned am i' or about window, or a new menu on that tab dropdown, that just lists all the hydrus install locations (install/root dir, thumbnails, db, files)? This would be very nice for people with multiple locations. Also I noticed the file "views" counter (the one that goes up when you view a file) doesn't seem to work over the client api (a file viewed on, say, Lolisnatcher Droid doesn't make the counter go up).
>>17433 why not just work in Vim or something to keep a consistent environment?
>>17368 where can I scrape this artist's content? booru? what is the name of the artist?
>>17445 Bless, anon. On the off chance someone else has the same question, just for clarification it's double-click PTR > edit rules. I couldn't find "set specific rules" in the edit defaults window unless I'm retarded, but what I tried above worked. Still though, I never would have gotten close without you, thanks again.
Maybe a dumb question: Is there an easy way to merge dbs? I read what I could, but I only found: reimport the client_files folder (without tags), or how to split db locations. Nothing really hinted at a way to import a db. Reason: I found my 3 older usb-hdds with very different sets of files (only ~10% duplicates)
>>17444 >what is the name of the artist? it's right there in the image. lazyprocrastinator
(97.94 KB 928x481 1.png)

(29.10 KB 681x782 2.PNG)

>>17445 TIL you can just double click those. Glad you got it sorted though! I should have included pictures, but I was being lazy. The "set specific rules" button is at the bottom right of the window, just above the close button.
>>17427 Ok, thank you, we have good data but I messed up the hashes retrieval--I think I missed 'cma.' on the front of hashes table--so let's try again:

.open my_hta.db
CREATE TABLE hash_type ( hash_type INTEGER );
CREATE TABLE hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES );
CREATE UNIQUE INDEX hashes_hash_index ON hashes ( hash );
CREATE TABLE mappings ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) );
CREATE INDEX mappings_hash_id_index ON mappings ( hash_id );
CREATE TABLE namespaces ( namespace TEXT );
CREATE TABLE tags ( tag_id INTEGER PRIMARY KEY, tag TEXT );
CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );
INSERT INTO hash_type ( hash_type ) VALUES ( 2 );
ATTACH "client.mappings.db" as cm;
INSERT INTO main.mappings SELECT hash_id, tag_id FROM current_mappings_x;
ATTACH "client.master.db" as cma;
INSERT INTO main.hashes SELECT DISTINCT hash_id, hash FROM current_mappings_x CROSS JOIN cma.hashes USING ( hash_id );
INSERT INTO main.tags SELECT DISTINCT tag_id, namespace || ":" || subtag FROM current_mappings_x NATURAL JOIN cma.tags NATURAL JOIN cma.namespaces NATURAL JOIN subtags WHERE namespace != "";
INSERT INTO main.tags SELECT DISTINCT tag_id, subtag FROM current_mappings_x NATURAL JOIN cma.tags NATURAL JOIN cma.namespaces NATURAL JOIN subtags WHERE namespace == "";
INSERT INTO main.namespaces SELECT namespace FROM cma.namespaces;
.exit

If that doesn't work, try this:

.open my_hta.db
CREATE TABLE hash_type ( hash_type INTEGER );
CREATE TABLE hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES );
CREATE UNIQUE INDEX hashes_hash_index ON hashes ( hash );
CREATE TABLE mappings ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) );
CREATE INDEX mappings_hash_id_index ON mappings ( hash_id );
CREATE TABLE namespaces ( namespace TEXT );
CREATE TABLE tags ( tag_id INTEGER PRIMARY KEY, tag TEXT );
CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );
INSERT INTO hash_type ( hash_type ) VALUES ( 2 );
ATTACH "client.mappings.db" as cm;
INSERT INTO main.mappings SELECT hash_id, tag_id FROM current_mappings_x;
ATTACH "client.master.db" as cma;
ATTACH "client.caches.db" as cca;
INSERT INTO main.hashes SELECT DISTINCT hash_id, hash FROM current_mappings_x CROSS JOIN cca.local_hashes_cache USING ( hash_id );
INSERT INTO main.tags SELECT DISTINCT tag_id, namespace || ":" || subtag FROM current_mappings_x NATURAL JOIN cma.tags NATURAL JOIN cma.namespaces NATURAL JOIN subtags WHERE namespace != "";
INSERT INTO main.tags SELECT DISTINCT tag_id, subtag FROM current_mappings_x NATURAL JOIN cma.tags NATURAL JOIN cma.namespaces NATURAL JOIN subtags WHERE namespace == "";
INSERT INTO main.namespaces SELECT namespace FROM cma.namespaces;
.exit

Try doing the 'count() hashes' line on the new HTA. If it gives a count > 0, we are good!
>>17428 Ah wow, I wonder if that QTTabBar is doing it. I'm not an expert in how these sorts of shell extensions work, but maybe there is a dll in memory or an OS-level event hook somehow intercepting a C++ level signal. I know some other system-level Qt stuff can change what hydrus's Qt can see. Is it too inconvenient to ask you to try uninstalling it, rebooting your computer, and testing hydrus again? You can reinstall it afterwards, I just want to know better what is causing this.

>>17430 Yeah, I think try going 'system:file relationships>0 duplicates', then ctrl+a the thumbs, then right-click->manage->file relationships->set relationship->remove/reset for all selected->dissolve these files' duplicate groups completely. This is a potentially big operation and I don't know how perfectly it will work, so I recommend doing a small test first and possibly making a backup before you try it.

>>17435 I think they are alphabetical by service name atm. I agree that more layout options here would be better.

>>17436 Thanks, I am really glad you like it. I still don't know what a million files really means, but I think it is cool. I will reiterate what >>17438 says--make sure you have a good regular backup. I've been helping two guys today recover busted databases due to hard drive failure and no recent backup, and it really sucks when it happens.
>>17439 >>17445 The initial limit is a forced throttle from me to slow down your initial sync. Trying to sync everything really fast works, but it can lead to some pretty hellish maintenance bumps (think the client freezing for ten minutes a few times at the end), which I still need to iron out in code. You are good to let it rip, but just bear in mind the processing time in a week or so, when siblings need to crunch, may be heavy, so leave your client on in the background so it can work in its own time.

>>17446 Not yet, but I am currently working on a 'big job' to allow multiple local file services. Several users will want to merge their multiple databases into one when I am done with this work, at which point I will write a method inside the program to import another database (or similar).
>>17449 Thank you, the first one worked, after going back and relearning what to replace "current_mappings_x" with. But since I am missing 946 media files, do you know of a way to automatically list out the tags of each hash, like it's shown in the window? You said before that I could with some scripting, but I don't know how to script, so if you could spoonfeed me one, I would be grateful. For now I'll just try to find out what I can retry as is, though.
>>17452 Fantastic. So we had, before, this:

.open hta.db
SELECT tag FROM tags NATURAL JOIN mappings NATURAL JOIN hashes WHERE hash = X'abcdef';
.exit

And we want to do that for every missing hash. Let's see if we can do it in python again. So, open a folder with the hta and missing_hashes.txt, then save the contents of this block into a file called 'export.py' in it too. It should make an executable script that you can double-click since it is .py, but if not, open a terminal in that folder and type 'python export.py'. We are making a script and running it rather than copy/pasting into the python interpreter because there are indented spaces here that will be awkward to enter manually. You can try to paste it into 'python', but I think you'd want to do it all in one go if so.

import sqlite3
import os

with open( 'missing_hashes.txt', 'r', encoding = 'utf-8' ) as f:
    
    missing_hashes = set( f.read().splitlines() )

db = sqlite3.connect( 'hta.db', isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )

c = db.cursor()

with open( 'hashes_and_tags.txt', 'w', encoding = 'utf-8' ) as f:
    
    for hash in missing_hashes:
        
        tags = sorted( ( tag for ( tag, ) in c.execute( 'SELECT tag FROM tags NATURAL JOIN mappings NATURAL JOIN hashes WHERE hash = ?;', ( sqlite3.Binary( bytes.fromhex( hash ) ), ) ) ) )
        
        f.write( 'hash: ' + hash )
        f.write( os.linesep )
        f.write( ', '.join( tags ) )
        f.write( os.linesep * 2 )

And fingers crossed it will make a new txt file with all the hashes listed with the old tags they had. Feel free to edit the script and play around with different formattings. And as usual, if it breaks, let me know.
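For reference, the output should look roughly like this for each missing file, with the full hex hash on one line and its old tags on the next (these example values are made up):

hash: 0a1b2c3d4e5f...
creator:some artist, character:some character, smile

If the tag line comes out empty for a hash, it just means that hash had no mappings in the HTA.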
>>17453 It worked perfectly, thank you. Thank you again. The order isn't alphabetical according to hash anymore, and I can't discern any pattern to it. But it wasn't like hashes in alphabetical order had any pattern to them, either.

I was in the process of formatting another post complaining about this having happened to me in the first place. I was going to go over the same things I had before. It wouldn't have added any new information beyond the fact that I just realized the damage is extremely clustered, which is far worse than what I originally thought: that I had lost 946 random media files across 4.7 million. Instead, if I can't retry one artist, a huge chunk of their gallery is lost to me forever. At the time I had only manually tried the first 10 in alphabetical order, and was shocked to find that the 7th and 8th were by the same artist. Then I noticed the 2nd and 10th were by the same artist. Then the 11th shared the same artist as the first pair. In reformatting because of your reply, I won't go over the same tired details again.

But, in short, to anyone reading this: you only need to back up your four ".db" files in "Hydrus Network\db" to back up your hydrus environment as you know it. Obviously, ideally you would back up all your media too. But this ensures you're not left with an unsorted hoard that could never be trusted to be a complete archive ever again.

For me, I always knew backing up was an option. But I was abused into being ashamed of even the notion of wanting to protect myself, such that I could prevent anything bad from ever happening to me. So I couldn't cope with even educating myself on how to back up. Then, when my data got corrupted, I couldn't cope with educating myself on how to properly compensate for it in the moment, so in my tech-ignorant panic, I rendered my corrupted data irrecoverable forever. Again, I'm not going over everything I was originally going to bitch and moan about. But in short, if your data is compromised, image your hard drive as the first step, so you have infinite retries to recover the compromised data.

My mistake was decrypting my HDD first, because when I had last tried to clone it when moving to a bigger drive, the software I used changed my veracrypt (encryption software) password, so I knew I couldn't trust it to be an accurate copy. Since before I had decrypted the entire thing, cloned it, then re-encrypted both, I tried that when my data got corrupted, which rendered it irrecoverable forever. I also learned you could use "testdisk" to isolate corrupted files to copy them over, which I assume fixes them. So in the event you're scared to shut down your PC, or you still don't have access to fully image your entire drive, you can at least try copying over your four .db files to save them, in case any get corrupted.

My original draft of this was far more emotionally invested, but, whatever. I can see that of my 946 missing media, 72 of them are from a twitter that the artist wiped themselves and is still using. I even recognize the artist, and have been manually saving what they post since I became unable to boot my hydrus. So this is likely a pattern I will see a lot when retrying my media.

Don't let this happen to you. I know people say shit all the time like "you're worth it" and whatever. But, for me, it went in one ear and out the other. To me, no one ever showed me empathy. No one ever gave a fuck about me. All this type of shit. Creating a backup before anything bad happened to my data was all I ever wanted, but I knew I could never reach it.
Again, I couldn't even cope with educating myself on how to do it. I wish someone had held my hand in educating me on how to back up. Telling me "read this to learn how to backup, to protect yourself before anything bad happens to you", when I knew I could never back up all my media, just made me feel alone. I couldn't cope with reading it on my own. I had no one to talk to about my fear of data loss, and why I couldn't just prevent it, or anything. Life is really unfair. Some people hurt others and could never care about the damage they cause.

I guess the new point I'm making is, I can't just sit here and think saying "you deserve backups" will get through to anyone. It wouldn't have gotten through to me. Even if someone were to ask why I had no backups, I might not have responded "because I could never afford one", and instead might have only said "because I can't afford it", which doesn't convey my lack of hope of ever getting one. Then with the latter reply, I would only be given an out of context backup link I know I would never read. I guess it's impossible. But some of the people you talk to beyond the screen are really damaged, to the point that someone who values archival can be unable to cope with educating themselves on how to back up their data.

I guess my main point is that an out of context "backup" link was impossible for me to read. I wish someone had copy-pasted a relevant excerpt of it for me, or paraphrased the method themselves. But I guess it's unrealistic to consciously consider the hurt that people don't express. In the end my data would've been recoverable anyway, had I had someone to talk to in the moment, or had I been educated beforehand on how to operate in the event of data being compromised, so I could just mindlessly go through the already-learned motions rather than panic in tech-ignorance. But all the data loss horror stories I ever heard always ended on the lesson of "backup", which was a reality I could never have reached. I had never read a data loss horror story in the wild about what to do when it happens to you. I feel like I tried my best, but still compromised the one thing that was most important to me. I feel it's so unfair. But time continues passing, I guess.
Is there a way to force Hydrus to always, by default, launch a file in an external program when I double left-click it? I still prefer using Picasa photo viewer because the mouse-wheel zoom works exactly the way I like it, and I'd prefer not to have to "right-click->open->in external program" every single file.
>>17455 go to options: media, select "image", click "edit" and set the media viewer action to "do not show in the media viewer. on thumbnail activation, open externally". if picasa is your default image viewer for your os, you're good. if not, go to options: external programs and set the launch path to call picasa for image filetypes.
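for example, the launch path there is just a normal command line with %path% standing in for the file, so something like this (made-up install location, double-check where your picasa actually lives):

"C:\Program Files (x86)\Google\Picasa3\PicasaPhotoViewer.exe" "%path%"

the quotes around %path% matter if your file paths ever have spaces in them.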
>>17450 >Yeah, I think try going 'system:file relationships>0 duplicates', then ctrl+a the thumbs, then right-click->manage->file relationships->set relationship->remove/reset for all selected->dissolve these files' duplicate groups completely. Wouldn't that also remove best/worst with deleted files?
Does anyone exclusively use some sort of front-end and keep hydrus in a docker/remote server? I would like to do this ideally because the resources that the hydrus backend uses (even while idling) and the amount of continuous gallery watching I have running really make more sense as a 24/7 service on another box. The problem is that as far as I can tell, the front-ends available are very basic or designed for casual phone browsing instead of being serious clients. The best one I tried was hydrus-web, which doesn't even have sorting functionality. Am I missing something? It feels like hydrus is entirely designed to run as a database to be accessed via API calls, and yet there's not much available to consume those calls?
>>17456 Thanks so much! Though I actually have a new problem. I immediately realized that, because Picasa doesn't understand hydrus' file system, the left/right arrow keys don't work, so using Picasa requires closing and re-opening every image. The default Hydrus viewer is not that bad, so instead I am trying to figure out how to change the keybinds, to swap ctrl+mousewheel for zoom to just mousewheel for zoom. I can't find any keybind control so far, though I'm starting to import and tag more files and am liking it more each day.
>>17459 You'll want to take a look at file > shortcuts > media viewers - all, and media viewers - 'normal' browser.
Newish user here. Quick question. Is the unnamespaced-ness of a tag supposed to, itself, mean something (like how in most online boorus general tags mean that it's referring to what's happening in the image or video) or are they supposed to be completely generic and not mean anything beyond the literal text? The help website only briefly mentions namespaces, and it doesn't mention how you're supposed to use them in relation to unnamespaced tags, so I'm a little confused about how to use them or what kinds of namespaces to make. I've never heard of tag namespaces before this (although visually they remind me of properties) so I don't really know if there's a standard practice here.
>>17461 Also, is there a way to give favorited tags a special highlight or underline or something, to make them easier to find when you're looking at the tags of files?
I've been backing up my database from a Windows system to a Linux server from time to time, and I've been needing to use the GUI tool to manually change the paths to the client files and thumbnails each time I do the backup. Is it possible to make a script that could do this automatically with sqlite3?
Dev, did you ever decide what to do about the user agent for Sankaku Complex being set as Firefox 56? I've had it set to 93 for months now without issue, if that helps.
>>17464 Loli poster complaining about technological issues
I had a good couple of weeks. I updated my behind-the-scenes environment and cleared out a wide variety of misc work--bug fixes and little improvements. The built release will also get a local copy of the nice new help a user put together. The release should be as normal tomorrow.

>>17464 Thanks for reminding me about this. I've rolled in an update for this header for tomorrow. If you still have the old '56' one, it'll replace it with the '97' string. This won't affect you, but it will affect new users and those who have not edited it.
>>17461 namespaces aren't real. for example, you can't search for "character:samus_aran" on a booru. you just search "samus_aran", and someone has decided that the "samus_aran" tag is a character. so it shows up separately from the other tags. they're really just to keep character and series and creator tags separate from other tags so that you can easily identify them from general tags. that's just how boorus work.

>Is the unnamespaced-ness of a tag supposed to, itself, mean something
in general, it's supposed to mean it's a general tag. but there are cases where a tag is unnamespaced because it's a character/artist/etc that's too unknown to be marked as being a character/artist/etc, or unnamespaced because the website hydrus got it from doesn't make these distinctions in the first place. so no, not really.

>I'm a little confused about how to use them or what kinds of namespaces to make.
just use the namespaces that the boorus already use. import a file from gelbooru and it'll have series, character, creator, and meta tags. i don't think there's a need to create your own namespaces in most cases. i've made a couple namespaces for meta reasons, like a "pixiv id" namespace so that i can easily export files with their pixiv id as the filename, but that's it.

in actuality, all i've said here is my own bias. you can make your own namespaces and organize your files in whatever way you want. you can make namespaces mean something! but that just makes it harder to interface with the stuff that's already out there, like all the booru downloaders and the PTR. hydrus is great for automating things. if you're going to make your own booru system from the ground up, you're going to have a hard time automating things. which defeats the entire point. it's supposed to make dealing with your files easier, not harder.

>>17462 i don't think so but that would be a sweet feature. if you really want this, you could make a namespace called "favorites" and give it its own color, then go to tag siblings and make the namespaced one the ideal.
>>17456 Thanks, I saw "shortcuts" but didn't understand that they were keybinds. I'm too used to programs calling that menu "keybinds" or "controls." By the way, is it intentional that the mouse cursor doesn't actually move when panning an image? This seems like a bug. For example:
>Left-click and hold in bottom-right corner of screen
>Drag mouse/pan image to top-left
>Let go of left-click
>Mouse is still in bottom-right corner
Am I the only one who finds this odd? I wonder if this would be better if the mouse re-appeared at its new position when you stop panning.
Dumb noob questions:
1) How portable is Hydrus? What are the options for having multiple instances on one computer (e.g. one for recipes and one for reaction images)?
2) Are there any plans for a generic downloader for *.booru.org, or for detecting a certain widely-used booru software for a given URL?
>>17469 You can have one client point to multiple databases by using the --db_dir="path/to/hydrus_db" switch when running the client.
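For example, you could make two shortcuts or launch scripts along these lines (the paths are made up, and the executable name may differ depending on how you got hydrus--the extract-only Windows build calls it client.exe):

client.exe --db_dir="D:\hydrus\recipes_db"
client.exe --db_dir="D:\hydrus\reactions_db"

Each instance should keep its own completely separate database, options, and file storage inside the directory you point it at.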
>>17463 Looks like I got it. This command did the trick:

sqlite3 client.db "UPDATE client_files_locations SET location = REPLACE(location,'/old/client_files/path','/new/client_files/path'); UPDATE client_files_locations SET location = REPLACE(location,'/old/tb/path','/new/tb/path');"

The db booted fine and found all the files. I also tested the changes with "SELECT * from client_files_locations;" before and after the update commands. Very simple, but I had never played with sqlite before.
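And to answer my own question about scripting it: something like this little python sketch should do the same thing automatically after each copy (same placeholder paths as above, and I haven't battle-tested it):

import sqlite3

# placeholder paths - swap in your own old/new locations
replacements = [
    ( '/old/client_files/path', '/new/client_files/path' ),
    ( '/old/tb/path', '/new/tb/path' )
]

db = sqlite3.connect( 'client.db' )
c = db.cursor()

for ( old, new ) in replacements:
    
    # same REPLACE trick as the one-liner above
    c.execute( 'UPDATE client_files_locations SET location = REPLACE( location, ?, ? );', ( old, new ) )

db.commit()

# quick sanity check of the resulting locations
for row in c.execute( 'SELECT * FROM client_files_locations;' ):
    
    print( row )

db.close()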
>>17469
>1) How portable is Hydrus? What are the options for having multiple instances on one computer (e.g. one for recipes and one for reaction images)?
https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#installing
>By default, hydrus stores all its data—options, files, subscriptions, everything—entirely inside its own directory. You can extract it to a usb stick, move it from one place to another, have multiple installs for multiple purposes, wrap it all up inside a truecrypt volume, whatever you like. The .exe installer writes some unavoidable uninstall registry stuff to Windows, but the 'installed' client itself will run fine if you manually move it.
For multiple databases, you could extract the .zip to multiple places or, like >>17470 said, just point the client to different dbs with different launch arguments. You could take a look at https://hydrusnetwork.github.io/hydrus/database_migration.html and https://hydrusnetwork.github.io/hydrus/launch_arguments.html.

>2) Are there any plans for a generic downloader for *.booru.org, or for detecting a certain widely-used booru software for a given URL?
https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Booru.org
Pretty sure you just need the URL classes "booru.org_file_url", "booru.org_post_page", and "booru.org_search_gallery". Hydrus should come with the parsers and should be able to link them together automatically. GUGs for *.booru.org websites will let you do a tag search from within hydrus. Unlike the generic ones above, you need one for each website. If you want GUGs for specific websites you can go into the folder there. If you want to know how it works:
https://hydrusnetwork.github.io/hydrus/downloader_url_classes.html
https://hydrusnetwork.github.io/hydrus/downloader_gugs.html
https://hydrusnetwork.github.io/hydrus/downloader_parsers.html
https://www.youtube.com/watch?v=rNFLCB_T2hA

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v475/Hydrus.Network.475.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v475/Hydrus.Network.475.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v475/Hydrus.Network.475.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v475/Hydrus.Network.475.-.Linux.-.Executable.tar.gz

I had a good couple of weeks. There's a long changelog of small items and some new help.

new help

A user has converted all my old handcoded help html to template markup and now the help is automatically built with MkDocs. It now looks nicer for more situations, has automatically generated tables of contents, a darkmode, and even in-built search. It has been live for a week here: https://hydrusnetwork.github.io/hydrus/ and with v475 it is rolled into the builds too, so you'll have it on your hard disk. Users who run from source will need to build it themselves if they want the local copy, but it is real easy, just one line you can fold into an update script: https://hydrusnetwork.github.io/hydrus/about_docs.html

I am happy with how this turned out and am very thankful to the user who put the work in to make the migration. It all converted to this new format without any big problems.

misc highlights

I queue up some more files for metadata rescans, hopefully fixing some more apngs and figuring out some audio-only mp4s. System:hash now supports 'is not', so if you want to paste a ton of hashes you can now say 'but not any of these specific files'. Searches with lots of -negated tags should be a good bit faster now. I fixed a bug that was stopping duplicate pages from saving changes to their search.

pycharm

I moved to a new IDE (the software that you use to program with) this week, moving from a jank old WingIDE environment to new PyCharm. It took a bit of time to get familiar with it, so the first week was mostly me doing simple code cleanup to learn the shortcuts and so on, but I am overall very happy with it. It is very powerful and customisable, and it can handle a variety of new tech better. It might be another few weeks before I am at 100% productivity with it, but I am now more ready to move to python 3.9 and Qt 6 later in the year.

full list

- new help docs:
- the hydrus help is now built from markup using MkDocs! it now looks nicer and has search and automatically generated tables of contents and so on. please check it out. a user converted _all_ my old handwritten html to markup and figured out a migration process. thank you very much to this user.
- the help has pretty much the same structure, but online it has moved up a directory from https://hydrusnetwork.github.io/hydrus/help to https://hydrusnetwork.github.io/hydrus. all the old links should redirect in any case, so it isn't a big deal, but I have updated the various places in the program and my social media that have direct links. let me know if you have any trouble
- if you run from source and want a local copy of the help, you can build your own as here: https://hydrusnetwork.github.io/hydrus/about_docs.html . it is super simple, it just takes one extra step. Or just download and extract one of the archive builds
- if you run from source, hit _help->open help_, and don't have help built, the client now gives you a dialog to open the online help or see the guide to build your help
- the help got another round of updates in the second week, some fixed URLs and things and the start of the integration of the 'simple help' written by a user
- I added a screenshot and a bit more text to the 'backing up' help to show how to set up FreeFileSync for a good simple backup
- I added a list of some quick links back in to the main index page of the help
- I wrote an unlinked 'after_disaster' page for the help that collects my 'ok we finished recovering your broken database, now use your pain to maintain a backup in future' spiel, which I will point people to in future
- .
- misc:
- fixed a bug where changes to the search space in a duplicate filter page were not sticking after the first time they were changed. this was related to a recent 'does page have changes?' optimisation--it was giving a false negative for this page type (issue #1079)
- fixed a bug when searching for both 'media' and 'preview' view count/viewtime simultaneously (issue #1089, issue #1090)
- added support for audio-only mp4 files. these would previously generally fail, sometimes be read as m4a. all m4as are scheduled for a metadata regen scan
- improved some mpeg-4 container parsing to better differentiate these types
- now we have great apng detection, all pngs with apparent 'bitrate' over 0.85 bits/pixel will be scheduled for an 'is this actually an apng?' scan. this 0.85 isn't a perfect number and won't find extremely well-compressed pixel apngs, but it covers a good amount without causing a metadata regen for every png we own
- system:hash now supports 'is' and 'is not', if you want to, say, exclude a list of hashes from a search
- fixed some 'is not' parsing in the system predicate parser
- when you drag and drop a thumbnail to export it from the program, the preview media viewer now pauses that file (just as the full media viewer does) rather than clears it
- when you change the page away while previewing media with duration, the client now remembers if you were paused or playing and restores that state when you return to that page
- folded in a new and improved Deviant Art page parser written by a user. it should be better about getting the highest quality image in unusual situations
- running a search with a large file pool and multiple negated tags, negated namespaces, and/or negated wildcards should be significantly faster. an optimisation that was previously repeated for each negated tag search is now performed for all of them as a group with a little inter-job overhead added. should make '(big) system:inbox -character x, -character y, -character z' like lightning compared to before
- added a 'unless namespace is a number' to 'tag presentation' options, which will show the full tag for tags like '16:9' when you have 'show namespaces' unticked
- altered a path normalisation check when you add a file or thumbnail location in 'migrate database'--if it fails to normalise symlinks, it now just gives a warning and lets you continue. fingers crossed, this permits rclone mounts for file storage (issue #1084)
- when a 'check for missing/invalid file' maintenance job runs, it now prints all the hashes of missing or invalid files to a nice simple newline-separated list .txt in the error directory. this is an easy to work with hash record, useful for later recovery
- fixed numerous instances where logs and texts I was writing could create too many newline characters on Windows. it was confusing some reader software and showing as double-spaced taglists and similar for exported sidecar files and profile logs
- I think I fixed a bug, when crawling for file paths, where on Windows some network file paths were being detected incorrectly as directories and causing parse errors
- fixed a broken command in the release build so the windows installer executable should correctly get 'v475' as its version metadata (previously this was blank), which should help some software managers that use this info to decide to do updates (issue #1071)
- .
- some cleanup:
- replaced last instances of EVT_CLOSE wx wrapper with proper Qt code
- did a heap of very minor code cleanup jobs all across the program, mostly just to get into pycharm
- clarified the help text in _options->external programs_ regarding %path% variable
- .
- pycharm:
- as a side note, I finally moved from my jank old WingIDE IDE to PyCharm in this release. I am overall happy with it--it is clearly very powerful and customisable--but adjusting after about ten or twelve years of Wing was a bit awkward. I am very much a person of habit, and it will take me a little while to get fully used to the new shortcuts and UI and so on, but PyCharm does everything that is critical for me, supports many modern coding concepts, and will work well as we move to python 3.9 and beyond

next week

The past few months have been messy in scheduling as I have dealt with some IRL things. That's thankfully mostly done now, so I am now returning to my old schedule of cleanup/small/medium/small week rotation. Next week will be a 'medium size' job week. I'm going to lay the groundwork for 'post time' parsing in the downloader and folding that cleverly into 'modified date' for searching and sorting purposes. I am not sure I can 'finish' it, but we'll see.
Is there a way to download from pixiv with translated tags? I know you can do it with pixivutil but I'd rather download it directly into Hydrus
>>17473
>New Help installed on hard drive
I'm using "Hydrus.Network.475.-.Linux.-.Executable.tar.gz". It looks like there is a problem with Hydrus finding the path. When executing the "Help and getting started guide" command, a new browser window is launched with 4 tabs, and all 4 fail to open the address or target file. See pics.
>>17476 So, taking into account that the Librewolf browser is a Flatpak and it uses a strange path setup, I switched to the Waterfox browser for a perhaps more normal path. But the results were the same.
(24.68 KB 989x636 Screenshot_20220303_175840.png)

(553.17 KB 1366x11654 index.jpg)

(139.34 KB 1406x778 Screenshot_20220303_180653.png)

>>17477 Then I investigated further. Looking at the fourth tab, I noticed that it is looking for the file "index.html" in the help directory, so I dived into the Hydrus directory and found it at:
/media/*spoilered/*spoilered***/Tagged files/Vault 1/Hydrus Network/help/index.html
Clicking on it launched the Librewolf OS default browser, which showed a very strange address (it is a Flatpak browser version) and a messed up page. See pics 1 and 2. Then I opened that file with Waterfox and it showed as it should, looking exactly as awesome as the internet version. See pic 3.
(173.56 KB 1170x1442 1642350572863.jpg)

hello
>>17450
>try uninstalling it, rebooting your computer, and testing hydrus again
That did not fix it; it still continues playing. I also tried in v475. Also, I noticed that sometimes in the duplicate filter, the "duplicate filter: back" button/shortcut doesn't work, even though the "duplicate filter: skip" button/shortcut does. Sometimes if I make a decision (like setting a file as better/worse) it starts working again; other times I have to close and reopen the duplicate filter.
>>17235 Hey, checking back in after using hydrus for about 2 months now. Love it even more. I was able to understand most things with the help documents - those were very useful. 2 things I have not figured out yet:
Is there a way to set a default zoom level when viewing images in the viewer? I find that it zooms my webms to fit the screen, and at max zoom it causes the viewer to have issues.
Is there a way to disable the watcher, so that copying a link doesn't start to download a thread?
I also have 1 recommendation I think would be cool to add, based on how I use the program - this is just how I use it though, not sure if others would find use out of this or if it would be difficult to do. When you are in the image viewer and you hover on the left of the screen, it shows a list of tags - it would be cool and helpful if at the top, or maybe if you hovered the mouse on the bottom of the screen and it brought up a similar window, you could have a quick select: "pick tags to assign to the image being viewed, based on predefined or manually selected tags". Thanks again for all your hard work - this is an awesome program!
(189.46 KB 1220x956 9f309.png)

Videos won't play in either the bottom-left preview window or the media viewer; both remain blank. They were working fine in v474. I'm using "Hydrus.Network.475.-.Linux.-.Executable.tar.gz".
>>17466 >Thanks for reminding me about this. I've rolled in an update for this header for tomorrow. If you still have the old '56' one, it'll replace it with '97' string. This won't affect you, but it will affect new users and those who have not edited it. Thanks!
(866.32 KB 1354x1500 49865.png)

>>17482 UPDATE. Today I launched Hydrus and the videos are playing fine. I can't explain what happened.
I made a vocaroo.com parser. Surprised one didn't exist already. Unless I just missed it.
>>17454 I am glad we got the HTA fixed finally. Let me know if there is anything else technical you would like help with. But otherwise, am I right in saying your client is working again?

I read your whole post. I'm not 100% sure, but I'm wondering in a sense if for many people these lessons are just learned through scars. I lost 75,000 files when I was younger and lost my drive, a ton of memories and short vids from the Bush years. After that pain, I fixed myself. In the same way, I never touch a hot stove now, but I did when I was young, even though I had been told not to. I'm not expert enough to know the correct answer, but I do know bad things happen no matter what we try, and we can never be perfect after growing up. I hope you feel better in future.

>>17457 Ah damn, I didn't think of that bit. You just want to delete the relationships for files currently in 'my files'? I am not sure that is currently possible, since the system is not about tying files together with ropes (which could be cut), but merging bubbles. If you want to break a bubble apart again, you then have to figure out which parts go into which 'half'. The only tool available atomises the bubbles completely so you can rebuild them. Maybe the 'good' news is that deleted files are by definition out of the client, so you won't see them again, at least for the meantime.

I guess it depends on what you want to do with your duplicates now and in future. When I next do a big push on duplicates, I may be adding more tools here. Maybe I can add a 'pull droplet out of group' kind of action, rather than atomising/dissolving the entire media group. We are still suffering here from terrible UI visibility of what groups look like, which I regret. I'll keep working.

If you just want to set a different 'best' file in each group, then I think you want to look in those same menus for 'make the best file of its group', sometimes called setting the 'king' of a group. This action can also be set in the shortcuts system.
>>17458 One possibility: there's a Docker package on the github: https://github.com/hydrusnetwork/hydrus/pkgs/container/hydrus I don't know Docker stuff much, and I didn't put this package together, but the guy who did runs his client on another box and VNCs in somehow. But if you want a power method for dialing into a full cloud client, I think this is it.

>>17462 Yeah, this would be cool. Not available yet, but more colours would be nice. Also custom namespace sorting, so you can wade through a big list easier for what you want. I'd like to put all the character tags below creator when I 'sort by namespace'.

>>17468 There's an imperfect option under options->media called 'RECOMMEND WINDOWS ONLY: hide and anchor mouse cursor on media viewer drags'. If you turn that off, I think it'll just drag like 'normal'. But I am not sure how good that mode is. I made the drags anchor so you can pan a whole heap while super zoomed, but no worries if you don't like it. Let me know what options you would like and I can roll them in.

>>17469 >>17472 In the future, url classes will support multiple domains. Wildcard would be ideal, yeah, so we can finally support *.booru.org.
>>17486 >I guess it depends on what you want to do with your duplicates now and in future. I always delete better/worse because I treat worse duplicates as inferiors with no reason to be in the DB, whereas extremely similar files with any small difference are alternates. The worse files currently in the DB shouldn't be in the DB (in theory), which is why I want to re-check those files and either delete them or change the relationship into alternates. But at the same time I want to keep the relationships with the deleted files because sometimes I try to import deleted files and I want to be able to see the "better" alternate that's in the client if it exists.
>>17476 >>17477 >>17478 Damn, it looks like it is breaking the request up into several and splitting the launch path by whitespace. Your "/Tagged files/Vault 1/Hydrus Network/help/index.html" became ( "/Tagged", "files/Vault", "1/Hydrus", "Network/help/index.html" ), and the browser freaked out at what the domain of each of those was. Normally in the terminal you can resolve this by surrounding the path with quote marks ("), but the way I launch the browser here, that should be kind of happening automatically. You are the guy we were doing 'external programs' browser stuff with, right? You could try to replace your %path% bit in the options with "%path%", but I am not sure if that will work. I'll do some experimentation on my side. Maybe in some cases I need to add that myself.

>>17480 Damn, I'm sorry for the trouble. I'll add an option to force a fix for you.

>>17481 Great, I am really glad you like it.
1) The options available are under options->media. Check the multi-column list below. It has debug-tier UI, but have a poke around and you'll be able to set either canvas zoom, largest fit, or 100% for broader 'animation' style filetypes or specifically 'image/jpeg' kind of thing.
2) No quick shortcut way to pause this, I don't think, but hit network->downloaders->watch clipboard for urls->watcher urls. I'd like more options here, like watching just for urls from site x, y, z, but it'll need some UI (actually not unlike the media UI in 1).
Your recommendation for quick-editing the tag list in the media viewer is a good idea. Several users have asked for something like this, and I agree. Tags in hydrus are complicated, so sometimes having quick/simple UI is difficult, but there really should be an easier way to do the common things you do. If it helps in the meantime, you can have the tag manager open while you browse media in the viewer. Just hit page up/down on the tag manager's tag text input box when it is empty. Also, under file->shortcuts, you can actually assign, under the 'media' shortcut set, shortcut keys to 'set tag x', which is super helpful for very quick tag processing.
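To illustrate what I think is happening (just a sketch of the idea, not the actual hydrus launch code, and the browser path here is made up):

import shlex

launch_path = '/usr/bin/waterfox %path%'
file_path = '/Tagged files/Vault 1/Hydrus Network/help/index.html'

# a naive whitespace split chops the path into pieces, which matches what your browser received
print( launch_path.replace( '%path%', file_path ).split( ' ' ) )

# wrapping the token in quotes keeps the whole path together when the command is split shell-style
print( shlex.split( launch_path.replace( '%path%', '"' + file_path + '"' ) ) )

The first print gives the broken '/Tagged', 'files/Vault', ... pieces; the second keeps the path as one argument.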
>>17482 >>17484 Sorry for the trouble. If this happens again, please hit help->about when the client is broken. If mpv fails to import, opening the 'about' should give you a popup with import failure information that you can send to me. If help->about doesn't give any hassle, and says a nice 'mpv api version = 1.109' or something, then the problem will be something else. Let me know if you learn any more. Could be something like a graphics driver update crashing mpv while it was loaded. Everything is a little duct tape when it comes to mpv.

>>17485 Thanks, this is awesome! I'll check it out and roll it into the defaults. I often want to import a vocaroo mp3 into my client, I never thought to make a downloader for the site.

>>17488 Ah, this might be more doable. Try hitting 'system:file relationships' and then selecting the button on that panel for 'system:is not the best quality file of its group'. This will search for anything that was set 'worse' at any point. You can right-click and say 'show duplicates' to see the file with its duplicates. This is super awkward, but it should work. However, I just tested this and it looks like saying 'set these as alternate' when the two files are already duplicates just doesn't do anything atm. I will see if I can figure out something of a fix here.
(203.08 KB 362x360 e9hnm7.gif)

>>17489
>You are the guy we were doing 'external programs' browser stuff with, right?
Yes, same fag.
>You could try to replace your %path% bit in the options with "%path%", but I am not sure if that will work.
It works!!! That did the trick. Thanks.
I have a lot of ram, but an hdd as my only drive. How can I best take advantage of the ram in Hydrus to minimize slowness? Does it make sense to keep a bunch of pages open so the files stay in memory? Currently I have a lot of search pages open that have a fair degree of overlap. Do the files that are loaded in a page take up more resources for each page they're open in, or do they only take up the resources once?
(28.54 KB 1498x284 file_id.png)

Is it common for the file_id to be a bigger number than the number of files in my database, even when I haven't deleted a single file?
anyone know why i can drag and drop images from a chromium browser but not a firefox browser? seems like a small issue but any help with this would be amazing
My client.db was recently corrupted. Luckily I had a backup from two days earlier so I didn't lose much data. I now have about a dozen tiles that have tags but no actual file, showing up as the Hydrus logo on a red background. Is there some way to recover these? What caused them to appear? The only file that I restored was the client.db, so my guess is that it's some sort of mismatch between the old client.db and current filesystem.
Is there a way to download all images from a single hashtag on twitter?
I had a good week bringing two neat new features: a user has implemented tag autocomplete search for the Client API, and I managed to get 'post time' saving from the downloader system to the database and augmenting 'modified time'. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=tigTaObQORM windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v476/Hydrus.Network.476.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v476/Hydrus.Network.476.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v476/Hydrus.Network.476.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v476/Hydrus.Network.476.-.Linux.-.Executable.tar.gz I had a good week integrating two new features: autocomplete tag search in the client api, and saved 'post times' from downloaders. post times 'Modified time' is neat, but it isn't super useful for downloaded files--since the file was only just added to your hard drive, it'll always be the same as import time. This week I integrate the 'source time' we parse from various websites to improve the modified time for downloaded files. The objective is to make 'modified' a fairly decent 'this file was completed around this time' number for searching and sorting purposes. I'm being careful not to overwrite anything. The client now saves its best 'source time' for every different site it downloads from and then the earliest of those + modified date is used as the aggregate modified date. You don't have to do anything, but with luck you will see your new watcher and gallery files start to get some nicer modified times in the media viewer and thumbnail right-click menu. There are many potential future expansions here. I can grab better post times from sites, show and edit every stored timestamp in UI, allow clever search and sort of those specifically (e.g. 'sort all these files by their danbooru post time), and most importantly make some sort of maintenance system to retroactively fetch a good post time for all the files we downloaded before post times were saved. This is just a first step. I integrated the new 'archive time' too during this work. This now shows in the media viewer and thumbnail right-click menu similarly and can be sorted by. Search will come soon. I also want to think about optionally filling in some estimate dummy data here for all the files we archived before timestamps were tracked. client api autocomplete search A user has helpfully written Client API routines for autocomplete tag search, which is something I have had trouble fitting in. I appreciate the work. This should let the various tools that use the Client API do more tag browsing in future. The documentation is here: https://hydrusnetwork.github.io/hydrus/developer_api.html#add_tags_search_tags There are several ways to expand this too, so if you are an API dev interested in it, let me know how it goes. full list - domain modified times - the downloader now saves the 'source time' (or, if none was parsed, 'creation time') for each file import object to the database when a file import is completed. separate timestamps are tracked for every domain you download from, and a file's number can update to an earlier time if a new one comes in for that domain - I overhauled how hydrus stores timestamps in each media object and added these domain timestamps to it. now, when you see 'modified time', it is the minimum of the file modified time and all recorded domain modified times. this aggregated modfified time works for sort in UI and when sorting before applying system:limit, and it also works for system:modified time search. 
the search may be slow in some situations--let me know - I also added the very recent 'archived' timestamps into this new object and added sort for archived time too. 'archived 3 minutes ago' style text will appear in thumbnail right-click menus and the media viewer top status text - in future, I will add search for archive time; more display, search, and sort for modified time (for specific domains); and also figure out a dialog so you can manually edit these timestamps in case of problems - I also expect to write an optional 'fill in dummy data' routine for the archived timestamps for files archived before I started tracking these timestamps. something like 'for all archived files, put in an archive time 20% between import time and now', but maybe there is a better way of doing it, let me know if you have any ideas. we'll only get one shot at this, so maybe we can do a better estimate with closer analysis - in the longer future, I expect import/export support for this data and maintenance routines to retroactively populate the domain data based on hitting up known urls again, so all us long-time users can backfill in nicer post times for all our downloaded files - . - searching tags on client api: - a user has helped me out by writing autocomplete tag search for the client api, under /add_tags/search_tags. I normally do not accept pull requests like this, but the guy did a great job and I have not been able to fit this in myself despite wanting it a lot - I added some bells and whistles--py 3.8 support, tag sorting, filtering results according to any api permissions, and some unit tests - at the moment, it searches the 'storage' domain that you see in a manage tags dialog, i.e. without siblings collapsed. I can and will expand it to support more options in future. please give it a go and let me know what you think - client api version is now 26 - . - misc - when you edit something in a multi-column list, I think I have updated every single one so the selection is preserved through the edit. annoyingly and confusingly on most of the old lists, for instance subscriptions, the 'ghost' of the selection focus would bump up one position after an edit. now it should stay the same even if you rename etc... and if you have multiple selected/edited - I _think_ I fixed a bug in the selected files taglist where, in some combination of changing the tag service of the page and then loading up a favourite search, the taglist could get stuck on the previous tag domain. typically this would look as if the page's taglist had nothing in it no matter what files were selected
- if you set some files as 'alternates' when they are already 'duplicates', this now works (previously it did nothing). the non-kings of the group will be extracted from the duplicate group and applied as new alts
- added a 'BUGFIX' checkbox to 'gui pages' options page that forces a 'hide page' signal to the current page when creating a new page. we'll see if this patches a weird error or if more work is needed
- added some protections against viewing files when the image/video file has (incorrectly) 0 width or height
- added support for viewing non-image/video files in the duplicate filter. there are advanced ways to get unusual files in here, and until now a pdf or something would throw an error about having 0 width

next week

Back to multiple local file services, which is in endgame. I have a ton of ancient file handling code to simply clean to newer standards.
>>17498 The tag autocomplete feature is going to be a lot of fun to try out. I've been making a booru-style frontend in Flask just as a way to teach myself more about Python and various webdev and server stuff, and the client API and its associated Python module are a lot of fun to work with. As for the future of tag searching with the API, it would be cool to be able to retrieve something like the "selection tags" section of the Hydrus search bar, along with an optional limit for how many results to retrieve, so we'd get all the related tags for a particular search.
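For anyone else who wants to poke at the new endpoint, here is a quick sketch of the sort of call I mean (assuming the default Client API port and an access key you have already set up; double-check the parameter and response shapes against the API docs linked above):

import requests

# assumptions for this sketch: default client api address and a made-up access key
API_URL = 'http://127.0.0.1:45869'
HEADERS = { 'Hydrus-Client-API-Access-Key' : 'your_access_key_here' }

# /add_tags/search_tags is the new autocomplete endpoint (client api version 26)
response = requests.get( API_URL + '/add_tags/search_tags', headers = HEADERS, params = { 'search' : 'samu' } )

# expecting something like { 'tags' : [ { 'value' : 'character:samus aran', 'count' : 123 }, ... ] }
print( response.json() )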
>>17499 here I asked about post times months ago too, since I've been wanting a way to look through an artist's work from a historical perspective. Thanks again, devanon
>>17492 Keep your client open all the time in the background. Its normal maintenance routines will keep touching your OS's disk cache of its database and stuff like your disk directory tree for quick file access. Hit up options->speed and memory and keep the image cache and thumbnail cache decently high. No need to go crazy into GB territory, but you can give it 256MB and a slow timeout for thumbs without breaking a sweat and save yourself some latency if you revisit an old page.

You are right that if you have overlap, the client saves space. It only needs one media object loaded even if the file is in several different pages. There is some extra memory used in that case for things like the multiple thumbnail grids, but surplus data there gets cleaned up every few minutes. I am not sure it makes sense to keep files loaded in pages. That'll slow your boot time significantly and is probably too large a speculation for real world use. I would suggest trying to have a lean client, since I/O will always be at a premium, but leave the client on and leave pages you are working on open while you'll want them. Let me know how you get on and if you learn anything neat about running this way!

>>17493 Yeah, you are good. I use file_id for perceptual hash definitions too, so you'll get 2 for every static image. I might use it for some other things too. There's nothing to worry about.

>>17494 That stuff is normally permission related. Windows won't let you DnD from a 'running as admin' program to one that isn't, and vice versa. There are subtler permissions too, less powerful than 'running as admin', that also stop it. Hydrus->discord has run into this problem a lot, as discord has some setting about whether the drag and drop is move vs copy. Unless I have misunderstood--if this is more complicated than just the DnD mouse going to the 'NO' cursor, can you explain it more? If I try to DnD an image from Firefox, I usually get some 'temp bitmap' bullshit that doesn't work--is that what you mean?
>>17495 Since you rolled back the .db files but not the client_files structure, those will be files that you deleted in the two days since you made the backup. Your old database thinks you still have them, but when it checks the disk, they are gone. The maintenance to handle this is either wading through any popup errors and deleting the files again, or hitting up database->file maintenance->manage scheduled jobs, clicking 'add new work', clicking the 'all media files' button to load all your stuff, and then adding an appropriate job type. "If file is missing, remove record" is probably right for your situation, since you want to re-delete stuff you already deleted, but if you aren't certain from the files and their tags that that is what happened, then be more careful. There are other jobs you can add there. I can also help more if you like.

>>17496 I don't think so. The twitter searcher I have in the defaults only works on username. If you feel extremely brave, you can try writing your own downloader to share, but if I remember right, the twitter API we use now is pretty hellish to go through.

>>17499 >>17500 Thanks, I am glad you are enjoying things. That's an interesting idea about getting 'selection tags'. Let me know how autocomplete search works and we'll see where we go from there. Another user is working on set/delete 'notes' support on the API btw, should be in for next week.
>>17502 Notes support sounds great too. For source time, I took a look at some new downloads in the media viewer, but I didn't see a reference to source time - just time imported and file modified. Am I looking in the wrong place? Auto-complete could be a while because I'm a noob, but it'll be fun to work on. I'll let you know how it goes. also #7777 get?!
(32.96 KB 597x755 Capture.PNG)

Who is the retard here, the parser or me? The parser seems to have broken and I am trying to fix it. It recognizes file urls as gallery urls, so I'm trying to exclude "show" from galleries.
I had a good week. As well as some general fixes and more file modified time work, there is also a user-written expansion to the Client API that adds 'notes' editing support and a bit of fun. The release should be as normal tomorrow. >>17504 I wrote out a long explanation, but I just caught myself up in overly worded bullshit that wasn't clear. Long story short, your thing may be matching wrong because of a bug I fix in tomorrow's release. Please give that a go and let me know how you get on. Also make sure that URL Class only has one path component when you are ready to try this again. It isn't matching in your screenshot because there is both 'post' and the 'post(?!...)' regex. I'll fix up the error here to be more helpful too.
>>17503 Lucky 7s Yeah, for source time, I have never recorded it permanently until now. It comes from the downloader--if you hit the 'file log' on a downloader page, you'll see it there. Some downloader parsers can grab it, and subscriptions and thread watchers will calculate 'x posts in the past y time units' file velocity to determine when to next run. Now the value is piped down to the database and folded into an expanded, aggregated modified time, which is now the minimum of the disk modified time and all recorded source times for all domains the client has seen the file on. Tomorrow I expand source time again by also checking for any modified-time in the response headers of the actual file GET. A bunch of sites that never gave a good source time like safebooru will now do it inherently.
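Roughly, the aggregation is just this (a sketch of the idea, not the actual hydrus code):

# the displayed/searched 'modified time' is now the earliest timestamp known for the file
def get_aggregate_modified_time( disk_modified_time, domain_source_times ):
    
    return min( [ disk_modified_time ] + list( domain_source_times.values() ) )

So a file you only saved to disk yesterday but that was posted to a booru in 2019 will now sort and search as 2019.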
The hydrus video player won't play one of my video files. It's an opus/av1 webm. hydrus recognizes it as "matroska" instead of "webm" so maybe that's related to the issue.
>>17507 also, the video plays when I open in mpv directly. The builtin mpv player that hydrus uses is what's not working.
https://www.youtube.com/watch?v=mVG77xTPH6E windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v477/Hydrus.Network.477.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v477/Hydrus.Network.477.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v477/Hydrus.Network.477.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v477/Hydrus.Network.477.-.Linux.-.Executable.tar.gz I had a good week. There is a mix of small work, an expansion to the Client API, and a bit of fun. misc The network engine now pulls source time directly from file downloads if the server provides a date. This means a whole bunch of sites that haven't provided a good source time until now suddenly do, which improves the new aggregate modified time and also subscription and watcher check timings. With our new apng parsing tech, I fixed up apng duration parsing, which was until now relying on a fallback default of 24 fps if ffmpeg couldn't figure it out. All apngs are scheduled for another scan. I fixed an important precedence bug in the network engine that matches URLs to URL Classes. If you have been making a downloader and had Gallery URLs matching as Post URLs, please give it another go. Sorry for the trouble! Client API A user has written a cool expansion to the Client API, which I appreciate. You can now fetch, set, and delete file notes! If you are an API dev, check out the documentation for the new calls (fetching notes is now a parameter on file_metadata). He also made technical improvements. The Client API now supports far longer GET requests, up to 2MB of URL if needed, and the whole API has tentative and experimental support for CBOR instead of JSON if you wish. file history chart Another user has for some time been playing around with drawing charts of a client's file history in matplotlib using raw database data. You may have run his script yourself. We have been talking for a while about integrating this into hydrus, and this week I finally got around to implementing it in QtCharts. Please hit 'view file history' on the help menu to see the new chart. This is a simple, first attempt on my end, but it should show you a cool history of how many files you have had. If you have been using the client for any time, the lines for deleted files and inbox will be very incomplete, but this data will fill out in time. This was fun to do, and I learned a bit more about QtCharts. I fixed a couple of ugly things in the bandwidth bar chart I made before, and I think I'll do some more here too. I have a thought to start drawing some of our other data, let's say file size or number of file views, and seeing if pareto or normal distributions pop out. Anyway, let me know what you think, and feel free to share your file history chart! full list - misc: - the network engine now parses the 'last-modified' response header for raw files. if this time is earlier than any parsed source time, it is used as the source time and saved to the new 'domain modified time' system. this provides decent post time parsing for a bunch of sites by default, which will also help for subscription timing and similar - to get better apng duration, updated the apng parser to count up every frame duration separately. previously, if ffmpeg couldn't figure it out, I was just defaulting to 24 fps and estimating. now it is calculated properly, and for variable framerate apngs too. 
all apngs are scheduled for a metadata regen this week. thanks to the user who submitted some long apngs where this problem was apparent - fixed a bug in the network engine filter that figures out url class precedence. url classes with more parameters were being accidentally sorted above those with more path components, which was messing with some url class matching and automatic parser linking - improved the message when a url class fails to match because the given url has too few path components - fixed a time delta display bug where it could say '2 years, 12 months' and similar, which was due to a rounding issue with 30-day months on, for example, the 362nd day of the year - fixed a little bug where if you forced an archive action on an already archived file, that file would appear to get a fake newer archived timestamp in the UI until you restarted - updated the default nitter parsers to pull a creator tag. this seemed to not have been actually done when previously thought - the image renderer now handles certain broken files better, including files truncated to 0 size by a disk problem. a proper error popup is made, and file integrity and rescan jobs are scheduled - . - file history chart: - for a long time, a user has been generating some cool charts on file history (how many files you've had in your db over time, how many were deleted, etc...) in matplotlib. you may have run his script before on your own database. we've been talking a while about integrating it into the client, and this week I finally got around to it and implemented it in QtCharts. please check out the new 'view file history' underneath Mr Bones's entry in the help menu. I would like to do more in this area, and now that I have learned a little more about QtCharts I'd like to revisit and polish up my old bandwidth charts and think more about drawing some normal curves and so on of other interesting data. let me know what you think! - I did brush up a couple of things with the bandwidth bar chart already, improving date display and the y axis label format
- . - client api: - a user has written several expansions for the client api. I really appreciate the work - the client api now has note support! there is a new 'add notes' permission, 'include_notes' parameter in 'file_metadata' to fetch notes, and 'set_notes' and 'delete_notes' POST commands - the system predicate parser now supports note system preds - hydrus now supports bigger GET requests, up to 2 megabytes total length (which will help if you are sending a big json search object via GET) - and the client api now supports CBOR as an alternative to JSON, if requested (via content-type header for POST, 'cbor' arg for GET). CBOR is basically a compressed byte-friendly version of JSON that works a bit faster and is more accessible in some lower level languages - cbor2 is now in the requirements.txt(s), and about->help shows it too - I added a little api help on CBOR - I integrated the guy's unit tests for the new notes support into the main hydrus test suite - the client api version is now 27 - I added links to the client api help to this new list of hydrus-related projects on github, which was helpfully compiled by another user: https://github.com/stars/hydrusnetwork/lists/hydrus-related-projects next week Next week is cleanup. I will focus on clearing out old code, particularly in file handling for multiple local file services.
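If you are curious how the new source time logic works, it is conceptually something like this (a simplified sketch, not the actual client code; the real thing also folds the result into the new aggregate modified time):

from email.utils import parsedate_to_datetime

# simplified sketch: pull a source time from a file download's 'Last-Modified' header
def get_domain_modified_time(response_headers, parsed_source_time=None):
    last_modified = response_headers.get('Last-Modified')
    if last_modified is None:
        return parsed_source_time
    try:
        header_time = parsedate_to_datetime(last_modified).timestamp()
    except (TypeError, ValueError):
        return parsed_source_time
    if parsed_source_time is None:
        return header_time
    # the header only wins if it is earlier than any parsed source time
    return min(header_time, parsed_source_time)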
Here's my chart! Two notes: 1) My inbox is a little higher than it should be because I rescheduled everything for a second scan some years ago. 2) Guess when 'many queries on one subscription' was added. :^)
>>17507 >opus codec Opus is a troublesome codec, even with ALL codec packs installed it has no sound in Linux... for more than a year already.
>>17511 >for more than a year already Actually, the problem is older, I would say at least 3 years.
>>17510 >Chart Pretty cool. Thanks OP.
(21.37 KB 616x482 ClipboardImage.png)

Is running programs with files as parameters on the to-do list? I get zip files from some subscriptions and I thought it would be cool to automatically unzip them and copy the tags, but then I noticed that just having a generic "run program on these files and do something with the tags and the result" would be better. >>17509 Mine has been a mostly constant mess. I hate it.
(8.79 KB 1878x966 Files.PNG)

(1.34 MB 10000x9052 files.png)

(957.35 KB 10000x7260 hydrusFilesLargeStroke.png)

>>17510 Nice, I played around with making these a couple years ago, but I never bothered to keep my script maintained. It seems my collecting has gotten worse since then.
(7.25 MB 1280x720 Phantasmagoria.webm)

>>17511 ffmpeg has had opus support proper for years, are you sure you aren't mixing it up with something else? (vid related has opus audio) AV1 support is a lot more niche everywhere though, so that might be the failure point.
>>17505 I haven't had the time to try it yet, but I'll report my findings on the weekend, thank you
>>17498 >- added a 'BUGFIX' checkbox to 'gui pages' options page that forces a 'hide page' signal to the current page when creating a new page. we'll see if this patches a weird error or if more work is needed I'm >>17480. This worked! Thank you!
>>17516 >ffmpeg has had opus support proper for years I found the problem. Your sample OPUS sound video plays perfectly, the sound is loud and clear. Then, investigating why my videos have no sound, I found out that those faulty OPUS videos have sound only on the left channel but not on the right; my laptop's left speaker is dead, and that's why I was led to believe the problem was that codec. Yup, after years using this machine, just now I found out that one speaker is dead. Thanks for your sample anon.
(4.90 KB 802x582 chart.png)

>>17510 Looks like it doesn't account for weirdos like me who aggressively prune. Deleted and inbox seem to be cut off. Maybe there could be options to fit all three on the chart, or just to try and fit one to focus on it. I expected files to look like that, it's a shame I lost my old database and had to restart last year though.
(156.94 KB 468x413 bones.png)

>>17520 Mr. Bones for reference.
does your entire database live off an 8GB thumbdrive?
(37.50 KB 500x500 ClipboardImage.png)

I use Hydrus 457 from the AUR on Arch. Which major features were added between 457 and 477? Do I need to do anything before I update (except backup ofc)? How do I know, generally, if I need to do something when I haven't updated for 20 versions (except for reading all 20 changelogs ofc)?
>>17522 Nah, I just don't see the point of keeping files that I never look at. I'm not using Hydrus as a download-fucking-everything archive of everything I've ever seen online, but as a curated collection of high-quality files. I usually run a few downloaders with broad searches (I very rarely browse boorus directly) and then run them through the archive/delete filter, most gets trashed. Every once in a while I'll run through my archive with the filter as well to get rid of files that seemed decent at the time but upon closer inspection aren't worth it. It makes backing up less of a pain too.
(212.28 KB 1920x1080 K's files.jpg)

Thanks!
retard here, how do I restore files from my database back to hydrus? I accidentally deleted files I didn't mean to and I believe they still exist in the db
>>17526 if it was recent, they should still be in the trash domain. You can untrash them from there.
>>17524 I kinda wish I had the fortitude to delete more of my collection. The scare of sadpanda briefly going down reinforces my hoarding mentality though
>>17526 If you permanently deleted your files, you can get the URLs by opening a file search page as if you were going to view them, then switching the file domain to 'all known files'. In the search box, select system:file service, then 'is', 'deleted from', 'all local files'. That should bring up the history of all the files you have deleted. Now just copy all their URLs and stick them into the URL downloader. It will redownload those files if they are still available.
>>17507 >>17508 Thank you for this report. Can you send me the file, or point me to where I can get it, or is it private? As >>17516 says, AV1 has limited support, although most modern players should be ok with it, I'd have guessed MPV would be fine. Also, I am not 100% confident on this, but I believe a webm is strictly: 1) A matroska file 2) vorbis or opus audio 3) vp8 or vp9 video Or at least I think that is what I read on some google spec some years ago. All webms are matroskas, but not all matroskas are webms. I think it used to be just vorbis/vp8, and then they added the others in an expansion, I guess when they knew they could get hardware decoding for them on phones. Things may have changed though, I know google are pushing AV1 (in DASH form or something?) on youtube much more now, so maybe they will roll it into webm when phones can decode it faster or whatever.
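If you want to check what a particular file actually contains, here is a quick sketch that tests that strict definition with ffprobe (this is just an illustration, not how hydrus inspects files, and it assumes ffprobe is on your PATH):

import json
import subprocess

# quick sketch: does this matroska file fit the strict webm definition?
# (vp8/vp9 video, vorbis/opus audio)
def looks_like_strict_webm(path):
    result = subprocess.run(
        ['ffprobe', '-v', 'quiet', '-print_format', 'json', '-show_streams', path],
        capture_output=True, text=True, check=True)
    streams = json.loads(result.stdout).get('streams', [])
    video_ok = all(s['codec_name'] in ('vp8', 'vp9') for s in streams if s['codec_type'] == 'video')
    audio_ok = all(s['codec_name'] in ('vorbis', 'opus') for s in streams if s['codec_type'] == 'audio')
    return bool(streams) and video_ok and audio_ok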
>>17530 Hi devanon, would it be possible to have the hydrus media viewer report the file dimensions in the window name? I was thinking of some way of automatically shifting landscape / portrait files between my horizontal and vertical monitors
I've made a few downloaders already, but a question remains in my mind: how does the URL class know what parser to call? Is it based on the examples from the parser? Maybe it's in the documentation and I missed it; if so, sorry about that.
>>17513 >>17514 >>17515 >>17520 >>17521 >>17525 Thanks for all these! >>17520 I messed up the inbox logic a bit also, I'll fix it this week. I'll see if I can fix the axes for guys with a lot of deleted, too, although if you have a billion deleted, maybe we don't want to collapse your 'current files' into a thin line. I am impressed that some people just have the discipline to keep their input and output balanced. I've always failed at this, so much that I wrote a bunch of software to help me fail even harder. >>17523 Lots and lots of little features, no special instructions I don't think. I guess, to see if there are any special things, you can give a quick scan of the changelog here https://hydrusnetwork.github.io/hydrus/old_changelog.html and just look at the section headers. I make sure that anything super important will be highlighted there as the top thing for a particular release. As always though, I recommend people run their backup before they do any update. Then there are no worries about trying any update--if it goes wrong, you can just roll back and dive into the changelog if you need to.
>>17531 Sure, this sounds neat! I'll put the normal info string up there. I'd like to know how you get on with this. >>17532 Yeah, I link it all with the example URLs. Hit up network->downloader components->manage url class links to see what the client currently has linked together. You can change them manually, but the client tries to guess what should go with what based on what example URLs a parser has and which URL Classes are the best match for them. The 'try to fill in gaps...' button on that dialog fires that auto-match routine if you want to test it out. Btw I fixed an important bug in that auto-matching this week. Since multiple URL Classes can match the same URL, I also need to apply a precedence, which generally follows the rule of the more complicated the URL Class, the higher priority it has to be the match. For instance, with these two URLs: https://www.hentai-foundry.com/pictures/user/Tixnen (Gallery page) https://www.hentai-foundry.com/pictures/user/Tixnen/952616/Leyloria (Post page) Both HF Gallery and Post URL Classes will technically match the second, so the more complicated URL Class, Post, will be chosen. I'm semi-happy with how this has all worked out. URL Classes are pretty overloaded with options now, so I think in an overhaul of the system I may rethink things a little, maybe add some more flexible rules with conditions or something. Also, regex domain matching would be great!
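If it helps to picture the precedence, it is basically a sort like this (a very simplified sketch with made-up attribute names, not the real hydrus objects):

# very simplified sketch of the precedence rule: among url classes that match,
# prefer the more 'complicated' one--more path components first, then more parameters
# (made-up attribute and method names, just to illustrate)
def choose_best_url_class(url, url_classes):
    matches = [uc for uc in url_classes if uc.matches(url)]
    if not matches:
        return None
    return max(matches, key=lambda uc: (len(uc.path_components), len(uc.parameters)))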
>import ptr >like a quarter tops of my db has tags O-ok. Guess I'm downloading Hatate, I really hope it works like I think it does. As a random, very low priority suggestion a "newfag/retard troubleshoot FAQ" might be neat. I freaked out for like 10 minutes wondering why my images weren't showing when I typed tags in only to realize it was because I had accidentally clicked "exclude current tags," a button I hadn't even properly parsed as existing until I figured it out.
hi, I am new to hydrus and it is seriously the best tool for its use. My question is, I tried setting up a local booru but all I get is an ASCII woman and this message: This is local booru, a client local booru. Software version 477 Network version 20 It responds to requests from any host. I could not find any documentation on how to get past that and have the local booru working. I am also new on linux. Thanks for your time
>>17536 You need to provide a page key in the url to your booru, and from there it will serve the particular booru page you set up in the services window. That said, the booru is a legacy feature at this point, and I believe it won't be seeing much or any support in the future. Consider setting up the client API instead and using an app like Hydrus Web to browse your collection on the web.
>>17535 A tip about Hatate: If your images end up being found on a booru, consider sending the matching file page to hydrus rather than just importing the tags directly to the files you have in there. Doing it this way gives you a url link to the source and can sometimes mean you get a higher quality version of the image or have a chance to find parent/child images too. The duplicates filter can sort out the details.
>>17537 thanks, I am following the instructions on the hydrus website. In my resources monitor I can see the server from hydrus, but in the hydrus server administration service, when I do the test address or I create the account with the init registration token, it can't seem to find the server
I'm manually toggling a tag on images using a keyboard shortcut. Is there a way to know which images I've reviewed previously so that I don't end up viewing the same image again? I have to close hydrus each day and lose track of where I was in this endeavor. Essentially, I want something like the archive/delete filter, but for applying a tag OR removing the image from my queue if the tag isn't applicable.
>>17538 That's actually a good tip. I was just gonna be lazy with it, but thinking about it I have a non trivial amount of not-optimal res images because people on halfch*n LOVE uploading samples.
I had a good week. I cleaned a heap of code, fixed some bugs, brushed up the new file history graph, and wrote some small extensions to the Client API. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=eGybwV3U9W8 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v478/Hydrus.Network.478.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v478/Hydrus.Network.478.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v478/Hydrus.Network.478.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v478/Hydrus.Network.478.-.Linux.-.Executable.tar.gz I had a good week mostly fixing some bugs and cleaning things up behind the scenes. There's nothing super big to highlight, but I did improve the new file history chart (help->view file history). The axes are a bit nicer, and I fixed a small counting logic bug in the 'inbox' line. full list - misc: - if a file note text is crazy and can't be displayed, this is now handled and the best visual approximation is displayed (and saved back on ok) instead - fixed an error in the cloudflare problem detection calls for the newer versions of cloudscraper (>=1.2.60) while maintaining support for the older versions. fingers crossed, we also shouldn't repeat this specific error if they refactor again - . - file history chart updates: - fixed the 'inbox' line in file history, which has to be calculated in an odd way and was not counting on file imports adding to the inbox - the file history chart now expands its y axis range to show all data even if deleted_files is huge. we'll see how nice this actually is IRL - bumped the file history resolution up from 1,000 to 2,000 steps - the y axis _should_ now show localised numbers, 5,000 instead of 5000, but the method by which this occurs involves fox tongues and the breath of a slighted widow, so it may just not work for some machines - . - cleanup, mostly file location stuff: - I believe I have replaced all the remaining surplus static 'my files' references with code compatible with multiple local file services. when I add the capability to create new local file services, there now won't be a problem trying to display thumbnails or generate menu actions etc... if they aren't in 'my files' - pulled the autocomplete dropdown file domain button code out to its own class and refactored it and the multiple location context panel to their own file - added a 'default file location' option to 'files and trash' page, and a bunch of dialogs (e.g. the search panel when you make a new export folder) and similar now pull it to initialise. for most users this will stay 'my files' forever, but when we hit multiple local file services, it may want to change - the file domain override options in 'manage tag display and search' now work on the new location system and support multiple file services - in downloaders, when highlighting, a database job that does the 'show files' filter (e.g. to include those in trash or not) now works on the new location context system and will handle files that will be imported to places other than my files - refactored client api file service parsing - refactored client api hashes parsing - cleaned a whole heap of misc location code - cleaned misc basic code across hydrus and client constant files - gave 'you don't want the server' help page a very quick pass - . - client api: - in prep for multiple local file services, delete_files now takes an optional file_service_key or file_service_name. 
by default, it now deletes from all appropriate local services, so behaviour is unchanged from before without the parameter if you just want to delete m8 - undelete files is the same. when we have multiple local file services, an undelete without a file service will undelete to all locations that have a delete record - delete_files also now takes an optional 'reason' parameter - the 'set_notes' command now checks the type of the notes Object. it obviously has to be string-to-string - the 'get_thumbnail' command should now never 404. if you ask for a pdf thumb, it gives the pdf default thumb, and if there is no thumb for whatever reason, you get the hydrus fallback thumbnail. just like in the client itself - updated client api help to talk about these - updated the unit tests to handle them too - did a pass over the client api help to unify indent style and fix other small formatting issues - client api version is now 28 next week
I am feeling good about multiple local file services. Most of the cleanup this week was for that, and now there are only about three things left to do before we can start playing with it for real--UI and some importer code to handle imports to multiple locations, UI to present deletes and undeletes for multiple locations, and UI and db code to do move/copy across locations. I'll push on these in the coming weeks. Next week will be a 'small jobs' week, and I would like to catch up on github issues in particular.
(8.22 KB 1920x1080 chart.png)

>>17543 Here is what the chart in >>17520 looks like in v478, full screen. Sturgeon's law holds strong, though I think he was a bit conservative with his numbers. I don't know how feasible it would be, but I think it would be best if there were checkboxes for each line. Certainly not a high priority request, just think it would be nice.
Hey devanon, Is there a way to find all the duplicates that have been downloaded? I've looked at your duplicates finder, but it looks like it's for ones that are very close together. I know it's because it will take a LONG time to find exact hash matches, especially in a collection as large as mine ( over a million pics ). What I was wanting is a system that could, say, take a hash, and then search through the database for other same hashes ( as I'm guessing, if 2 or more pics have the same hash, they are the exact same pic? ). And if there are 2 or more exact duplicates, to delete the duplicates, and merge all the tags together. I know this would take a LOT of time and computing power, but I've got the time to spare right now. Any way it could be implemented? Thanks!
Hi devanon! You mention that if I have a disk accident but still have my databases, there's a "try to re-download missing files from sources" feature which I think is extremely cool. I have a huge mechanical disk for most files and a small-ish but very performant SSD where my databases live, as well as _some_ files. I do not know how "this file goes to this disk, that file goes to the other" is decided (under the "weight" system) but I was wondering if there was any way of only storing files that have a source on one disk (which I could afford to lose as I could theoretically get them back easily) and all files without a source on the other disk (which I'd triple backup with my databases). Does it make any sense? Of course I know that having backups of everything is best, but storage is expensive these days. Thanks for all your work, devanon!
Does Dev like anime titties?
Two questions: 1. How do I search for files only from a specific site in my DB? This includes files imported using the simple downloader 2. How do I search for files that contain ZERO tags in my DB? Thanks.
>>17548 1. would be "system:known url" and 2. would be "system:untagged".
>>17535 >As a random, very low priority suggestion a "newfag/retard troubleshoot FAQ" might be neat. Yeah, my help gets a bit wordy. It works for some people but not all. A user helped me recently rework all the help to a new templated system that is easier for other people to edit, and some other users are going to be fleshing out a simpler 'what you need to know' guide for people who want the cliff notes (and ESL people, who I tend to nuke with bullshit words by accident). There's the beginning of it here with this recently inserted page: https://hydrusnetwork.github.io/hydrus/gettingStartedOverview.html I expect it to expand in the coming weeks and months. I hope to be reworking my own help, splitting things up now we have nicer tables of contents and hopefully updating some of my ancient screenshots. >>17536 >>17539 Hey, just to let you know, the local booru and the server are actually different things, and they are both advanced. I do not recommend them for new users. Some other users actually wrote this document to give to people asking about it: https://hydrusnetwork.github.io/hydrus/youDontWantTheServer.html The local booru is simpler though, and I can help you get it running. It runs from the client, not from the server, so you can kill the server process. To add a new share, select some thumbnails and then right-click->share->on local booru. This gives you a dialog to set a name and stuff if you want, and then a space to copy the internal/external (to your network) links. You can test with the 'internal' link, and send the 'external' to your friends. Hit services->review services->local->booru->local booru to see all your shares and the links again. Hit up services->manage services and your 'local booru' edit window to customise a host override when you copy an external link. (e.g. you can replace your external IP address with a no-ip.org redirect or similar). If you are not confident about how a link might be different inside and outside a network, no worries. I'll just say turn the local booru off for now and have more of a play with the regular client. The local booru is old code that I threw together. It isn't very good, and some users are working on much better full booru replacements now that use the new Client API. These new replacements will allow tag search and all sorts of cool things, rather than my old local booru which is really just a gallery page to share. Let me know if you have any more trouble!
>>17540 I would like to write a custom archive/delete filter, both in terms of the workflow of 'action and move on', and also adding new sorts of 'inbox', but this has been a long time thought and it has not happened yet. It will be a lot of work, so I can't promise a timeline. So, your best bet for now is to figure out a custom workflow that works for you using the metadata we can already edit. One thing I used to do is give all files of a certain sort a quick 'to process' tag on 'my tags'. Something like 'filter this for good stuff'. Then I load up a search page like:
filter this for good stuff
system:limit=256
And then I go through it, setting ratings or whatever, and then untag them (with a shortcut) when I am done with the file. Basically we are copying the way the inbox works here. These days I don't use a tag but a 'like/dislike' rating, which you can click on/off a bit easier than a tag. If you are new to them, you can add new ratings under services->manage services. I personally find them more useful as markers for things like 'this is cool to post as a reaction image in a thread' or 'read this later' than actual ratings, ha ha, but there you go. Another option, if your system always applies a state (e.g. if your processing queue is to add gender, so you are always adding one or more of male or female or futa or whatever), then you can have a search page that is:
-male
-female
etc...
-or-
-gender:anything
If you are using namespaces. Then by doing the work you automatically exclude it from the queue. But this doesn't work for everything.
>>17544 Thanks. Yeah, I agree--it is cool to be able to see that, but not all the time. QtCharts is super powerful, so I think adding some checkboxes or something will be doable. >>17545 I would like to build this system. In the recent months, I have improved my 'pixel hash' matching system, which I think is what you are really talking about here. I now have the ability to determine if two images have exactly the same pixels. Don't worry about database speed, I can do this super fast. I now have this tech in place when you search for duplicates to process in the filter. You can say to include pixel dupes in the queue or not, and the filter itself says when two files are pixel dupes. My plan has been that once we know this system works well (and I haven't had to bug fix it in a while, so I feel good now), I can start on an automatic system, the first automatic system in duplicates, that will, if the user wants, start to apply duplicate actions behind the scenes and even on file import, based on custom rules the user decides. The best example I can think of is a png dupe of a jpeg. You never want to keep png in this case. Clipboard.png posts of real images are a bane, and I would love a system that recognised these pairs and automatically dupe merged to the jpeg and threw away the png. I expect to hardcode this first rule. Once we have that system in place and are happy it works, I can start generalising the ruleset and eventually we'll have a client that can make many duplicate decisions automatically (but always optional, only ever if the user wants), leaving the more complicated and interesting problems to you. It'll need a lot of work though, mostly in better merge tech and all the UI needed. As a side thing, I'll integrate file search for 'this file has a pixel dupe' at some point too, in to system:file relationships.
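The core trick of pixel duplicate matching is just hashing the decoded pixels rather than the file bytes, conceptually like this (a toy sketch, not the actual client code, which precomputes and caches these hashes in the database):

import hashlib
from PIL import Image

# toy sketch: hash the decoded pixels, not the file bytes, so a lossless png
# re-save of a jpeg still matches the original jpeg
def pixel_hash(path):
    with Image.open(path) as im:
        im = im.convert('RGBA')  # normalise mode so jpeg/png of the same image compare equal
        data = im.tobytes()
        size = f'{im.width}x{im.height}'.encode()
    return hashlib.sha256(size + data).digest()

def are_pixel_dupes(path_a, path_b):
    return pixel_hash(path_a) == pixel_hash(path_b)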
>>17546 Unfortunately this isn't possible yet, but it might be in future. At the moment, if you look in your file folders, you'll see 'fxx' and 'txx' sub-folders, 'f' being files and 't' thumbnails. The xx goes from 00 to ff, which makes 256 folders, and if you look inside, every file in there will start with that prefix. So 523390d4dfdea768724f4b3715d02a6eab653877d6108b865513d78d04646048.apng goes inside 'f52', with a thumbnail in 't52' and so on. That longass hexadecimal string is the file's hash (SHA256). If you aren't familiar with hashing, it is basically a fixed id for a file. If I look at a file and generate its SHA256 hash, I get the same result as if you do it, so we can talk about the same file without having to share the whole thing. The tech behind the PTR and other hydrus stuff works on this concept. Hashes don't change, so unfortunately that means hydrus storage is fixed to each sub-folder. Which folder each file goes in is pseudorandom but fixed. At the moment. In future I'd like to spend some more CPU time figuring out where things go. A lot of users want to put their archive on slow cheap storage and inbox on nicer storage. I can't promise when this will happen. Your best bet, if you want to not worry about losing things, is to maintain a regular backup of everything. I know it sucks, but you can get a 4TB WD Passport for less than a hundred bucks, so in man hours and stress it really is cheaper than wangling yourself through a complicated system. Here's my nagging help about backups if you would like any help with setting it up, and you can read my sob story that makes me so bananas on this topic: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up >>17547 I like anything if the anatomy is drawn with skill. Twice as much if she has elf ears. >>17548 Hit up 'system:number of tags' to see 'system:untagged'. It'll be a button on the panel.
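In python terms, finding where a file lives is about this simple (a sketch of the layout described above; the folder names here are examples, not your exact install):

import hashlib
import os

# sketch of the storage layout: the first two hex characters of the sha256
# pick the 'fxx' subfolder (thumbnails use 'txx' with the same prefix)
def sha256_hex(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

def expected_location(client_files_dir, file_hash_hex, ext):
    prefix = file_hash_hex[:2]
    return os.path.join(client_files_dir, 'f' + prefix, file_hash_hex + ext)

# e.g. a hash starting '52...' ends up in 'client_files/f52/52....jpg'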
>>17553 >I like anything >anything HMMM...
>>17553 >Your best bet, if you want to not worry about losing things, is to maintain a regular backup of everything. I know it sucks, but you can get a 4TB WD Passport for less than a hundred bucks, so in man hours and stress it really is cheaper than wangling yourself through a complicated system. I'm not the one you replied to, but I usually make a backup of my DB on the same hard drive the DB is on, then immediately compress the entire backup folder to a tar file, then back it up to three different flash drives. Is this a good backup practice?
>>17552 Thanks man! Look forward to a real pixel duplicate kill system. I'm sure I've got a ton of dupes.
>>17552 >The best example I can think of is a png dupe of a jpeg. You never want to keep png in this case. Wouldn't the PNG be higher quality? Why wouldn't someone want to keep the PNG rather than the JPEG?
>>17552 How do I get Hydrus to check for duplicate images?
>>17558 pages → pick a new page → special → duplicates processing
>>17557 you misunderstood what anon was saying >there is a jpeg source image >someone lazily screenshots the jpeg and saves it as png >this bloats the filesize without increasing quality >converting the new png screenshot to jpeg at this point will make it lower quality than the original jpeg it's best to just delete the png file, it should have never been created to begin with
>>17551 I'm currently using the third method you listed where it's a page showing images missing the tag I want to add. Then I press a shortcut key to apply xyz_tag or a not_xyz so I don't end up viewing that file again. My adhoc system would be a step closer to the archive/delete workflow if my xyz_tag shortcut could doubly be used for 'next image'. Would allowing multiple actions for a shortcut be something you're interested in adding?
I know it's a very new file format but JXL file support would be cool. I was disappointed (but not surprised) that I couldn't import the ones I have currently.
>>17555 The usual backup cliche is the 3-2-1 rule: 3 copies of the data, on at least two different kinds of media, with at least one stored off site. One way you could do this is by having one copy on a different internal drive, another copy on an external hard drive/SSD, and a third copy on a flash drive with either external backup being stored fairly far from you (friend's/parent's house, etc). That would be 3-2-1 if the internal and external were both the same type of drive or 3-3-1 if they were different (HDD/SSD). In general the higher each number is the better, but 3-2-1 is supposed to be the bare minimum for data you consider really important. That said, your current method is probably better than most people's. I'm only doing 2-2-1 now that I think of it.
One problem I've noticed with the quick and dirty processing is that the 'show some random potential pairs' button is actually random, so clicking it may show you the same set multiple times, which I guess is intended behavior but it slows down progress. Replacing it with a deterministic method of traversing the list of potential pairs would be nice.
Regarding import folders, what's the difference between disabling the "check regularly" checkbox and enabling the "currently paused" checkbox? They both stop hydrus from autoimporting from the folders, right? So what's the difference? I didn't see a tooltip or info on the help pages about this.
I had an ok week with a variety of work. I fixed some bugs, tweaked some UI (including a neat change to shift+select of thumbnails), improved the new file history chart, and added to the Client API. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=P7MsTw9s03o windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v479a/Hydrus.Network.479a.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v479a/Hydrus.Network.479a.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v479a/Hydrus.Network.479a.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v479a/Hydrus.Network.479a.-.Linux.-.Executable.tar.gz I had an ok week doing a mix of work. highlights I made it so when you shift-select some thumbnails, you can now move 'back' to deselect what you just selected. This also remembers what was previously selected before the shift-select started, so it works basically like an undo. I like how this works, but as part of it I had to make every thumbnail 'hit' focus in the preview viewer, which is not how all selects worked before. I already find this annoying, so I think I am going to make the system more clever and add some options around this behaviour. I think I improved the duplicate filter's zoom locking, particularly when one of a pair is portrait and the other is landscape. It should generally be more 'stable' now, but let me know if you still have any trouble. WebPs should now show transparency correctly! The new file history chart has another round of better math, an 'archived files' line, and you can hide the deleted files line if it is too big. I reworked a little help to make some 'ok, I know the basics, what next?' things clearer to find. If you missed learning about the autocomplete dropdown, tag wildcards, or OR searching, please check here: https://hydrusnetwork.github.io/hydrus/getting_started_searching.html full list - misc: - when shift-selecting some thumbnails, you can now reverse the direction of the select and what you just selected will be deselected, basically a full undo (issue #1105) - when ctrl-selecting thumbnails, if you add to the selection, the file you click is now focused and always previewed (previously this only happened if there was no focused file already). this is related to the shift-select logic above, but it may be annoying when making a big ctrl-selection of videos etc.. so let me know and I can make this more clever if needed - added file sort 'file->hash', which sorts pseudorandomly but repeatably. it sounds not super clever, but it will be useful for certain comparison operations across clients - when you hit 'copy->hash' on a file right-click, it now shows the sha256 hash for quick review - in the duplicate filter, the zoom locking tech now works better™ when one of the pair is portrait and the other landscape. it now tries to select either width or height to lock both when going AB and BA. it also chooses the 'better' of width or height by choosing the zoom that'll change the size less radically. previously, it could do width on AB and height on BA, which led to a variety of odd situations. there are probably still some issues here, most likely when one of the files almost exactly fills the whole canvas, so let me know how you get on - webps with transparency should now load correctly! previously they were going crazy in the transparent area. all webps are scheduled for a thumbnail regen this week - when import folders run, the count on their progress bar now ignores previous failed and ignored entries. it should always start 0, like 0/100, rather than 20/120 etc... 
- when import folders run, any imports where the status type is set to 'leave the file alone' is now still scanned at the end of a job. if the path does not exist any more, it is removed from the import list - fixed a typo bug in the recent delete code cleanup that meant 'delete files after export' after a manual export was only working on the last file in the selection. sorry for the trouble! - the delete files dialog now starts with keyboard focus on the action radiobox (it was defaulting to ok button since I added the recent panel disable tech) - if a network job has a connection error or serverside bandwidth block and then waits before retrying, it now checks if all network jobs have just been paused and will not reattempt the connection if so (issue #1095) - fixed a bug in thumbnail fallback rendering - fixed another problem with cloudscraper's new method names. it should work for users still on an old version - wrote a little 'extract version' sql and bat file for the db folder that simply pull the version from the client.db file in the same directory. I removed the extract options/subscriptions sql scripts since they are super old and out of date, but this general system may return in future - . - file history chart: - added 'archive' line to the file history chart. this isn't exactly (current_count - inbox_count), but it pretty much is - added a 'show deleted' checkbox to the file history chart. it will recalculate the y axis range on click, so if you have loads of deleted files, you can now hide them to see current better - improved the way data is aggregated in the file history chart. diagonal lines should be reduced during any periods of client import-inactivity, and spikes should show better - also bumped the number of steps up to 8,000, so it should look nice maximised on a 4k - the file history chart now remembers its last size and position--it has an entry under options->gui - .
- client api: - thanks to a user, the Client API now accepts any file_id, file_ids, hash, or hashes as arguments in any place where you need to specify a file or files - like 'return_hashes', the 'search_files' command in the Client API now takes an optional 'return_file_ids' parameter, default true, to turn off the file ids if you only want hashes - added 'only_return_basic_information' parameter, default false, to 'get_metadata' call, which is fast for first-time requests (it is slim but not well cached) and just delivers the basics like resolution and file size - added unit tests and updated the help to reflect the above - client api version is now 29 - . - help: - split up the 'more files' help section into 'powerful searching' and 'exporting files', both still under the 'next steps' section - moved the semi-advanced 'OR' section from 'tags' to 'searching' - brushed up misc help - a couple of users added some misc help updates too, thank you! - . - misc boring cleanup: - cleaned up an old wx label patch - cleaned up an old wx system colour patch - cleaned up some misc initialisation code next week Next week is a medium sized job week. I would like to move the 'notes' system forward. Top priority is to get some preview of notes on the media viewer, next to think about is duplicate file note merging and parsing notes from sites.
(89.29 KB 736x736 ebb.jpg)

>>17567 >I would like to move the 'notes' system forward. Top priority is to get some preview of notes on the media viewer This.
What's the easiest way to share a hydrus db/files instance between two different users on a shared network drive without causing concurrency / data integrity issues?
>>17567 >I had to make every thumbnail 'hit' focus in the preview viewer This is exactly how I want it to work, but since you're saying you find it annoying I'm guessing it's not gonna stay this way. I hope it becomes a configuration setting before you revert this behavior, because it's already making some of my file tagging jobs easier.
Should I stop it, or will it corrupt the database?
>>17559 Thanks!
>>17567 >Next week is a medium sized job week. I would like to move the 'notes' system forward. Top priority is to get some preview of notes on the media viewer, next to think about is duplicate file note merging and parsing notes from sites. Would you consider adding a function for parsing notes from a text file, something like how hydrus parses tags from text files now?
>>17571 You're fine, I've done it hundreds of times. Your worrying about it suggests that you aren't backing things up regularly though. If that's the case then that's your most important issue atm
Is there a way to manually add tags to a group of files, but not if the tag was previously deleted? Like performing a search then selecting all the files to give them tags, but acting like downloaders do, where they won't add the tag if the tag was already deleted from the file. Also, is there a way to search for deleted tags specifically as part of a bigger search query? Like, say, files that have the "blue hair" tag and also have the "female" tag deleted?
Accidentally quit hydrus while it was starting and upon starting again all my pages had been closed. A problem since I had a page with dozens of creator tags filtered out and I couldn't get it back from undo > closed pages or anything. Will pressing pages > sessions > append session backup > exit sessions > (some session) restore my pages or will it do something completely else? I couldn't find any documentation about sessions.
>>17576 Ya it's there to prevent exactly those kinds of situations. Hydrus's documentation isn't very good, but that's mostly because it's both a huge application with a lot of features and also a constantly moving target.
>>17577 That saves me a ton of work, thanks!
>>17574 Just downloaded the PTR before importing my shit to test if it will tag existing entries automatically. Don't really want to do it again, but I'll have backups set up on my NAS later
>>17555 >Is this a good backup practice? Yeah, I think you are good, maybe overkill to go to all three at once, unless you mean it is spread out over the drives. You have probably seen how well the database compresses, too. Should be able to get about -75% at least with a decent 7zip run. I personally do something more like >>17563 . For my different storages, I have a USB drive or network PC that I back up to once a week, and then once a month I back that up to a second 'cold' backup that sits in a safer place. Anything can break at any time and I'll still have at least two copies and worst case I lose a week of work. I migrate my drives from main storage to hot backup to cold as they get older. Even though it sounds excessive to have three copies, when something does break and you are currently stressed, you feel a lot different restoring to a new drive over fourteen hours when you have two of the thing remaining rather than just one. But for most people, especially those starting out, I think one backup once a week is completely fine. Like exercise, the details are far less important than keeping the schedule. >>17557 >>17560 Great example of this is when you are in a thread and see a 'Clipboard.png' filename. That happens when you 'paste' into an upload form, it is the filename your browser gives the file it pulled from your clipboard. This is fine when it is a screenshot of UI or something, something fresh that compresses great to png, but many times it is someone who went 'copy image' on a normal nice jpeg of an anime girl from a program like discord and pasted into the upload form. It pulls the raw bitmap data and makes a new lossless file in a super inefficient format. Just a pet peeve of mine, so I'll attack them first.
>>17561 >Would allowing multiple actions for a shortcut be something you're interested in adding? Yes, definitely. I still have a lot of shortcut work to do before I am ready, but I do want it. Any way I can let people lego-brick their way to custom filters is fine by me. Since you are into this, make sure you are in help->advanced mode and then hit file->shortcuts. You'll see 'custom shortcut sets', which are sets you can turn on and off in the media viewer with the little keyboard icon on the top hover. That whole system originally spawned from a first attempt at custom filters (with custom shortcut sets while they were alive). You might like to play around with some custom sets that assign 1,2,3,4,5 keys to tags or ratings or something and try turning them on and off for a bit. I'll expand this in future as shortcuts gets better integrated, so if you like it, let me know what you'd like for workflow improvements. >>17562 That's Jpeg XL, right? I'm super interested in that format, but my current block is I can only support what either PIL or OpenCV (or, at a stretch, FFMPEG) can handle. Can you post/point me to some example JXL files so I can test them on my end? I think PIL has provisional support, but I haven't tested it yet. It might just be able to read height and width and things but not render. Similar problems with HEIF or AVIF or whatever they are called. Once PIL or OpenCV can do them, I basically just flick a switch, but being so new and generally complicated, it'll take time on their end. Sometimes there's patent bullshit too, but I don't know enough about that end. >>17564 My main hope here is to just improve the filter's queue. I want a whole bunch of options so you can just process pixel dupes or files that are super similar or have a more random set, or focus on finishing one group at a time, etc... The code in the filter is complete hell, and one of these 'medium' weeks I am going to cut it all out and rewrite it, and then I'll be able to work on the db side getting nicer queues. If I get that tech going, maybe I can copy it to the quick and dirty buttons (although really I want to ditch them, they were always a hack patching the filter). >>17565 Sorry, that's a stupid thing on my end where I accidentally exposed some technical stuff to the user. They are both basically the same. But if you say 'doesn't check regularly', you can still fire them manually with the 'check import folder now' menu. The 'pause' state should stop any run and is something I apply after an error. I should clean it up.
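Also, on the JXL question: if anyone wants to check what their own PIL install can actually decode, a quick probe is something like this (a rough sketch using normal Pillow calls; whether .jxl opens at all depends entirely on your Pillow build and plugins):

from PIL import Image, UnidentifiedImageError

# rough sketch: can the local Pillow actually decode this file, rather than
# just recognising the extension?
def pillow_can_render(path):
    try:
        with Image.open(path) as im:
            im.load()  # force a full decode, not just a header sniff
        return True
    except (UnidentifiedImageError, OSError):
        return False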
>>17569 Database on a shared drive is tricky. If the db is small, I would do this: - Store master copy of database on network - When you want to use it on a client computer, sync the db folder to local - Use the client db on your local machine, it points to files on network - When you are done, sync the local db folder back to network I know a user who does this and it apparently works ok. But if you have a 50GB db then it is probably too clunky. --database on network drive-- This is dangerous because a network interruption can cause database corruption. I know some people who got it to work ok though (with backups to be careful). I was just talking to a person today who was actually running their db on a SMB network drive and figured out settings that seem safe. If you have technical experience with this, you might want to try it out, but as always, make a backup beforehand. The comment they had on the underlying CIFS instance was: "need to disable cifs cache with cache=none and byte range locks with nobrl" Which is greek to me, but you may know more. Main thing in this situation, obviously, is to ensure that only one user connects with the shared db at a time. In the longer term future, I hope the Client API will evolve to allow clients to dial in to each other for seamless file service sharing over API. But I can't promise when that will be ready. >>17571 >>17574 Yeah, most of the time you can kill the process, even when the database is busy, and there will be no problems. It might take a minute or more to boot the next time as SQLite cleans up some remaining gubbins from the unclean exit. There's a small chance that killing the process right at the busiest time of a repository processing run will cause a weird id problem, but I have code to detect this and recover from it. If the current hard drive usage is less than 20MB/s, no worries™. The main issue for drive corruption is a power loss or other hardware failure. If the drive is writing a line of 0s to the disk platter because power is dropping, it doesn't matter how safe SQLite's journalling code is.
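For reference, I think those CIFS settings end up as mount options something like this (an untested sketch based on that comment, so please double-check the share path, credentials, and options for your own setup):

# /etc/fstab sketch (untested): disable the cifs cache and byte-range locks
//nas/hydrus  /mnt/hydrus  cifs  credentials=/etc/hydrus-cifs.cred,cache=none,nobrl,uid=1000,gid=1000  0  0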
>>17573 That is definitely a long term goal. Main thing I want for this week is note preview on the media viewer, then I am going to work on two bits of tech: - Note merging object - Note parsing capability The merging object will basically be 'note import options' and will govern things like how to update an existing note if a new note with different content comes in. If the newly parsed note is just the old note but with appended text, we can probably overwrite no problem. Once I have that tech ready, I'll be able to think seriously about duplicate merging and then parsing notes from sites or external import files. Please keep reminding me about all this. Import and export of data to xml, json, txt, or whatever is something I want to do more of. >>17575 Unfortunately I don't have great support for searching or filtering by 'deleted status'. The one place where I do what you are talking about is when I pull tags from a website. The downloader system won't overwrite a deleted tag, but I assume if you are adding them manually, you are ok with overwriting a deleted status. I'd like to have more search options here. I have to do some other things first, but ideally I will be moving my tag search predicate from the current system, where it is basically text, to a full thing that can have service attached to it (e.g. 'samus aran on my tags') and some other things like storage/display tags and including weird characters like [ or not. So, I can't promise it any time soon, but this is something I would like to eventually support. >>17576 Sorry for the trouble! I didn't know it could do that. I'll write a hook to stop it saving exit/last session if it hasn't loaded the initial session yet.
(73.68 KB 611x477 hydrus exception.png)


>>17473 >running a search with a large file pool and multiple negated tags, negated namespaces, and/or negated wildcards should be significantly faster. an optimisation that was previously repeated for each negated tag search is now performed for all of them as a group with a little inter-job overhead added. should make '(big) system:inbox -character x, -character y, -character z' like lightning compared to before Thanks, much appreciated. t. >>17393 I have a bug to report. >v479 on Linux >Open new page->files->all local files >Select "Sort by file:hash" in top left >Put in a "system:limit is 20" search >Expected 20 images to show up >Instead, no results shown and exception is logged in the GUI Files related.
Is there a way to automatically add a file's filename to the "notes" of a Hydrus file when importing? Some of the files have date info or window information if they are screenshots and I'd like to store that information somehow. If not, is there some other way to store the filenames so that they can be easily accessible after importing?
>>17585 >notes I think notes are for when there's a region of an image that gets a label (think gelbooru translations), it's not the best thing for your usecase. The best way would be to have them under a "filename" namespace. In the file->import files dialog, after you've selected your files click on the "add tags before the import" button to open another dialog. This dialog has many more options including importing the filename as a tag under a namespace. By the way Hydrus dev, a standard UI idiom in menus is that when an item in the menu needs further user configuration in a dialog before the item can be considered 'done', the item's name ends with a "...". So the import files item would be called "import files..." and many other items in the dropdown and right click context menus should also end with a "...". Items don't end with a "..." if they don't need any configuration to finish their action after the user clicks on them (confirmation dialogs not counted as configuration). For example, items that open an external link or directory, or toggle a setting on or off, or whose purpose is explicitly to open some dialog and no more (as in the dialog is the end, not a means to an end).
Is it safe to let the downloaders rip and grab entire boorus? If not, how do we get around that? We've got several large booru.org domains with 80K+ images that we need to back up. In the past I remember sites were very pissy about scrapers and I'd rather not eat bans or IP blocks for domains I'm still using.
>>17581 >Can you post/point me to some example JXL files so I can test them my end? I can't post any because 8chan itself doesn't yet support jxl, but the canonical example of a jxl file is the one on the community website: https://jpegxl.info/logo.jxl
(150.20 KB 810x358 autocomplete-win.jpg)

>>17499 >>17501 Tag autocomplete on the Client API is working splendidly so far in my little Flask app. A simple (literally copy-paste from the website) HTMX implementation on the front end was all I needed for it to work perfectly. I added a new function to Cryzed's hydrus-api Python module to cover the /add_tags/search_tags API request. I think I got the tag_service_name and tag_service_key parameters correct, but I'm not using them at this point, so I can't be certain. I can share it if that would be helpful, but it's pretty straightforward anyway. A useful addition to /search_tags would be a limit parameter so that only X number of tags with the highest values are returned. Probably easier and faster to do that on the API side than to receive the full list on the front end app and truncate the results.
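For anyone interested, the new function is basically just this (simplified from what I actually added; the endpoint and header names are as I read them in the Client API docs, so verify the parameter details against your version):

import requests

# simplified sketch of hitting /add_tags/search_tags directly
def search_tags(api_url, access_key, search, tag_service_name=None):
    params = {'search': search}
    if tag_service_name is not None:
        params['tag_service_name'] = tag_service_name
    r = requests.get(
        api_url + '/add_tags/search_tags',
        params=params,
        headers={'Hydrus-Client-API-Access-Key': access_key})
    r.raise_for_status()
    return r.json()['tags']  # list of {'value': ..., 'count': ...}

# e.g. search_tags('http://127.0.0.1:45869', MY_KEY, 'samu')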
Could you add namespace relationships like you have tag relationships? It would be helpful to be able to set namespaces as parents of others, so that any tag in a child namespace also turns up in searches for the equivalent tag of the parent namespace. Having siblings for namespace could also help me to remove a lot of redundant namespaces. As it is right now, I just have to do everything on a per tag level, which is tedious and brittle.
Is there any way to get this thing to automatically pull tags from the galleryinfo.txt files in my multiTB hoard of shit I've downloaded off exhentai in the last decade or so?
>>17587 I do HUGE rips off of sites. Just tags I'm interested in, but I go as far back as the site will let me. Some let you go back years, all the way to the beginning. I do this on about 6 boorus, and when I began, I let Hydrus go 24/7 for about 3 months. None of the sites seemed to have a problem with it, none of them ever left me messages or banned me. Just don't try to hack past their rules, and you should be ok.
>>17591 You could write a parser for it and point the parser at that file. I've made a parser for Deviant Art that got tags, but I don't know how you would point it at a file. Also interested in how you could get it to read a file.
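I don't remember the exact layout of those galleryinfo.txt files, but if they have the usual 'Tags:' line of comma-separated tags, pulling them out is trivial; something like this (hypothetical, check the real format in your own downloads first), and then you could push the tags at the Client API or write them to .txt sidecars for import:

# hypothetical sketch: pull tags out of a galleryinfo.txt, assuming it has a line like
# 'Tags: tag one, tag two, ...' -- check the real format in your own files first
def tags_from_galleryinfo(path):
    tags = []
    with open(path, encoding='utf-8', errors='replace') as f:
        for line in f:
            if line.lower().startswith('tags:'):
                tags = [t.strip() for t in line.split(':', 1)[1].split(',') if t.strip()]
                break
    return tags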
>>17502 It's been a while, but it seems that the "If file is missing, remove record" didn't work. I ran it and forgot about it, then noticed today they were still there. I ran it again and it said 0 files were missing despite no thumbnail and giving error 101 when I try to open them.
I had a good week. There's a simple first version of showing notes in the media viewer and several quality of life UI improvements. The release should be as normal tomorrow. >>17584 Thank you for this report. I have fixed it for tomorrow!
(99.72 KB 406x433 1.jpg)

Is there something I can do with this error? Also thank you for your hard work!
>>17595 What version of glib2 does the linux build of hydrus use? I recently reinstalled my system and it doesn't want to function because of an error: >>8048 ./client: symbol lookup error: /usr/lib64/libgio-2.0.so.0: undefined symbol: g_module_open_full
https://www.youtube.com/watch?v=R1t6iNG28zI windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v480/Hydrus.Network.480.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v480/Hydrus.Network.480.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v480/Hydrus.Network.480.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v480/Hydrus.Network.480.-.Linux.-.Executable.tar.gz I had a good week. Notes now display on the media viewer. notes Notes have always been a slightly hidden system, a bit like ratings were. Today is a step forward in exposing them. Any file that has notes (you can start adding notes to a file by hitting manage->notes on their right-click menu) will now show them in the media viewer, just below the top-right hover window. They get their own hover window too, if you mouse over them. If you click on a particular note, the 'edit notes' dialog opens on it. This is a first version, and a little ugly, but I'm happy we now have something I can iterate on in future. If you are a big notes person, please let me know how it works best and worst for you. If you have unusual font style, size, or colour, let me know if it goes crazy or sizes too short or tall. While working on this, I rewrote the media viewer's hover windows to be more sensible, something I have been planning for a long time. They are now 'embedded' into the parent canvas, which should reduce a variety of jank behaviour--particularly, if you now click a hover, the main media viewer window no longer loses focus. There is still some hackery in the system to clean up, but I hope it'll work better overall for you. Unfortunately, I just did not get to note merge in the duplicates system or note parsing. That'll have to be for the future. the rest A user is working on a neat 'gallery share' system that plugs into the Client API, here: https://github.com/floogulinc/hyshare . It looks like a great replacement for my old 'local booru', so if you are interested in sharing groups of files straight from your client with friends over an attractive booru-like interface, check it out! I copied the 'file log' and 'search log' button menus, where you can do en masse actions like 'retry all failed' and 'export all to clipboard', to both the log review panels and the downloader/watcher list right-click menus. It is now possible to do big actions on logs without highlighting anything. Just a small thing, but when you select a gallery in the gallery downloader page, the focus moves straight to the query text input, so you can start typing immediately. full list - file notes and media viewer hover windows: - file notes are now shown on the media viewer! this is a first version, pretty ugly, and may have font layout bugs for some systems, but it works. they hang just below the top-right hover, both in the canvas background and with their own hover if you mouseover. clicking on any note will open 'edit notes' on that note - the duplicate filter's always-on hover _should_ slide out of the way when there are many notes - furthermore, I rewrote the backend of hover windows. they are now embedded into the media viewer rather than being separate frameless toolbar windows. this should relieve several problems different users had--for instance, if you click a hover, you now no longer lose focus on the main media viewer window. 
I hacked some of this to get it to work, but along the way I undid three other hacks, so overall it should be better. please let me know how this works for you! - fixed a long time hover window positioning bug where the top-right window would sometimes pop in for a frame the first time you moved the mouse to the top middle before repositioning and hiding itself again - removed the 'notes' icon from the top right hover window - refactored a bunch of canvas background code - . - client api: - search_files/get_thumbnail now returns image/jpeg or image/png Content-Type. it _should_ be super fast, but let me know if it lags after 3k thumbs or something - you can now ask for CBOR or JSON specifically by using the 'Accept' request header, regardless of your own request Content-Type (issue #1110) - if you send or ask for CBOR but it is not available for that client, you now get a new 'Not Acceptable' 406 response (previously it would 500 or 200 but in JSON) - updated the help regarding the above and wrote some unit tests to check CBOR/JSON requests and responses - client api version is now 30 - . - misc: - added a link to 'Hyshare', at https://github.com/floogulinc/hyshare, to the Client API help. it is a neat way to share galleries with friends, just like the the old 'local booru'
[Expand Post]- building on last week's shift-select improvement, I tweaked it and shift-select and ctrl-select are back to not setting the preview focus. you can ctrl-click a bunch of vids in quick silence again - the menu on the 'file log' button is now attached to the downloader page lists and the menu when you right-click on the file log panel. you can now access these actions without having to highlight a big query - the same is also true of the search/check log! - when you select a new downloader in the gallery download page, the keyboard focus now moves immediately to the query text input box - tweaked the zoom locking code in the duplicate filter again. the 'don't lock that way if there is spillover' test, which is meant to stop garbage site banners from being hidden just offscreen, is much more strict. it now only cares about 10% or so spillover, assuming that with a large 'B' the spillover will be obvious. this should improve some odd zoom locking situations where the first pair change was ok and the rest were weird - if you exit the client before the first session loads (either it is really huge or a problem breaks/delays your boot) the client will not save any 'last/exit session' (previously, it was saving empty here, requiring inconvenient load from a backup) - if you have a really really huge session, the client is now more careful about not booting delayed background tasks like subscriptions until the session is in place - on 'migrate database', the thumbnail size estimate now has a min-max range and a tooltip to clarify that it is an estimate - fixed a bug in the new 'sort by file hash' pre-sort when applying system:limit next week I would like to push multiple local file services some more. Probably some more infrastructure work in delete and import UI.
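To illustrate the hover-window change described above, here is a minimal Qt sketch of the two approaches; this is not hydrus's actual code, just an assumption-laden toy (PyQt5, hypothetical widget names). A frameless tool window is its own top-level window, so clicking it can pull activation away from the media viewer, while an embedded child widget shares the canvas's top-level window and does not:

import sys
from PyQt5 import QtCore, QtWidgets

app = QtWidgets.QApplication(sys.argv)

canvas = QtWidgets.QWidget()
canvas.setWindowTitle('media viewer canvas')
canvas.resize(800, 600)

# old style: a separate frameless tool window. it is its own top-level window,
# so clicking it can take activation away from the canvas window.
old_hover = QtWidgets.QLabel('old hover (separate frameless window)')
old_hover.setWindowFlags(QtCore.Qt.Tool | QtCore.Qt.FramelessWindowHint)
old_hover.show()

# new style: an ordinary child widget embedded in the canvas. it shares the
# canvas's top-level window, so clicking it does not steal focus from it.
new_hover = QtWidgets.QLabel('new hover (embedded child)', parent=canvas)
new_hover.move(10, 10)
new_hover.raise_()

canvas.show()
sys.exit(app.exec_())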
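And here is a rough sketch of the Client API behaviour listed above: thumbnails now come back with a real image/jpeg or image/png Content-Type, and the Accept header can force a JSON or CBOR response, with 406 meaning CBOR is unavailable. The endpoint paths, the 'Hydrus-Client-API-Access-Key' header, the default port, and the file_id value are my assumptions from the Client API docs rather than quotes from this post; adjust them for your own client.

import requests

HOST = 'http://127.0.0.1:45869'      # default Client API port (assumption)
ACCESS_KEY = 'your access key here'  # hypothetical placeholder
headers = {'Hydrus-Client-API-Access-Key': ACCESS_KEY}

# 1) fetch a thumbnail and read the new Content-Type instead of guessing the format
r = requests.get(
    f'{HOST}/get_files/thumbnail',
    params={'file_id': 1234},        # hypothetical file id
    headers=headers,
)
print(r.status_code, r.headers.get('Content-Type'))  # e.g. 200 image/jpeg

# 2) ask for a specific serialisation via Accept; 406 'Not Acceptable' means
#    this client build cannot serve CBOR, so fall back to JSON
r = requests.get(
    f'{HOST}/api_version',
    headers={**headers, 'Accept': 'application/cbor'},
)
if r.status_code == 406:
    print('CBOR not available on this client, falling back to JSON')
else:
    print('got', r.headers.get('Content-Type'))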
(608.93 KB 1366x768 Screenshot_20220406_182040.png)

(765.98 KB 1366x768 Screenshot_20220406_182326.png)

(516.64 KB 1366x768 Screenshot_20220406_182538.png)

(89.35 KB 1023x1023 clapping.gif)

>>17598 Marvelous!!! It is a nice touch that when a file has more than one note, clicking a note opens the Note dialog with that exact tab in the foreground. One thing that is annoying, at least for me, is that notes open scrolled to the end rather than to the beginning, as you would expect. See pic 3.
Hey, unfortunately 8chan has had some posting trouble and we have lost a week or so of posts. Since this thread is bumplocked, I was going to make a new thread for 481 anyway, so Hydrus General #4 is here: >>>/t/8151 This thread will be moved to the /hydrus/ archive soon. Thanks everyone!

