/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.


(4.11 KB 300x100 simplebanner.png)

Hydrus Network General #2 Anonymous Board volunteer 04/20/2021 (Tue) 22:48:35 No. 15850
This is a thread for releases, bug reports, and other discussion for the hydrus network software.

The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS.

I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST.

Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ .

If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
>>16921 Thanks! Hope you can fix the DA downloader.
>>16926 I didn't sync to the PTR. I wasn't trying to say the following out loud, because I'm just "that one guy" I think, and I've mentioned it before, but I only used hydrus to automate downloading from pixiv, and everything else was only imported with a "filename" tag after ripping it with gallery-dl. So a chunk of it doesn't even have URLs to retry. Not that I can even open hydrus anymore anyway, like I said.

>>16928 Thank you for the high effort reply. I don't know how late I'm responding to it, but I probably could have responded earlier if I had seen it was in reply to me, sorry. The "(You)" for that post expired for me or something.

My "setup" is just a laptop that can fit two hard drives, and my first reaction to the bad sector was unfortunate. I first posted about it on 4chan's /g/:
https://boards.4channel.org/g/thread/84347421#p84353133
https://boards.4channel.org/g/thread/84380844#p84393151

But I was running "HD Tune Pro" to check for bad sectors (which I cancelled at maybe 1/4 completion), because I didn't yet understand (and no one on 4chan told me in response to any of my posts) that when a sector fails, all that corrupted data is just treated as empty space by Windows (at least Windows 7, which I was still using, and was criticized for still using). I don't know if that's the case for every sort of failure, but I saw first hand, when I was moving a rare non-hydrus folder on the drive to my second HDD, that (ignoring possibly corrupted data, which I guess wouldn't necessarily have shown) everything in it succeeded besides a single file. That file couldn't be accessed by Windows, and a day later it went completely missing. I only concluded a few days later that this was because Windows regarded it as empty space and overwrote it. Someone did reply saying that checking for bad sectors via "HD Tune Pro" like I did was reading/writing mass amounts of data to every part of the drive it could reach, but I thought that was only possibly exacerbating the failed sector, not overwriting data.

The failing HDD (also my boot drive) was 2TB and encrypted. It took over two days to decrypt it. I couldn't access the veracrypt install folder (viewing its properties showed it as empty), nor could I even consistently read/write data to the HDD, as I mentioned in the first 4chan /g/ post I linked, so I had to install veracrypt portable to my second HDD and run that to decrypt my boot HDD. Until I realized using the drive was overwriting data, I watched a few streams and youtube videos. I regret that now. I did have the thought that the sheer cost of watching videos was inordinate and should logically be avoided, but I watched maybe two or three hours of video anyway, which obviously is some shit.

After the drive decrypted, I created an image of it using "Macrium Reflect" to store on an external hard drive. Unfortunately I first merely cloned the drive to the external, hoping to boot from it, to avoid booting from the dying hard drive again (my "setup" only being a laptop meant I had to shut down my PC to replace the second HDD with the replacement HDD to be able to put data on it). But when I tried booting from the clone on the external hard drive, it didn't even try to do so. So I unfortunately had to boot the dying hard drive. It failed to boot normally. Then it failed to boot in safe mode (both times it froze/stalled indefinitely).
Then I tried booting it normally again, and it worked, but I had to cancel the "chkdsk" fix it was going to run automatically within six seconds had I not cancelled. So I rebooted the dying hard drive, and only then did I make an image of it. I actually cloned it to my replacement HDD first, then I made the image. It's unfortunate the image was the last thing I made. Nothing about how I approached this was ideal, even looking past the fact that I lost data because I had no backups. Thank you for the patient and comprehensive post, and the resources. Cheers.

I have so far just given up on archiving. I will try my best to see what can be done to recover the corrupted data on the drive. My boot drive is currently the replacement hard drive I cloned the dying hard drive to, only I ran "chkdsk" on the replacement before using it, hence the corrupted data showing up as that quantified 23 gigs of empty space in Windows. The dying hard drive hasn't had "chkdsk" done to it, and neither has the image of it, nor am I going to fuck with the dying drive or the original image of it; only copies of it.

I did want to say, to anyone reading this and thinking it would never happen to you: this isn't just losing 23 gigs of a single artist; this is losing 23 gigs across everything you have ever archived. Provided recovery is impossible, you can never trust anything you've formerly archived to be a complete body of work ever again. This isn't the same as no longer feeling safe in your home after a break-in; this is literally your entire database changing from complete to your never being sure, in an instant. Let alone that I can't even boot my hydrus, because I lost shit under the hood, so the only "sorting" I have left is the order in which the images were saved.

Literally, if you have anything of value, be arsed enough to sell it to afford a backup. Before buying anything else for yourself, buy a backup for your data. I only hope you will never have to fathom what it feels like to have your formerly flawless-integrity data ruined. I've heard horror stories of losing data before, but on some level I thought I could afford it happening to me. If it were anything but hydrus, maybe you would have something left. For me, everything I've ever done in hydrus became unusable, incapable of being observed in any way, in an instant. Back up your data.
>>16928 Also, I want to say that the image I made of the dying hard drive using "Macrium Reflect" was with the "exact" copy option, not the default "intelligent sector" copy option. I only checked for this and made sure of it because I've gone through smaller hard drives before, and only cloned them to the bigger hard drive to use the bigger one as a boot drive. My last hard drive was 1TB, and the dying one is 2TB. I would've had a full working 1TB hard drive as an ancient backup had I not wiped it and begun using it as a secondary hard drive. Now my latest backup is a full 500GB drive, which is far older. It's still a quarter of my database, which is something.
>>16931 In hindsight, I suspect my 500GB drive was itself pressed into service as a secondary hard drive; I distinctly remember "upgrading" my secondary hard drive from it to the 1TB one. If I have any backup at all, it's literally years old.
>>16930 Well, this situation seems really bad. The second you were checking the bad sectors, your data was getting overwritten by trash.

For the future, a good way to monitor your drive is to check SMART values. The drive itself usually reports the total number of bad sectors, without you needing to run a scan. Just get a SMART monitoring tool and put it in autostart. At work, if people tell me their drive is failing, I tell them to yank the power cord and never boot that machine again. An fsck or boot with a hosed drive can be suicide, depending on the filesystem. I would urge you to do the same thing. If you have an image, you have time and can experiment; until then, treat the drive like it can die at any second. I can't speak for Windows, but on Linux you must not access the data on the drive in any way: you read the bits off the drive once, to a secure location, and everything else is done with the image file, which is backed up twice, just in case.

I am very sorry; especially for sqlite files, there is probably nothing you can do at this stage, since you already wrote 500GB over the "deleted" data. Add the broken drive that probably wrote random garbage during subsequent boots, and the damage is done. With dbs, a single bit flip can silently corrupt the db and cause failures years down the line. Don't blame yourself too hard for this; as the saying goes: "There are 2 types of people - those that do backups and those that never lost anything important." This is probably the hardest lesson people can learn. Please also remember that /g/ is full of retards, and it is not a tech support board, so trolling is sadly quite common.

I would encourage you to keep archiving, since content will keep vanishing. The old files may be gone, but new content is out there, right now, that will be gone next week. Some day we may have a decentralized and anonymous solution for file sharing, so you may yet get some of those files back from likeminded people who archived them, just like you did. In the meantime, check your drives, check your backups, make sure everything works as expected. Especially with encryption, try to access your backups from a different computer and make sure you have the headers backed up separately. Hopefully your old drive contains all the files that were truly important to you. Good luck!
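To make the SMART advice concrete, here is a rough sketch of automating that check with Python and smartmontools' smartctl. Attribute names and raw-value formats vary by vendor, so treat it as a starting point only, not a definitive monitor:

# Rough sketch only: poll SMART data via smartmontools' smartctl.
# Attribute names and raw-value formats vary by drive vendor.
import subprocess

def reallocated_sectors(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])  # raw value is the last column
    return None

count = reallocated_sectors()
if count:
    print(f"WARNING: {count} reallocated sectors - back up and replace the drive")

Run something like this from cron or autostart and you get the early warning without ever doing a full surface scan on a suspect drive.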
>>16933 I thought I could cope with it before, but I feel like vomiting again. I'm thankful for the honest reply. But I feel sick.
>>16923 Also try testdisk. Just google for it. A little program that is really good at data recovery on a bad disk. I've recovered a lot of data using it.
>>16934 I know exactly how you feel. I also lost a huge part of my collection due to XFS. No backups, no recovery. I swore I would never let that happen again, so after a couple of days, I set everything up again. New database, all subscriptions again, re-adding the PTR... You just need to do something else for a couple of days. Give it a week and you can start over again. After that, build something truly magnificent. It feels great to be 100% confident in your backup strategy, to know your data is safe no matter what. If you're like me, you always had a fear of losing everything one day. But that feeling will be replaced by absolute confidence in your system. It feels great, and you will soon know this too! At this point, I'm even grateful that it happened with my old data, because that feeling of dread and despair is now gone. Just don't try to do it right now; try to calm down first - it will take a couple of days.
>>16913 Fedora friend, I found an ugly workaround, hopefully this works for you as well; I copied my system's libraries (from /usr/lib64/) to my Hydrus directory (while Hydrus was completely stopped). Be sure to have a fresh backup before doing anything though! Here's what I copied over:
- /usr/lib64/libmpv.so.1.109.0 to libmpv.so.1 (comes from mpv-libs-0.34.0-1.fc35.x86_64)
- /usr/lib64/libcrypto.so.1.1.1l to libcrypto.so.1.1 (comes from openssl-libs-1.1.1l-2.fc35.x86_64)
- /usr/lib64/libgmodule-2.0.so.0.7000.1 to libgmodule-2.0.so.0 (comes from glib2-2.70.1-1.fc35.x86_64)

>>16919 Thanks for the reply! I'm not sure how your package script could fix that though; AFAIK it's based off an Ubuntu image, which might just have older packages than what other distributions are using (which should be true for at least Arch and Fedora, probably others). I don't want to ask you to consider changing this; maybe I'll just change up my Hydrus upgrade script to copy what I mention above.
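If anyone wants to script that workaround, a minimal sketch is below. The library paths match the Fedora 35 packages listed above; the hydrus directory is an assumption, and you should only run it with the client stopped and a fresh backup in hand:

# Hypothetical upgrade helper for the workaround above. Paths match the
# Fedora 35 packages listed; HYDRUS_DIR is an assumption for your setup.
# Stop hydrus and take a fresh backup before running.
import shutil

HYDRUS_DIR = "/opt/hydrus"  # wherever your extracted client lives
COPIES = {
    "/usr/lib64/libmpv.so.1.109.0": "libmpv.so.1",
    "/usr/lib64/libcrypto.so.1.1.1l": "libcrypto.so.1.1",
    "/usr/lib64/libgmodule-2.0.so.0.7000.1": "libgmodule-2.0.so.0",
}
for src, dest in COPIES.items():
    shutil.copy2(src, f"{HYDRUS_DIR}/{dest}")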
I'm the anon who put his media files on his storage server about a month ago. If I wanted to install Hydrus on a second or even multiple machines and have them all access that folder on the storage server, is this possible? Or will Hydrus fuck up if multiple clients access it from multiple devices? Could the Client API be utilized maybe, with all the other Hydrus Network clients just connecting to a single one?
>>16920 NTA but that sounds amazing. Is it coming soon? Because I was about to recommend hydrus to a friend but I'll wait a bit if it means that he could start importing to separate sections straight away and not have to relearn the flow.
Hey Hydev! I'm getting 403 errors when I try to load 4plebs threads into the simple downloader. Any idea how I could work around this? I've tried changing the User-Agent header and importing cookies, but neither fixed it.
I had a great week. As well as some little fixes and quality of life work, I improved support for ogv files (oggs with video), and completely reworked the 'presentation' options of importers, making the whole system faster and more user-friendly. The release should be as normal tomorrow.
>>16917 the 2nd option with the tag blacklist worked, ty
https://www.youtube.com/watch?v=Xm3mrwyR2pw

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v463/Hydrus.Network.463.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v463/Hydrus.Network.463.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v463/Hydrus.Network.463.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v463/Hydrus.Network.463.-.Linux.-.Executable.tar.gz

I had a great week. I was able to catch up on the work I wanted and do some neat stuff besides.

import presentation

I completely overhauled the importer 'presentation' options, which you can see under all 'file import options'. These have always been a bit advanced and confusing, so I rewrote them to show a more human-friendly summary like 'show new files', and added text and tooltips to better explain what is going on. It all works a bit faster and cleaner too.

You can also now say 'show inbox files', regardless of 'successful' or 'already in db', which I think may be neat as the default for a watcher page that you process and revisit several times--every time you re-highlight a watcher, it will filter out anything you previously archived.

And I updated the right-click menu on downloader and watcher pages' lists, where it said "show importers' new files", to use the new options. It now tucks everything into a submenu, adds the 'inbox only' option, and if only one importer is selected it says what the default actually is and removes any duplicate action.

other highlights

The client now recognises ogg files with video as ogv! Previously, they were always considered audio, but now the client will make thumbnails, parse resolution and duration, and let you zoom properly with the video player and so on. All existing ogg files will be scheduled for a rescan when you update, so your ogvs should just pop into place.

I think I fixed the deviant art tag search downloader. They changed their tag search routine recently, so I had to change some things behind the scenes. Everyone will get it on update, and the existing 'deviant art tag search' downloader should just work again.

The manage ratings dialog now has copy/paste buttons for quick copying of multiple ratings across sets of files. This is a prototype, so give it a go and let me know what you think. It copies 'null' ratings at the moment, but maybe we'll want a second option too that only copies set ratings.

full list

- misc:
- ogv files (ogg with a video stream) are now recognised by the client! they will get resolution, duration, num frames and now show in the media viewer correctly as resizeable videos. all your existing ogg files will be scheduled for a rescan on update
- wrote new downloader objects to fix deviant art tag search. all clients will automatically update and should with luck just be working again with the same old 'deviant art tag search' downloader
- added prototype copy/paste buttons to the manage ratings dialog. the copy button also grabs 'null' ratings, let me know how you get on here and we'll tweak as needed
- file searches with namespaced and unnamespaced tags should now run just a little faster
- most file searches with multiple search predicates that include normal tags should now run just a little faster
- the file log right-click menu now shows 'delete x yyy files from the queue' for deleted, ignored, failed, and skipped states separately
- the tag siblings + parents display sync manager now forces more wait time before it does work. it now waits for the database and gui to be free of pending or current background work. this _should_ stop slower clients getting into hangs when the afterwork updates pile up and overwhelm the main work
- the option 'warn at this many pages' under _options->gui pages_ now has a max value of 65535, up from 500. if you are a madman or you have very page-spammy subscriptions, feel free to try boosting this super high. be warned this may lead to resource limit crashes
- the 'max pages' value that triggers a full yes/no dialog on page open is now set as the maximum of 500 and 2 x the 'warn at this many pages' value
- the 'max pages' dialog trigger now only fires if there are no dialogs currently open (this should fix a nested dialog crash when page-publishing subscriptions go bananas)
- improved error reporting for unsolvable cloudflare captcha errors
- added clarification text to the edit subscription query dialog regarding the tag import options there
- added/updated a bunch of file import options tooltips
- .
- new presentation import options:
- the 'presentation' section of 'file import options' has been completely overhauled. it can do more, and is more human-friendly
- rather than the old three checkboxes of new/already-in-inbox/already-in-archive, you now choose from three dropdowns--do you want all/new/none, do you want all/only-inbox/inbox-too, and do you want my-files/and-trash-too. it is now possible to show 'in inbox' exclusively, at the time of publish (e.g. when you highlight)
- added a little help UI text around the places presentation is used
- the downloader and watcher page's list right-click menu entries for 'show xxx files' are now a submenu, use the new presentation import options, add 'show inbox files', and if you click on one row say what the default is and exclude other entries if they are duplicates
- .
- boring presentation import options stuff:
- presentation options are now in their own object and will be easier to update in future
- the 'should I present' code is radically cleaned up and refactored to a single central object
- presentation filtering in general has more sophisticated logic and is faster when used on a list (e.g. when you highlight a decent sized downloader and it has to figure out which thumbs to show). it is now careful to only check inbox status on outstanding files
- presentation now always checks file domain, whereas before this was ad-hoc and scattered around (and in some buggy cases led to long-deleted files appearing in import views)
- added a full suite of unit tests to ensure the presentation import options object is making the right decisions and filtering efficiently at each stage
- .
- boring multiple local file services work:
- I basically moved a bunch of file search code from 1 file service to n file services:
- the file storage module can now filter file ids to a complex location search context
- namespace:anything searches of various sorts now use complex location search contexts
- wildcard tag searches now use complex location search contexts
- counted tag searches now use complex location search contexts
- search code that uses complex location search contexts now cross-references its file results in all cases
- I now have a great plan to add deleted files search and keep it working quick. this will be the next step, and then I can do efficient complex-location regular tag search and hopefully just switch over the autocomplete control to allow deleted files search

next week

I managed to fit in some good work on multiple local file services this week as well. Most of the tag search code now works on n file services, with n currently locked to 1, ha ha ha. The next step here is to add a simple cache for all deleted files so I can search their tags quickly. I will move this forward when I next have some time.

Beyond that, I want to get around to adding proper ICC support. This is colour-correction data that some files ship with. It mostly appears in camera photos, but some software puts it in normal static images too. I have been talking with a user about this for a while, and I think there is now a path to do it fast and accurately. I'd love to just have natural ICC support, showing it as a bitmap flag or something and just altering colours as the file specifies.

It is Thanksgiving in the US tomorrow. Thanks everyone!
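To picture the three presentation dropdowns described above, here is a rough sketch of the decision they drive. All names are hypothetical; this is not the actual hydrus code, just the shape of the logic as the release notes describe it:

# Hypothetical names - not the actual hydrus internals, just the shape of it.
def should_present(status, in_inbox, in_trash,
                   which_files,   # 'all', 'new', or 'none'
                   inbox_filter,  # 'all', 'only_inbox', or 'inbox_too'
                   file_domain):  # 'my_files' or 'and_trash_too'
    if which_files == 'none':
        return False
    if which_files == 'new' and status == 'already_in_db':
        # 'inbox_too' rescues an already-known file if it is still in the inbox
        if not (inbox_filter == 'inbox_too' and in_inbox):
            return False
    if inbox_filter == 'only_inbox' and not in_inbox:
        return False
    if in_trash and file_domain != 'and_trash_too':
        return False
    return True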
>>16915
>Looking at the code, if you turn on help->advanced mode, it looks like the file limits on subs expand up to 10,000. Is that enough?
That's enough for now (and until an artist I'm subbed to on e621 hits over 10k and I have some reason to want to re-get all my tags). Thank you!
>The problem is the attaching of 5,000+ import objects to every query you are subbed to. [...]
So, in my mind, I imagine checking a subscription goes something like this:
- Checker decides sub X needs to be checked
- Download gallery pages from site for subscription
- Check linked files against last file downloaded in DB until you find a match
- Once you find a match, download all files that didn't match
What am I missing where you need to load every object a given subscription needs to have? Is it a sqlite limitation (I've only really worked with oracle and postgres)?
>Yeah, this is my big hope with a 'review subscriptions' dialog. [...]
I wasn't thinking anything so grandiose, just a way to cache subscription changes, wait for the subs to pause normally, save the cached changes, and restart them in the background. You'd probably need to only allow one cached change (if a user tried to open the dialog again before the first change had been applied, they'd need to wait).
Also, I missed this suggestion before: I would really like a way to pull and store text descriptions of files.
Is there a way you can list and see all the tags you're using? I've been experimenting a lot with how I tag pictures, and I think I've got a system I like, but the autist in me wants to remove tags I've only used once to clean it up. So is there any way I can just get a list displaying each tag with its count, so I can purge useless tags from the DB?
(153.27 KB 1366x698 1.png)

(133.17 KB 951x699 2.png)

(56.54 KB 523x542 3.png)

>>16946 Sure. 1 - Load all files. See screenshot 1. 2 - Click the background anywhere between thumbnails (sectors highlighted with red squares) to get the list. See screenshot 2. 3 - Sort that list by tag count. See screenshot 3.
>>16878 Is there a way to disable ratings? Or can they not be disabled once enabled?
>>16944 Thanks for fixing the Deviantart gallery downloader. I've managed to get it to start downloading the pics now, but I'm still having one problem with it: it won't download the tags. The tag search itself seems to work, but it won't import the tags with the pics. All I get are the creator: and the title: tags. I've checked to make sure my import tags option is checked and that the page itself has tags. Tag imports from the boorus are working; it's just deviant art. Is there anything else I need to do on my end to get tags importing? Thanks!
>>16943 Thanks for the update devanon! The rating copying works great, but would it be possible to have the manage rating window behave like the manage tags window for the media viewer? In particular, the ability to still scroll through media in the media viewer while having the manage rating window reflect the current image. This will help in streamlining the process of copying a rating to another file.
I tried to use Hatate to tag a lot of my images but I think I fucked up my database as a result. Running the dupes pages tends to spit out this error:

FileMissingException
No file found at path E:\Pictures\!!!!!Hydrus Database\f5a\5a3437d8836919b5d08098bf8cec46a3d19d47032b1aa95c42434da2fd0c66eb.jpg!
Traceback (most recent call last):
  File "hydrus\client\ClientFiles.py", line 1072, in GetFilePath
    ( actual_path, old_mime ) = self._LookForFilePath( hash )
  File "hydrus\client\ClientFiles.py", line 608, in _LookForFilePath
    raise HydrusExceptions.FileMissingException( 'File for ' + hash.hex() + ' not found!' )
hydrus.core.HydrusExceptions.FileMissingException: File for 5a3437d8836919b5d08098bf8cec46a3d19d47032b1aa95c42434da2fd0c66eb not found!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\core\HydrusThreading.py", line 401, in run
    callable( *args, **kwargs )
  File "hydrus\client\ClientRendering.py", line 223, in _Initialise
    self._path = client_files_manager.GetFilePath( self._hash, self._mime )
  File "hydrus\client\ClientFiles.py", line 1076, in GetFilePath
    raise HydrusExceptions.FileMissingException( 'No file found at path {}!'.format( path ) )
hydrus.core.HydrusExceptions.FileMissingException: No file found at path E:\Pictures\!!!!!Hydrus Database\f5a\5a3437d8836919b5d08098bf8cec46a3d19d47032b1aa95c42434da2fd0c66eb.jpg!

Pic related is when it happens in real-time. And before anyone says shit about the pictures, this was a fuck-up and I didn't think of filtering out foot faggotry at the time (nor do I know how to do it without excluding unrelated pictures).
>>16951 Did you maybe leave hatate's 'Send files to recycle bin once imported into hydrus' (Settings > Hydrus API) option on? If so I'd check your operating system's recycle bin, if it has one.
>>16923 >>16930 >>16934 Hey, I am very sorry to hear about your trouble. I know the feeling, trust me. Mine was my 'cool videos' drive dying in about 2006, losing me 75,000 files or so from early internet days. In time I found the fear of it happening again to be a great motivator. If you have any malformed version of the client.db, client.caches.db, client.mappings.db, and client.master.db files, as are normally stored in install_dir/db, there may be some stuff we can recover. Since things are very broken and it won't boot, I suspect the likely outcome--if you want to try--is extracting the good content out manually using SQLite and injecting it into a fresh database. I am happy to help you try to figure this stuff out. If you like, you can email me on a throwaway account (hydrus.admin@gmail.com), and we can work on it one-on-one. In any case, the most important thing to do, if you have those files, is get a backup of them in their current state so we can play around with a copy and see what we can recover. Then please check out "install_dir/db/help my db is broke.txt" as background reading, with SQLite-specific recovery options. Running a clone is usually a good operation, and even if it truncates a whole heap of information, if we put the work in we can see what is recoverable. No promises, but worth trying. Just make sure you keep that copy of the original files as they currently are.
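For reference, a minimal sketch of those first steps, run against a copy of the damaged file only. This assumes the sqlite3 command-line shell is installed; "help my db is broke.txt" remains the real authority here:

# Work on copies only. Assumes the sqlite3 command-line shell is installed.
import sqlite3
import subprocess

con = sqlite3.connect("copy_of_client.db")
print(con.execute("PRAGMA integrity_check;").fetchone()[0])  # prints 'ok' if healthy
con.close()

# the CLI shell (not the Python module) can clone whatever is still readable:
subprocess.run(["sqlite3", "copy_of_client.db", ".clone client_cloned.db"])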
>>16924 Thanks, this sounds doable. I will see if I can expand the options here. >>16937 Ah cool, thanks. We build on Ubuntu 18.04 at the moment, but it looks like Github can do 20.04 as well. It sounds like we could do a couple tests here to see if that works better/worse for most people. >>16939 It was supposed to be done half way through this year, but then my year fell apart! I am working on it on and off right now and would love to have it all done in Q1 2022. I will be adding easy-import/merge tech to let users who currently have two clients merge them together nice and easy, so your friend might want to start now anyway, or just on part of the problem. I hope for this all to be flexible and stress free and non-complicated once it is in. Let's see how I do!
>>16952 I did, and restoring everything that was deleted in my Hydrus files seems to have made it all work again, thanks.
>>16938 This is not really possible, unfortunately. Trying to share file storage is likely to cause some deleted files when a maintenance routine notices a file it wasn't expecting, and trying to run multiple clients on the same actual database files on a network location will corrupt the database. My long term plan is to do exactly as you say and have clients hook into others using the Client API. This is related to the 'multiple local file services' tech I'm currently working on, too, where I want you to basically be able to search a client remotely and see thumbnails and so on, just as a separate partition besides 'my files'. This will also let you share some of your files with friends across the internet.

>>16940 Hmm, I am not sure. I get a 403 when I try a test here too, and the error page looks like CloudFlare, with 'cf' css decoration. The specific error is "This website is using a security service to protect itself from online attacks.", which I am not sure I have seen before. There's a 4plebs note later saying "Automated crawling of this site is unnecessary as all of our content can be downloaded on the <a href="https://archive.org/search.php?query=subject%3A%224plebs%22">Internet Archive</a>", so I wonder if they have activated a very strong CF block. My understanding is the CF strong test requires you to copy the browser User-Agent to hydrus exactly, along with the cookies, and then you are good. If you have done this, I am afraid I have no more expertise. If you use Hydrus Companion, it can automate this process, including the User-Agent, which needs to match the browser that got the 'good' cookies exactly, down to all the little version numbers. Let me know what you discover!
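For anyone wanting to test that header/cookie pairing outside hydrus first, here is a rough sketch with Python's requests library. The User-Agent string and cookie value are placeholders you would copy from your own browser session that passed the CloudFlare check:

# Placeholder values - copy the real ones from the browser that passed the CF check.
import requests

headers = {"User-Agent": "Mozilla/5.0 (...) Firefox/94.0"}  # must match your browser exactly
cookies = {"cf_clearance": "PASTE_VALUE_FROM_BROWSER"}      # CF's clearance cookie
r = requests.get("https://archive.4plebs.org/", headers=headers, cookies=cookies)
print(r.status_code)  # a 403 means CF still rejects the pairing

If this returns 200 but hydrus still gets 403, the mismatch is in what hydrus is sending, not in the cookies themselves.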
>>16945 You are right that it doesn't have to be the way it is, with a bulky object being loaded to run, but basically a subscription is really just my 'downloader' object with some different timing tech and check rules, and all the decisions it makes about URLs run through a pipeline that is built into downloader code. Just little stuff like http://some_url usually being equivalent to https://some_url, but all those 'do we have this?' nuts and bolts and related tools run through the 'file log' object you see on any downloader. The 'import objects' stored in that log are richer than the simple file-URL maps stored in the SQLite database proper, but they can do more in the same way. They also keep hold of known file hashes, known urls, update timestamps, and tags as parsed at the time of download. A URL is only 60 characters or so, but an import object can be 1-2KB and has some extra CPU cost to load and save. This is my main hesitance about allowing many thousands of URLs per sub--I'd be radically expanding the size of subs just in read/write I/O, and also introducing more chances for large queues to break half way through and not save their work gracefully. Some things in subs are non-resumable if cancelled, mostly file search, so if a search gets cancelled 10 pages in to a 20 page search, the next time it runs it needs to start over from the beginning. In order to fix these issues, allow bigger subs, and keep things smooth, I would have to either write a new import object just for subs that only held URLs and convert the rich import object to the slim one once a URL was done with, or write a db-side subscription record and change the subscription work code to asynchronously test against that record to run a sub. Both of these would require a whole heap of work, so I haven't invested the time yet. For text descriptions, I would love to add note parsing to the downloader. I have part of this work done. One of these weeks when I have some free time I would like to finish it off.
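For the database-minded, a toy illustration of the trade-off being described. These are not the real hydrus classes, just the shape of the rich-vs-slim idea:

# Illustrative only - these are not the real hydrus classes. The point is the
# size gap: a sub that remembers bare URLs is far cheaper to load and save
# than one that keeps a rich import object per URL.
from dataclasses import dataclass, field

@dataclass
class RichImportObject:   # roughly 1-2KB each once serialised
    url: str
    status: str = 'unknown'
    known_hashes: list = field(default_factory=list)
    known_urls: list = field(default_factory=list)
    tags_at_download: list = field(default_factory=list)
    source_timestamp: float = 0.0

@dataclass
class SlimImportObject:   # ~60 bytes: just enough to say 'we have seen this'
    url: str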
>>16948 If you don't like the new 'favourites' star rating, hit up services->manage services and then remove the 'like/dislike service' it is listed under. You'll go back to how it was before.

>>16949 Yeah, our first version of the DA downloader (and I think the Hentai Foundry and tumblr ones) enthusiastically parsed all creator tags, but after some use, we came to the conclusion that most creators tag in formats that were A) unhelpful or B) badly formatted. Many of those sites don't support nicer booru-style tags, or even tags with spaces, so you'll see a lot of [ 'samus', 'aran' ]. And some artists are not familiar with what tags actually are, so they'll just write a sentence in the tag box, like [ 'i', 'am', 'drawing', 'good', 'today' ]. We saw enough of that sludge parsed that we generally rolled back the parsers to only get username and official title data. If you are feeling brave and you want to tinker with things, you can try altering the hydrus downloader to grab what you want. The help starts here: https://hydrusnetwork.github.io/hydrus/help/downloader_intro.html But I must be honest with you: learning that system can be a pain, and the tags as they shake out in reality are more harm than help in a hydrus client, so I am not sure it is worth the work.

>>16950 Thanks, that is a great idea! I'll have a think about this. It may need a bit more work since the manage tags thing is all hardcoded, and if I want to do this with multiple dialogs I should probably generalise the whole thing to a template.
>>16943 The OGG/OGV fix took some time to get going, but it works like a charm now! Thanks dev!
>>16946 >>16947 Just wanted to say thank you, to whoever asked that question and the answer. This was something I was wondering myself.
(309.57 KB 192x192 honking.gif)

>>16960 Glad to know fren.
Out of curiosity, is it possible to browse through all the tags present in the public tag repository? For instance, finding out how many instances of tag A exist, and perhaps how many files with tag A also have tag B.
I had trouble posting this last night. I hadn't realised how long the thread was--I will make a new general today for the release post! I had a good week. I managed to add support for embedded ICCs, which will improve some images' colours, and overhauled how files are deleted from the client and server so the system is smoother and more reliable. The release should be as normal tomorrow.
>>16962 Unfortunately the sheer number of tags--tens of millions--makes it difficult to 'browse' in UI, so your best shot for now is to access the SQLite database files manually and run your own statistical queries. I can help you with that if you like. Another simple option is just to run some basic searches on 'all known files'. This is advanced, so be careful of running some super CPU heavy searches, but turn on help->advanced mode, and then change your search page from 'my files' to 'all known files' and the tag domain from 'all known tags' to 'PTR'. Then do some searches; you'll see real tag counts and get ghost results of non-local files. It is borked developer-mode stuff mostly for mine or the janitors' use, but you can learn some interesting things.

New Thread >>6469
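As a closing footnote on the manual-queries suggestion above, here is a sketch of the kind of co-occurrence query meant. The table and column names are hypothetical; inspect the real schema first with .schema in the sqlite3 shell:

# Hypothetical schema - check the real one with '.schema' first. Assumes a
# mappings(file_id, tag_id) style table and tag ids looked up beforehand.
import sqlite3

con = sqlite3.connect("client.mappings.db")
A_ID, B_ID = 123, 456  # placeholder tag ids

(count_a,) = con.execute(
    "SELECT COUNT(*) FROM mappings WHERE tag_id = ?;", (A_ID,)).fetchone()

(count_ab,) = con.execute(
    "SELECT COUNT(*) FROM mappings m1 JOIN mappings m2 USING (file_id) "
    "WHERE m1.tag_id = ? AND m2.tag_id = ?;", (A_ID, B_ID)).fetchone()

print(count_a, "files have tag A;", count_ab, "of those also have tag B")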

