/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.

Hydrus Network General #6 Anonymous Board volunteer 12/21/2022 (Wed) 19:28:08 No. 18976
This is a thread for releases, bug reports, and other discussion for the hydrus network software. The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Users can choose to share tags through a public tag repository if they wish, or even set up their own just for themselves and friends. Everything is free and privacy is the first concern. Releases are available for Windows, Linux, and macOS, and it is now easy to run the program straight from source. I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST. Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ . Hydrus is a powerful and complicated program, and it is not for everyone. If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
Old thread >>>/hydrus/18264
https://www.youtube.com/watch?v=_3ZC45Q82pg

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v511/Hydrus.Network.511.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v511/Hydrus.Network.511.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v511/Hydrus.Network.511.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v511/Hydrus.Network.511.-.Linux.-.Executable.tar.gz

𝕸𝖊𝖗𝖗𝖞 𝕮𝖍𝖗𝖎𝖘𝖙𝖒𝖆𝖘! I had a good week fixing some things and adding some new options.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

First off, if you uploaded to v510 and had problems uploading to the PTR, sorry for the trouble! I missed something last week with an object update, and it impacted the network protocol. Thanks for the reports--I figured out a serverside patch and we updated the PTR.

I also fixed some logic in 'file import options' where it uses URLs and parsed hashes to predict if a file is 'already in db' or 'previously deleted' before a download. Some edge cases, particularly where many urls were mapped to one file, were not working quite right, and hashes tended to be overly dispositive. Ultimately, this thing should now be better at saving you bandwidth. The options here, which can be seen by advanced users, have updates too--check the changelog and tooltips for explanations.

If you have a large monitor and UI scale >100%, your thumbnails have looked bad for a while. Please check the new 'Thumbnail UI scale supersampling %' option under options->thumbnails. Set it to your monitor's UI scale, and your thumbnails should regen to be crisp. Let me know how it goes!

If you are into clever tag searching, check out the new 'unnamespaced input gives (any namespace) wildcard results' option under services->manage tag display and search. This is a first attempt to bring back the old 'unnamespaced search tags search namespaced variants' tech we used to have. It makes it so typing in an unnamespaced tag will give you the '*:subtag' variants quickly. These wildcards are labelled specially now, too.

next week

I am taking my Christmas vacation week! I'm going to spend some time with family and get lost in vidya. The week after, I'll see about sinking my teeth back into the serverside tag filter. v512 should be out January the 4th. Thanks everyone!
>>18978 Can't wait to check out the newest update! >>18972 I finally got the ourobooru archive uploaded. I put all pertinent info in the readme, but I'll restate it here: I have probably 99% of the files, and the sidecar files should be compatible with the booru mass uploader userscript (https://github.com/Seedmanc/Booru-mass-uploader), you just need to add //@include ourobooru.art to the top of the userscript. Forward slashes were changed to backslashes cause the new backend didn't seem to support them when I was testing tags. I wasn't able to get the userscript working myself, I don't think it likes Linux. I can't into torrenting so you'll have to deal with 16 zip downloads, sorry. You can download with aria2c and presumably jdownloader as well; you just need to right click the download button and copy the cdn link it gives. Total size: 16 GB https://www.klgrth.io/paste/h44ny
>>18960 Unfortunately, I've closed and opened Hydrus over 10 times since my goof. For context once that thread gets deleted, I'm trying to figure out if I can open a page that's similar to just keeping a tab with all of your queries. Something that lists all of your creator tags, possibly sorted by booru. I'd been using Hydrus just keeping my original search tab open for the longest time, and closed it accidentally. I'm trying to recreate that view.
>>18980 Your best bet for restoring a tab is Pages / Sessions.
(1.16 MB 2048x1284 christmas-jesus-birthday.jpg)

(2.76 MB 1648x1788 kDWIk89.gif)

>>18978 >𝕸𝖊𝖗𝖗𝖞 𝕮𝖍𝖗𝖎𝖘𝖙𝖒𝖆𝖘! Merry Christmas anon.
Hey, I'm a new user and mainly use this as a Danbooru mirror. Once I've added an image to hydrus and the tags change on Danbooru, is there an easy way of updating the tags on hydrus? I have the companion app and sending the image again doesn't seem to do anything. Thanks.
>>18974 >I think the real answer here is for me to write a proper 'this file was downloaded by x sub' metadata type and consult that That would be a great idea, but I'd slightly change that to "this file was touched by x sub", since it's more important to know every subscription that the file passed through (and would have downloaded it), rather than which subscription was the single 1 that actually did the downloading. >I'm also thinking about expanding the statistics I offer and generalising the search domain to any file search. Then you'd be able to look up how 'good' a creator: tag was, or system:was downloaded by x sub. Both of these features also sound really cool. I'd love to see something like that in the future!
Is it possible, or does anyone here manage their doujinshi entirely in Hydrus? How would you even begin to do that (page numbers etc.)?
>>18986 Assuming everything is properly tagged with creator, title, chapter and page tags, you can make it work in Hydrus using search collections. It's a bit jank though. If you have a large collection of manga/doujin you will likely be better off with something tailored specifically for that, at least for now. Manga and comics are something that gets asked about a lot, and I believe I've seen HydrusDev say they will eventually improve things in that regard, but who knows when that will be. Ofc if you have the storage space there's nothing stopping you from using both Hydrus and something else at the same time.
Is there a way to limit the amount of tags that come up for autocomplete so my shit doesn't get read locked on certain words? I swear I've seen this as an option but I can't find it anywhere.
>>18986 Pages are a pain, and most manga viewing programs I've tried want you to feed it something like a zip file anyways, so the viewing program will take care of pagination. If you just keep all your digital manga and doujins in zip files, then feed the zips to hydrus, managing manga and doujinshi ought to be simple, no?
>>18986 Use Lanraragi. Use applications specialized for the use case. Is there a way to run separate repositories? As in, I want a separate repository for my porn and my normal stuff in Hydrus. Or some other way to separate them completely.
(2.91 MB 1080x2400 man gets cock-eyed.png)

>>18990 I just tag everything with the appropriate tags for lewd subject matter and erotic appeal. Everything that's porn or close to it would be tagged as erotic and either tagged as lewd or explicit. When I'm regularly accessing my lewds, I keep a page open with the erotic tag already searched. I used to think simply tagging as "safe", "lewd", and "explicit" would be enough, but there's plenty of things that are explicit but not erotic to me, so I had to add three more for erotic, semi-erotic, and non-erotic. I can understand the want to separate your lewds. Image is an example of something very explicit but totally non-erotic. Open spoiler at your own discretion.
>>18990 Multiple file services. Just have one for non-porn and one for porn; that's what I do. Only problem is rating services carry over to both, so now I have a rudimentary nut counter looming ominously in the upper right corner when I'm looking at my normal images.
>>>/hydrus/18972 Thank you anon.
>>18988 I think you're thinking of the do-not-autocomplete character threshold under tags>manage tag display and search
>>18994 that's the one, thx
>>18976 Can you add an archive/delete-style filtering mechanism, but for tagging/rating? For example, I have a bunch of wallpapers I need to go through and I want to tag them as "good" vs "bad" without deleting, so I can go through the bad ones and upscale them using AI. Being able to use the true/false feature for tagging would be super useful (like seeing A + B tags for positive and C + D + F tags for negative).
(489.76 KB 1200x800 e72.jpg)

>>18978 >If you are into clever tag searching, check out the new 'unnamespaced input gives (any namespace) wildcard results' option under services->manage tag display and search. This is a first attempt to bring back the old 'unnamespaced search tags search namespaced variants' tech we used to have Thank you so much OP.
Is there a way to pause all network jobs without just blocking Hydrus entirely?
Can Hydrus display thumbnails for the first file in a 7z archive? I've got a whole bunch of large file sets from Eltonel that would take up a lot more space if I uncompressed them and I'll be putting them in Hydrus eventually. I've gotten quite used to them having thumbnails.
>>18998 Not being dismissive, genuine question, have you seen the pause menu under the network menu? I think that's what you're looking for.
>>19000 I haven't. I had a crash, and now I've got a lot of subscription results that are kind of in limbo, and I need to clear out some pages before I can handle that. If I start it up and look through the menus, I'd either end up with having to sift through a bunch of new downloads, or have to deal with a shitload of domain errors later.
(622.63 KB 1280x720 magical dubz.png)

Check 'em! Thanks to the power of Hydrus. I can pull up all my dubs images at lightning speed. It's fucken amazing.
Oftentimes when I put in a bad/wrong tag in the tag manager and I try to double click it to remove it, I accidentally get rid of the tag adjacent to it. I don't know of any way to undo this unless the tag was in the recently used list. I often don't notice what I accidentally removed and have to stare at the image for a bit to figure out what I'm missing. Is there any kind of undo function for this common mistake of mine?
Made a script to get the m3u8 link from nitter videos. It still needs to be fed to yt-dlp, but it's less of a pain in the ass.

import sys
import requests
from bs4 import BeautifulSoup

'''
single-url test:
base = 'https://nitter.lacontrevoie.fr'
URL = base+"/JumperJraws/status/1606001789127389186#m"
resp = requests.get(URL, cookies={'hlsPlayback':'on'})
print('<video' in resp.text)
s = BeautifulSoup(resp.text,'lxml')
t = s.find(id="m")
print(base+t.video['data-url'])
'''

def getBase(url):
    # return just the scheme+host of the url, so the relative 'data-url' can be joined to it
    if '://' in url:
        u = url.split('/', 4)[:3]
        return u[0] + '//' + u[2]
    else:
        return url.split('/', 2)[0]

vids = []
with open(sys.argv[1]) as links:
    for link in links:
        link = link.strip()
        if not link:
            continue
        base = getBase(link)
        # the 'hlsPlayback' cookie makes nitter embed the <video> element that carries the m3u8 url
        resp = requests.get(link, cookies={'hlsPlayback': 'on'})
        s = BeautifulSoup(resp.text, 'lxml')
        t = s.find(id="m")
        vids.append(base + t.video['data-url'])

with open(sys.argv[2], 'w+') as out:
    for v in vids:
        out.write(v + '\n')

>>18986 I do, it sucks though. Zipping isn't really an option for me because I like stumbling upon individual pages when searching. It'd be nice if there was an option to have specific searches have specific sorts by default, like "sort by page if the search is just a title:".
Why is it that when I enter my first tag the results are always practically instant, but adding the next tag makes the autocomplete, which I must wait on before it allows me to finish entering my tag, take far longer? Shouldn't it pull up results faster once the total pool of potential tags has been heavily narrowed by my first tag? It seems to work that way and speed up once the pool of files is narrowed even further. What's up with this search algorithm? Still on version 493.
(44.93 KB 600x600 quads.jpg)

(74.85 KB 683x1024 wat.jpg)

>>19006 Ahem. Those are quints.
When searching for potential duplicates every now and then, there's no need to recheck files that already have been found not to be duplicates at a certain distance, right? Does Hydrus store "not duplicate at X distance and below" relationships between files to speed up the duplicate search process upon future searches, or is this too much extra file relationship data that would bloat things more than it helps?
Damn I was gonna request the ability to add custom namespace colors but just checked before posting to see its already implemented. When did that happen? I seem to have completely missed it.
In the tag manager the autocomplete list includes parent rows, which can be hidden, but this setting doesn't stick like hiding parent rows for the file tag list does. The autocomplete window is very small and a bunch of things taking up double, or even triple or quadruple, rows makes looking at it sometimes very cumbersome. Has this been fixed since version 493?
>>19010 OP should be gone till next week. Perhaps you may fetch the info by reading the release notes: https://github.com/hydrusnetwork/hydrus/releases
>>19011 Ah. I should have checked here first. It's right there under version 497. I need to make time to update but every time I have free time I can't help but keep tagging.
(118.38 KB 1452x443 bestdev.png)

>>18978 > If you have a large monitor and UI scale >100%, your thumbnails have looked bad for a while. Please check the new 'Thumbnail UI scale supersampling %' option under options->thumbnails. Set it to your monitor's UI scale, and your thumbnails should regen to be crisp. Let me know how it goes! Damn, I really didn't expect the thumbnail thing I mentioned in the last thread to get done so quickly, the tooltip even tries to guess what the correct scale should be. Thank you so much, my waifus now look super crisp again! Hydrus dev really is best dev.
>>19003 If you press the gear and check "show deleted" it will show the tags that have been deleted from the image with an X beside the count. >>18990 Use the -d / --db_dir launch argument to point a client at a separate database. https://hydrusnetwork.github.io/hydrus/launch_arguments.html
(1.58 MB 1280x1280 p.png)

Hi, would it be possible for you to implement the option to export tags as a single comma separated line? And also the option to not include the namespace when exporting. The txt file should look like this: >1girl, solo, long hair, patchouli knowledge, some guy instead of having one tag per line: <1girl <solo <character:patchouli knowledge <creator:some guy This is because I am training hypernetworks with stable diffusion and I need text files to be in this specific format. Right now I am using a script to convert the text file to the correct format but this is very inconvenient. Implementing this feature directly into Hydrus would save me and other SD users a lot of trouble. Thank you!
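In the meantime, a tiny sketch of the kind of conversion script mentioned above, assuming a hydrus-style .txt sidecar with one tag per line (the sidecar path is passed on the command line; nothing here is hydrus-specific):

import sys

# read a .txt sidecar (one tag per line), drop the namespaces,
# and print the single comma-separated line that SD training scripts expect
with open(sys.argv[1], encoding='utf-8') as f:
    tags = [line.strip() for line in f if line.strip()]

converted = [tag.split(':', 1)[1] if ':' in tag else tag for tag in tags]
print(', '.join(converted))

So 'character:patchouli knowledge' becomes 'patchouli knowledge', and unnamespaced tags pass through untouched.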
Apologies if this was answered before, but are Newgrounds subscriptions working? If I try to add someone by username, it says it couldn't find it, specifically: >The parser found nothing in the document! And the pop-up in the bottom right asks to make sure there were no typos: >The query "..." for subscription "..." did not find any files on its first sync! Could the query text have a typo, like a missing underscore? Though if I copy the URL that's in the log that it ignored/found nothing at, it goes straight to the Newgrounds user's art page. There's a lot of "restricted content" on the page that requires a login normally, but I couldn't find any way to add a Newgrounds login to Hydrus the way you add a DeviantArt login. I'm currently on v504.
>>19016 I haven't had luck with it either; all but one subscription I added was seen as dead immediately.
Hey, I am back from my vacation and now catching up. I had a really good break and am looking forward to getting back into it in 2023. >>18983 Unfortunately not. The technical problem of 'what tags should this file have' is surprisingly difficult to solve efficiently. I wrote the Hydrus Tag Repository (i.e. PTR) for precisely this reason. Outside of the PTR, hydrus tends to get tags while it is doing other things (like downloading a file). There are some technical ways to do explicit file-tag lookups, especially if hydrus knows the URLs that a file is at, but there are difficulties there too, mostly in how it is done. For instance, if we say you have 300,000 files and you want to check them for new tags every month, that means 10k a day, or one every 8-9 seconds 24 hours a day. If we had 500 hydrus users doing that, it could really hamper a server. It would also be a waste, since I'd guess a high-nineties percent of those files wouldn't have any new tags since the last check anyway. So, I'm planning two things here: 1) Improve my tag lookup system. At the moment, there's some bullshit advanced users can do with a thing called 'file lookup scripts' in the manage tags dialog, but it is old and I hate it. I want to update it to use the same tech as my downloader. 2) Write a foolproof maintenance pipeline that allows people to track/search for files that haven't had a 'tag update' in x months/years and queue them up to check sites for new tags in a reasonably paced, sane way that doesn't accidentally do a DDOS or just waste a bunch of time, bandwidth, and CPU. I don't know exactly when this will happen. If you want to force just a handful of files to check for tags, check the 'force page fetch' options in manage tag import options. Never set these options as your tag import defaults, only ever set them on a separate download page with custom tag import options set to do this 'force' stuff. These options save a lot of time and bandwidth normally.
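For anyone double-checking the pacing arithmetic in that reply, it works out like this:

files = 300_000
lookups_per_day = files / 30                   # check everything once a month -> ~10,000 lookups a day
seconds_between = (24 * 60 * 60) / lookups_per_day
print(lookups_per_day, seconds_between)        # -> 10000.0, 8.64 seconds between lookups, around the clock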
>>18984 Yeah I agree on 'touched'. Since there is so much retroactive data we missed here, I think I'll see if I can add an option to fake the history by saying if the file has known url in the right type and a subtag matching a subscription's query text, we'll say it was touched by that sub. Might need more thought. >>18996 Yeah, I am wanting this myself more and more these days. I just want to quickly say 'left-click = do x, right-click = do y, go'. I had this a million years ago, but it sucked and eventually morphed into the custom shortcuts system. I really need to rework the current archive/delete into a more generalised system and 'action algebra' and then let you customise the actions. It would also be cool to have multi-action shortcuts, like 'add tag and move to next', which is not a dissimilar thing. I'll be thinking about this more. >>18999 I'd like to do this when I eventually do the 'cbr/cbz' support expansion. I can never promise really nice comic book support with bookmarks and stuff, but I think I can write a module to inspect basic archives and do nice thumbs and basic in-archive navigation. >>19002 (checked) I'm glad I can help! >>19003 This doesn't help much, but the tag manager when launched from thumbnails has apply/cancel, and is completely undoable via that cancel, while the one launched from the media viewer makes changes immediately and has no undo at all. I've got a job floating around to allow both in either place. The math behind these dialogs is very complicated, so adding nice undo is not simple, and hydrus has shit undo support generally, but I'd like to, one day, do a proper push on all this. I hate making mistakes like this too. (usually I hit 'remove' on some thumbs I actually wanted to do more with)
>>19019 >The math behind these dialogs is very complicated, so adding nice undo is not simple Anon already gave a good solution. >>19014 >If you press the gear and check "show deleted" it will show the tags that have been deleted from the image with an X beside the count. This seems to also show some potentially older deleted tags, but is plenty good enough for the purpose of easily restoring an accidental tag removal without any complicated math.
>>19005 Is this in the normal file search? Please forgive it for now--there is some stupid math that makes it so it gives accurate tag counts once files are loaded (i.e. it counts the tags that are currently in view), and to do that it needs to do some currently inefficient sibling matching with a big database hit. I've several ideas on how to make this work as fast as a normal lookup while still having good counts, and I regret how slow it can get. I'll make a job to remind myself to put some time into this. This is really ugly, but if there is a system:limit in the search, or 'searching immediately' is off, then it does the normal fast search with db-wide tag counts. >>19008 I won't get too technical, but the search doesn't quite work this way. I don't discover if files don't seem similar, but rather if they do. Also, the list of every pair that doesn't match at a certain distance would be too huge to store (I think it would be combinations, right, which is n*(n-1)/2 or so). I do know there is some caching in the system, and if you have done searches at smaller distance, it can skip some work in future searches at longer distance, and if you have done a search for a file at a longer distance, it knows it never needs to search that file at smaller, but it is not perfect at all. My main concern with the dupe search at the moment is some high latency access and not enough caching just when searching the big tree. It functions correctly, but I need to reshape it so I'm not hitting SQLite so much. Just raw code work I need to do. If you are interested, the main similar files search tree is a VP Tree: https://en.wikipedia.org/wiki/Vantage-point_tree It searches the bit-counted 'hamming distance' between 64-bit DCT-based 'perceptual hash' nodes. https://apiumhub.com/tech-blog-barcelona/introduction-perceptual-hashes-measuring-similarity/ >>19009 Ha ha, about ten years ago, I'm afraid. The widget I made to edit them has sucked until now though! >>19013 Glad it is working better. You prompted me, and another user mentioned something, and I realised it should actually be easy math. It was ok in the end! Another user told me after release last week a simple thing I can set to make your 200% thumbs look good even on a monitor at not 200% too, so we should have crisp thumbs on multi-monitor/UI-scale setups too.
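To illustrate the numbers in that reply (this is just an illustration of the idea, not hydrus's actual code; the hash values are made up):

a = 0xF0E1D2C3B4A59687            # a made-up 64-bit perceptual hash
b = a ^ 0b0101                    # a second hash that differs from it in two bits
print(bin(a ^ b).count('1'))      # hamming distance -> 2; smaller distance = more similar

n = 1_000_000
print(n * (n - 1) // 2)           # ~5 x 10^11 pairs for a million files, hence not storing 'not a dupe' pairs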
>>19012 >I need to make time to update but every time I have free time I can't help but keep tagging. based I don't remember doing this stuff in 497, so let me know if it is still broken or not doing the right thing in newer versions. I just corrected a bunch of taglist display last week, too. >>19015 Sure. I will expand my current sidecar system to let you set the separator character on .txt imports/exports. Default will be newline, but you could switch it to ',' or whatever works. >>19016 >>19017 Nah, sorry, I think I looked at it earlier this year and the whole site had changed. I should remove it from the hydrus defaults. I just looked on the repo, and there's this, but I have no idea if it works or what: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Newgrounds
>>19021 >This is really ugly, but if there is a system:limit in the search, or 'searching immediately' is off, then it does the normal fast search with db-wide tag counts. Thanks. I don't want to limit results heavily, so turning off "searching immediately" seems to be the best option for now. Speeds things up greatly. I should note I don't have an SSD. >>19022 >I don't remember doing this stuff in 497, so let me know if it is still broken or not doing the right thing in newer versions. I updated. It was faster and easier than expected. Ignored the spooky message about bitrot and jumped 18 versions at once. Tag manager autocomplete looks nice and clean now. Much faster to select what I need.
>>19022 I found the external NG downloader you linked iffy, but hydownloader works well with NG pages. In particular one artist has a lot of works which start getting 403s, and the external downloader couldn't seem to get past them. I think it's because hydownloader can resume from where it left off without rechecking the beginning files?
Did all the Nitter downloaders break for someone else or is it just me?
How do you pull up all the files you have with pending tags? I try a search with "exclude current tags" on, but when I try to search "system:archive" with it, the whole archive comes up.
I try to keep the images I post fresh and not post the same things over and over. For now, random sort seems good enough for this, but ideally I'd like to make sure I'm using my least used images both for freshness and dissemination. Would it be possible to add a "sort by preview views" option? When I grab images to post I'm often not opening them in anything but the preview window, so it doesn't count towards the regular media views count in the media viewer. Hydrus already keeps track of preview window views, but I can only sort by media viewer views.
>>19027 Alternatively and even more ideally, but this may require a lot more work, there could be sorting by export count.
is there a way to make the blue text in the duplicate filter decision box a lighter shade when you're using dark mode? This isn't very readable on my monitor.
I had a great week back at work. I cleared a variety of small jobs and spent some extra time on the duplicates system. Several duplicates search bugs are fixed, and you can now set a potential pairs search for 'one file is in search A, the other is in search B'. The release should be as normal tomorrow.
>>19030 >you can now set a potential pairs search for 'one file is in search A, the other is in search B'. If that means what I think it means, amazing! I've been waiting so long for that feature!
>>19030 also welcome back hope things were nice on your break
Hi. I think I asked this a year or two ago, but it looks like it's gotten lost. Is there a way you can see the URLs of files that you have deleted, so that you can download them once again? I think I accidentally deleted some files a while back. I've deleted about 2000 over the last month or two, so it shouldn't be a big deal to redownload if I can just find a record of the files I've deleted. Thanks!
https://www.youtube.com/watch?v=jkYg2eV3xBc

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v512/Hydrus.Network.512.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v512/Hydrus.Network.512.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v512/Hydrus.Network.512.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v512/Hydrus.Network.512.-.Linux.-.Executable.tar.gz

I had a great week back at work. There's a mix of different things, but particularly duplicate search improvements.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

duplicates

You can now set a search in the duplicates processing page for 'one file comes from search A, the other comes from search B'! It is now super easy to, say, find png vs jpeg pixel duplicates.

I also improved the overall accuracy and precision of the duplicate search logic. When I looked closer at this code, I wasn't always getting pixel duplicates or search distance correct when actually fetching results to show you, or I was selecting bad representatives of larger groups in the final step. This was particularly true in 'show some random potential pairs', which was showing too many files, often outside the search scope. There's some other bells and whistles that are subtle but should make the results feel more 'human'--for instance, in 'show some random potential pairs', if you are set to 'at least one file matches the search', then the first of the random group will definitely match the search, and if you are set to 'both files match different searches', then the first file matches the first and the others match the second.

I have made several changes here, sometimes significantly increasing the CPU load. In testing, things generally seemed to be running at ok speed, but let me know if your IRL complicated situation gets laggy. Also let me know if you discover any miscounts.

misc

When you select thumbnails or right-click on them, the labels in the menu and status bar are a bit nicer--it can now say 'x images' or 'x mp4s' instead of the flat 'x files' in more places, and any time you have a selection of files with duration, it will say the total duration.

v511's supersampled thumbnails now look good on any UI scale. If you have a multi-UI-scale setup, on multiple monitors or not, you should now set the new setting (under options->thumbnails) to your maximum UI scale and hydrus should look good everywhere!

If you want to import or export a .txt sidecar with a separator character other than newline (e.g. comma, for CSV format), you can now do this! You can set any separator characters you like.

The Client API can now refresh pages.

next week

Next week will be a medium-sized job week. I want to spend extra time adding the duplicates system to the Client API.
>Hide parent and sibling decorators Interesting.
>>>/hydrus/18949 >>>/hydrus/18973 I decided to try moving to source instead of updating with the release builds. Still no IME. I'm ignorant about the PySide and PyQt stuff, so I left it default, and the Help>About Qt window says it's using Qt6.
>>19036 Oh, hold on. I think I misunderstood. The general "about" says PySide6, so I guess the Qt6 part (rather than PyQt6) isn't what I need to be looking for. Like I said, ignorant. I guess I'll need to give the PyQt6 approach a try next.
Is there a list of banned PTR tags? My account got banned (third time!) because of tags someone else made that got merged over while processing my giant duplicates backlog. Specifically the tag was "hatate:not found". I've added it to my duplicates exclusions as well as "hatate:found" because I assume it exists and would be banned as well. More work for the janitors when I send about a million tags at once instead of 20k I guess. But a list of banned PTR tags to go ahead and add to my duplicates exclusions would go a long way in making me not rage quit the PTR.
(15.59 KB 370x252 153817.png)

How to fix this?
>>19038 I would suggest you only merge local tags when processing duplicates
>>19040 I have almost no local tags. Virtually everything I tag goes to the PTR. Whether I tagged by hand, duplicate processing, whatever, it all goes to the PTR. If I merged only local tags all I'd end up with is thousands of images with no tags while deleting the files that do have tags. I've submitted literally millions of tags to the PTR. It's just that occasionally some asshat in the past tagged some garbage and it got carried over during duplicate processing. Doesn't help that the janitors have no chill. It's clearly possible for the janitors to send messages as a client update response, cause that's how I know what the problem tag is. They could just send a warning of "hey, this tag is trash, don't submit it" before sending a ban.
>>19039 explain what you want fixed >>19033 if you change the search area from "my files" to "all known files" it will show results for deleted files as well.
>>19042 Thanks! That worked!
How do you all feel about adding the series a character is from as a parent to their character tag? I mean, it makes perfect sense since the character should virtually always be associated with their source material (only exception I can readily think of would be Krystal from Star Fox, since there is also her Dinosaur Planet incarnation). There's something about the practice in general I'm not a real fan of, but I can really only point to obscure fringe cases like the above as a real argument against it. I've just noticed it's done very inconsistently around the ptr, but of course that applies to a ton of shit.
>>19044 I do it all the time for my personal collection and do not interact with the PTR. If two incarnations of a character exist attached to different IPs, I make two different character tags for disambiguation since these incarnations are generally not exactly the same. I've yet to encounter anything where they are.
>>19045 Makes sense, the multiple character tag is probably the best way to manage it really
(142.48 KB 1002x737 Screenshot_20230105_192134.png)

(375.31 KB 1349x695 aedd3d70.png)

I just discovered that Hydrus is downloading the text of Twitter's videos as a NOTE. Sweet!!! Thanks OP.
since it seems like you're working on duplicate stuff now, do you have a rough idea of when you'll be getting around to adding a way to set rules to auto resolve pixel-for-pixel dupes? Since the only things that matter when filtering pixel-for-pixel dupes is information that hydrus is aware of (like filetype, known url, tags, etc.) it'd be great to just set rules for different cases so that hydrus can automatically resolve them. Then that entire class of duplicates will just never need to be seen again. I run into a lot of pixel-for-pixel duplicates, so having custom auto-resolve rules would make going through the filter much quicker and less error prone.
Is there any way to "collect by" alternate images? Like if I select a bunch of thumbnails and go "file relationships -> set all selected as alternates", can I then choose to display them all as a single thumbnail with an image count, like with collections?
PTR question, why do some parents not show in the manage tag dialog but do show when you go to manage tag parent dialog?
>>19047 I'd love it if the same could happen with pixiv posts, especially since artists there have a tendency of nuking their accounts. Hydrus currently saves the titles as a tag and the tags as tags (obviously), but the descriptions are lost. Often, the descriptions are important to understanding the art too, at least with the artists I follow.
>>19051 >Often, the descriptions are important to understanding the art too I know Kyo-yan gives lots of context in the descriptions. Now and then there'll also be a transcript of the Japanese text within the image, which is much easier to copypaste into a translator.
>>19049 I wish, that would solve so many problems
>>19024 Thanks. Yeah, hydownloader is great. I think clever external solutions are going to be the way forward for all sorts of headaches. I'll be updating the Client API so these tools can do more and more in future. >>19025 I'm getting 403s (forbidden), even on direct image links, so maybe they started blocking hydrus clients, or there's a CDN like cloudflare doing similar? Not sure. It might not work for your situation, but the twitter downloader works good these days. >>19026 Add a tag search predicate. For your purposes, 'system:number of tags (has tags)'. Should be super fast on just the pending tag domain. >>19027 >>19028 Sure, I can do the preview views stuff. I'm glad someone has use for that number! I don't track export count, but I will at some point be making a new rating that is a 'click to increment' number count. That might work for your purposes.
>>19029 If you feel brave, make a new qss file in install_dir/static/qss (duplicate your current darkmode style .qss) and then edit the 'QLabel#HydrusIndeterminate' class's colour. Obviously then set your new qss in options->style. >>19032 Yeah, thanks. I had a great time off and was able to reset some IRL things. Doing good now, looking forward to 2023. >>19033 >>19042 >>19043 BTW, if you turn on help->advanced mode, the file domain selector here lets you select actual 'deleted from xxx' file domains from the 'multiple locations' menu entry. Hydrus now supports specific deleted file search with correct tag counts and stuff. >>19036 >>19037 Check the running from source page if you need help. I think to get PyQt6 you'll have to edit your venv manually since I don't have that in the setup script, but it talks about it all there. Just get into the venv in your terminal and uninstall pyside6 and install pyqt6, you should be good. Let me know how it goes! >>19038 >>19041 Sorry for the trouble. The 'tag filter' button under services->review services is going to get more in it in future. That'll eventually be the pseudo-official blacklist since it is baked into the client now--you can't upload anything in there to the server now. I need to do more work to make that filter easier for the janitors to work with, but I want this stuff fixed and easier for everyone to work with. >>19039 If you want to set different parents or siblings, check out tags->manage where tag siblings and parents apply. If you prefer '1girl', then set '1 female' -> '1girl' on your 'my tags' in manage siblings and then set your 'my tags' siblings to apply to the PTR's tags with greater precedence. Your local preferences will then override the PTR's siblings. If you are worried about the weird grey autocomplete input, try hitting 'insert' key. That's an old IME-compatibility shortcut/mode.
>>19054 >Sure, I can do the preview views stuff. I'm glad someone has use for that number! Fucking tops, mate.
>>19044 Yeah I think character->series is the best parent case. harry potter is always part of harry potter, mercy is always part of overwatch, etc... There's some edge cases I'm sure, and some situations where you should have an umbrella like 'dc comics' instead of 'series:wonderwoman' or whatever when characters launched in multiple places at once, but most situations are clear. Main use for this is it just saves time tagging and searching, which is the best type of parent imo! >>19047 >>19051 >>19052 Yeah, I am very slowly integrating this tech. I'm not really happy with 'note import options', but it is fine for most cases, and we got notes merging in dupes now. I'll see if I can figure out pixiv note parsing! >>19048 Now we've got two-search tech working, we are an important step closer. Now I have to write the maintenance system that will run this and all the UI to work with it. I won't make the mistake of predicting when it will happen, but it is super on my mind now. One thing I want to do beforehand is rework the fucking awful hardcoded scoring system in options->duplicates. That needs to be dynamic and user-customisable lego algebra stuff, and then I can plug that nice system into the new maintenance system and start off on the right foot. >>19049 >>19053 Not yet, but some day it will happen. Thumbnails don't 'know' their dupes yet, it all has to be database fetched, but I'll write the load and update pipeline and we should be able to get grouping and updating, and some sorting too. >>19050 Might be because you have complicated setup in tags->manage where siblings/parents apply, or it might be because you aren't fully synced under tags->sibling/parent sync. If you only apply PTR tags and are fully synced, let me know!
>>19051 Here you go.
What is the threshold for boning? It's not really relevant to me, since I only import individual folders no larger than a few hundred files at most as I chip away at the beast. I will always be perceived as unboned, even though I estimate I'll be tagging heavily for another 6-9 months.
Are there plans to one day add CAPS sensitivity purely for the "filename:" namespace tags? I know caps sensitivity is pretty much useless everywhere else except for in filenames, so would it be possible to make it only apply to a specific namespace?
>>11192 Had the exact same issue, but it's back for me now.
>>19059 I didn't know you could be unboned.
>>19062 >>19059 What does being "boned" or "unboned" even mean in this context?
>>19063 Having a fuckhuge amount of unarchived files in need of tagging then archiving.
>>19058 Thanks this really helps! Except that there seems to be no way to import notes for files that are already in db. I tested it on new files and it works, but even with "force page fetch" turned on, it doesn't fetch notes. Is this what hydrus dev meant in >>19057 when he said that he's not happy with note import options? In any case, adding an option to force grabbing of notes for files that are already in db would be great, but for now I guess I just gotta wait before I can fill out the note backlog.
(226.09 KB 1915x1948 Untitled.png)

>>19065 >Except that there seems to be no way to import notes for files that are already in db. I tested it on new files and it works, but even with "force page fetch" turned on, it doesn't fetch notes. Works on my machine. I don't have force fetch on. I'm on hydrus 505. Are you just pressing "try again" on a url already in a downloader page file log, or are you properly redownloading? Here's a pic showing what I mean.
>>19066 I think I figured out the issue. The files I was using to test the note grabbing were files that I previously manually added the pixiv descriptions to. It looks like if a note that's already present on the file has identical text to the one that's about to be added, then even if the notes have different names, Hydrus doesn't add the new note. I removed 1 character from the manual description I added, then ran the download again and this time it added the note, so I think that's what's happening. That's actually a pretty neat feature, but I wish there was a toggle for it because I didn't know about that so it was causing confusion.
>>19055 >If you want to set different parents or siblings... Sorry if I wasn't being clear. The issue is I type "1 female" and press enter and it adds "1girl", when 1) it knows that is not a good tag, 2) I typed exactly the tag I wanted, and 3) "1girl" doesn't even match past the first letter. All just because "1girl" has a higher tag count (because I import a lot from rule34.xxx).
(4.60 KB 512x108 kemono description.png)

>>19058 >>19066 By the way, thanks to you making that parser, I was able to use it as a reference and I managed to make a note parser for kemono as well.
Is there a way to have your tag services inherit the sibling / parent relationships from the PTR?
I had a good week doing one thing. The Client API now supports the main duplicates commands, fetching current file relationships and the main filter workflows, and then setting basic file relationships back again. This is a complicated feature for users who play around with the latest Client API tech, so tomorrow's release is for those advanced users only! The release should be as normal tomorrow.
>>19070 tags > manage where tag siblings and parents apply
>>19069 This parser actually has a bug. If there's any "downloads" urls in the post, those are actually considered part of the content and the description, and those links get wrapped in html (still as a string though), and that's what the note parser gets back. I don't know how to fix it though, because I don't yet know how to make a parser that parses json, then grabs a string and parses that string as html. I tried to use a compound formula, but when I add the html parser, it parses the original page again instead of the html in the json that the json formula ended up with.
>>19073 I also just realized that parsing the html in the json would have to be conditional, because if I only grab the text inside the "p" tag, then whenever there's no download urls as part of the content and the "content" is just a string instead of html, the parser won't grab anything, when I want it to just return the string in that case. I don't know how to do conditional checks like that either. Well, at least the way the parser is now, it still grabs everything you want. It just also grabs a bunch of other stuff you don't want and looks kinda messy. I gotta figure this out some time.
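For reference, the logic being described is only a few lines in plain Python outside hydrus's parser UI; the 'content' value here is invented, not the real kemono API shape, so treat it as a sketch of just the conditional html step:

from bs4 import BeautifulSoup

# pretend this dict came from json.loads() on the post API response
post = {
    'title': 'example post',
    'content': '<p>Some description text.</p><a href="https://example.com/file.zip">download</a>',
}

content = post['content']
# the conditional step: only strip markup if the string actually looks like html
if '<' in content and '>' in content:
    content = BeautifulSoup(content, 'lxml').get_text(separator='\n').strip()

print(content)   # plain text; link labels like 'download' survive as text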
Is there a way to check an import folder remotely?
https://www.youtube.com/watch?v=D5T9RByz9i8

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v513/Hydrus.Network.513.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v513/Hydrus.Network.513.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v513/Hydrus.Network.513.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v513/Hydrus.Network.513.-.Linux.-.Executable.tar.gz

I had a good week focusing on adding the duplicates system to the Client API. There is nothing else, so today's release is only for advanced users.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

duplicates with Client API

The main duplicate commands are now available. Obviously check the changelog and Client API help out for full explanation, but the general gist is 1) There is a new 'manage file relationships' permission to gain access, and 2) You can fetch and create relationships, but you can't delete them yet. I will add these commands in the future.

https://hydrusnetwork.github.io/hydrus/developer_api.html#managing_file_relationships
https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced

There's some new enums to grapple with, and you can be dealing with two searches at once when fetching potential pairs. Please have a read, try things out, and let me know where it goes wrong. I expose the ugly technical side of things, so if it isn't clear what a king is, I may need to brush up the duplicates help etc... I expect to keep working on this in the near future, including adding more search tech as I do related 'multiple local file services' expansions to the Client API, and in the more distant future as I expand alternates. While I was in here this week, I realised I could brush up and clarify some of the edge-case logic too, as I wasn't sure myself on every situation. Again, let me know what seems confusing or wrong for you, and we'll iterate.

next week

A mix of small jobs, and it would be nice to add some remove/dissolve commands for this new duplicates Client API.

On a side note, I migrated to a new dev machine this week, upgrading from a Minis Forum X35G to a TH80. I'm pretty sure the old machine was a 2-core CPU while the new is 8, ha ha ha, so my working session feels super smooth now. I recommend these little machines if you need a decent office computer (the old one also drove two 4k@60Hz monitors no problem), or a younger family member's first big boy computer.
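If you just want to poke at the new calls from a script, a minimal sketch might look like the below. The endpoint and parameter names are my reading of the 'managing file relationships' docs linked above, so double-check them there; the access key and hash are placeholders, and 45869 is the default Client API port.

import requests

API = 'http://127.0.0.1:45869'   # default Client API address
HEADERS = {'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE'}   # key needs the new 'manage file relationships' permission

# fetch the recorded relationships (king, potentials, etc.) for one file
resp = requests.get(
    API + '/manage_file_relationships/get_file_relationships',
    headers=HEADERS,
    params={'hash': '0123456789abcdef' * 4},   # placeholder sha256
)
print(resp.json())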
FUCK NITTER
does someone have a parser for zzzchan threads or for jschan image boards in general?
>>19025 >>19054 >>19077 I use the nitter.1d4.us instance and that one still works fine for me. the reference instance was always too slow anyway.
(537.84 KB 1920x911 123.png)

Any idea why these red trashed files show up in this page? I permanently deleted all files in trash and reloaded hydrus, so it doesn't make sense that these still show up. Interestingly, opening a new page and doing this exact query doesn't return these red trashed files. This is a bit of a problem for me since I have a lot of pages with these red trashed files, and eventually my pages seem to end up with these in their results no matter what even after creating new pages.
>>19080 Those red files are non-local files, which are files that are no longer in your client at all. Emptying out the trash is exactly why the red files are all there. They're deleted, so hydrus can't actually show the files themselves anymore. They still show up in the pages you had open because they were still part of that page when you deleted the files from the trash, so hydrus has to show something there.
(354.21 KB 1528x1111 robotnik's promotion.png)

Hey hydev, would it be possible to institute a system:has transparency predicate or something similar? I don't know if Hydrus "knows" that about files, but I think it would be useful.
>>19083 Sure that would be too useful. I tag all my files that have visible transparency, but if Hydrus went and grabbed everything with a transparency layer, you'd probably get a ton of things that don't actually use the transparency layer in the results.
>>19084 >Sure that would be too useful. Not sure*
Has anyone made a better DeviantArt post parser? I noticed that the DeviantArt backend json actually doesn't provide the best resolution the image can have. For example, take this image: https://www.deviantart.com/jaykuma/art/Soft-Service-905910423 If you go to the backend json: https://backend.deviantart.com/oembed?format=json&url=https://www.deviantart.com/jaykuma/art/Soft-Service-905910423 the best size is 997x802 (so-called "preview" size). However, I noticed that in the html for the normal image page (https://www.deviantart.com/jaykuma/art/Soft-Service-905910423), there's some json hidden at the bottom in a <script> tag. There's a part that says "window.__INITIAL_STATE__ = JSON.parse(" and then a big bunch of json. If you look in this json, you'll find there's a 1600x1287 size of the image (so-called "fullview" size). Has anyone made a parser for this yet? If not, I will. (I'm assuming there's no way to get the actual full 4260x3426 image.)
>>19086 I decided to make it anyway. I found out something interesting about deviantart resizes: turns out they don't have to be jpegs! Even though deviantart always provides these urls as jpgs, you can simply change "jpg" to "png", and get a resize without shitty jpeg compression! So basically, most files I've downloaded from deviantart have jpeg compression for no reason and I need to force redownload them all. Great. Anyway, this parser will go through the json at the end of the deviantart file page html. Not all images have their original size available (particularly NSFW images), so it checks multiple possibilities. The highest priority is the original size, then the "fullview" resize, then the "preview" resize. I haven't seen any deviantart posts that don't at least have the "preview" resize, so I didn't make any checks for smaller resizes. If it's downloading a "fullview"/"preview" resize, it will change the "jpg" at the end to a "png", so your resizes at least won't be covered in jpeg compression. I considered adding a note parser to get the description of the post, unfortunately deviantart descriptions sometimes have important html in them. If the parser gets the description from the json, it's full of html tags and is difficult to read. If the parser gets the description directly from the html of the page, links will be shortened with ellipses and icons that lead to other user pages will be absent entirely. So I didn't bother. Maybe it should just get both?
>>19087 Fixed the description
>>19088 Oh yeah, and a warning to those who attempt to investigate the parser: The json hidden in the html is inside a script tag that is quite huge. Upwards of 200000 characters long. When I first tried to open the "string processing" menu in the subsidiary page parser that creates the json from the html, hydrus ate all my ram and bloated my pagefile and crashed. In order to test the string processing, I had to first edit the html and remove some of the useless parts from that script tag. So be careful if you fetch test data.
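For anyone wanting to prototype that extraction step outside hydrus first, a rough sketch; the window.__INITIAL_STATE__ name and page layout are whatever deviantart happens to be serving right now and could change at any time, and the url and user-agent are just examples:

import json
import re
import requests

url = 'https://www.deviantart.com/jaykuma/art/Soft-Service-905910423'
html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text

# the page embeds a JS string literal of escaped JSON: window.__INITIAL_STATE__ = JSON.parse("...")
m = re.search(r'window\.__INITIAL_STATE__\s*=\s*JSON\.parse\("((?:[^"\\]|\\.)*)"\)', html)
if m:
    state = json.loads('"' + m.group(1) + '"')   # decode the JS string literal first...
    state = json.loads(state)                    # ...then parse the JSON inside it
    print(list(state.keys()))                    # dig in here for the 'fullview'/'preview' media info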
A script to periodically search the catalogs of various chans for a thread title and add matches to a watcher page? Is such a thing even possible?
How do anons tagging personal collections handle male/female tags? I've already tagged a ton of things as "body:" for large body traits like overall shape, extra limbs, tails, et cetera, and more specific namespaces for specific body traits like "hair:" for hair color, style, length and "skin:" for skin color and tans. It would be useful to have these separated by male or female, like "female:long hair", but that would mean sacrificing a bunch of namespaces and making a lot of longer tags. I don't tag specific traits on males most of the time too, so duplicating these tags by sex, killing a bunch of namespaces, and lengthening most of the tags seems like it's not worth it. Thinking about when I eventually do mass downloading of my favorites from the panda, would it be possible to have a parser grab the tags attached to the galleries? Does it require some kind of dummy account to get beyond the account block for panda only galleries, or can I make it use mine somehow by logging in with a regular browser?
>>19088 Update. Realized one of the string matching steps was probably not strict enough.
>>19091 >How do anons tagging personal collections handle male/female tags? You already said it. >I don't tag specific traits on males most of the time too, so duplicating these tags by sex, killing a bunch of namespaces, and lengthening most of the tags seems like it's not worth it. Tags are for searching. When am I ever going to need to search "male:long hair"? 99% of my images are female-focused.
(254.39 KB 864x594 1470251999460.jpg)

>Shift select multiple images >Uh-oh, something without the tag you're about to apply interrupts the chain >Ctrl select the first image after it >Continuing shift selecting >The first selection isn't lost >You can shift select forwards and backwards from the new starting point without affecting the first selection Intuitive as fuck. Amazing.
>>19058 Thanks, this is great--I'll fold this into the defaults! >>19065 This sounds like a bug, thank you for the report. I'll check out what's going on and see what I can do. Retroactive fetch of metadata is a longer topic, but this year or next I want to push on it. It'll be a bunch of work, but mostly behind the scenes maintenance and pipeline stuff, and then the UI to drive it. >>19059 Here if you are interested. https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/gui/ClientGUIScrolledPanelsReview.py#L2889 Looks like <1% num_inbox or so. Well done for doing it! >>19060 No, not for tags. I honestly think we should have a different metadata type, somewhere between tags and notes, for mid-length soft-identifiers. Filenames are obviously one, and 'title:' tags another. This has been on my mind for years, and I've talked about it a bunch with people, but I don't have firm thoughts yet. Case sensitivity would be appropriate. I'd like to spend proper time on this once notes is properly shaken out. Depending on how you think about it, URLs could be in here too.
>>19066 >>19067 Ah just read this. No matter if the particular logic is working or not here (and it sounds like it isn't), this is obviously unintuitive and I should improve it. This stuff is governed by 'note import options', and it just feels bad to use. I thought I had 'good' defaults set up, but please have a look at yours and see what it is set to, and let me know if you have any good ideas on how it should work. >>19068 Thanks, I missed it. Pretty sure this is a bug, since it is supposed to put exactly what you typed, if there is a direct match, right at the top. I'll look at the logic, it must be being confused by the sibling matches. >>19073 >>19074 Doing JSON-HTML conversions is pretty horrible right now, but I was talking with a guy a couple months ago about this and I do have it in mind to make some new formulae types here. It should be easier to grab html from inside JSON etc... And as I push on better multiline integration in the parsing system, I want nicer formatting options and things here too. Can't promise when it will happen, but since you are in it, have a play around and let me know what simplest tools you'd like most of all. >>19075 Not yet, but that sounds like a good thing for the Client API. I can add a command that you just ping with import folder names to boot them up.
>>19080 >>19081
>opening a new page and doing this exact query doesn't return these red trashed files
Check what the 'file domain' of that search page is. In the dropdown that appears when you type tags, there should be two buttons for file and tag domain. Is the 'file' side set to something like 'all known files'? This would explain what you are seeing here, especially as new pages, which will start on 'my files' or similar, are working ok. 'all known files' is a special search domain with complex rules. It basically searches the set tag domain and so can return 'remote' (including deleted) files. 'all local files' and 'all my files' are similarly clever domains, but they'll only give you files that are on your computer. Some of this stuff you need help->advanced mode turned on to see--if your search page got set to some whack domain outside of advanced mode, or you aren't sure how this all got set, let me know. Also let me know if the search domain is 'my files' and you are still seeing weird remote thumbs. If you are a long time user, I think there are some legacy downloader pages set with weird file domains, so if you open child search pages from them, they can inherit this, but it shouldn't happen to anything opened in the last couple years. >>19083 >>19084 >>19085 I was actually doing some of this work recently when I did some thumbnails stuff. I am interested in whether a file has transparency in order to determine if it should have a jpeg (small) or png (alpha-ok) thumbnail file. I have some clever scanners that actually look for very non-interesting transparency data in the images I load (discarding completely opaque alpha channels, for instance), and it takes some CPU to do that, so remembering that data would be useful in a couple of cases. We could also use it for dupe filter comparison stuff. I'll write this down and think about it. I've been casually thinking about animated alpha channels too. gifs obviously can have them, and I'm pretty sure apngs and animated webp, when it becomes technically feasible to render those, can do it too. >>19086 >>19087 >>19088 >>19092 Well done! I'll check this out and see if I can neatly fold your update into the defaults. DA has always been a bit weird about letting you see which is the highest quality image, with all the weird nsfw modes and 'don't have a download button' options they offer artists, so if this can pull a reliable 'full-size', that's great.
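To illustrate the transparency-scanning idea above, here's a rough sketch of the basic check with PIL/numpy--this isn't my actual routine, just the core 'is this alpha channel boring?' idea ('some_image.png' is a placeholder):

from PIL import Image
import numpy as np

def has_interesting_alpha(path):
    # A completely opaque alpha channel (all 255) carries no real transparency,
    # so it isn't worth remembering. RGBA/LA only; palette transparency is ignored here.
    img = Image.open(path)
    if img.mode not in ('RGBA', 'LA'):
        return False
    alpha = np.array(img.getchannel('A'))
    return bool((alpha < 255).any())

print(has_interesting_alpha('some_image.png'))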
>>19090 I know some guys who do this with their own custom scripts and the Client API, but I don't know of any public releases, I'm afraid! Totally possible though. You make a script that hits up the imageboard catalog JSON and then scans for subjects you care about, and then send the URL to hydrus via the Client API. Like boorus, chan-style imageboard engines all have slightly different API formats, but the 4chan one is typical, if you want to look into it yourself: (I think this is the official docs) https://github.com/4chan/4chan-API https://github.com/4chan/4chan-API/blob/master/pages/Catalog.md >>19091 >>19093 Everything is up to you of course, but in any 'how to tag' conversation, I like to say my two rules: 1) Don't try to be perfect, or you'll go crazy. 2) Only tag what you search for. If you aren't searching for extra limbs, don't tag it. If you are searching for 'monster girl' but not 'extra limbs', then use 'monster girl'. If you try to describe an image with tags, you'll go crazy at the work and your slow pace, and you'll never use the work anyway. Tags are for searching. That said, I started hydrus using 'gender:male', and that seems to have stuck on the PTR. I never expected the gender definitional 'controversy', ha ha, we'd see in modern years, but it's fine for most work. 'gender:intersex' is a decent 'other' for more complicated situations. I think exhentai/panda uses 'male:long hair' namespace style, right? I'm sorry to say I hate it, and I never built hydrus's logic around this idea, but again you can do what you prefer and/or are used to. In my mind, the subtag is an instance of the namespace's class. 'metroid' is a 'series'. 'samus aran' is a 'character'. 'long hair' or 'erection' or 'sub' is not a 'male', 'tail' is not a 'body' (might be a 'body part' though, so not so bad). For the 'male:' 'female:' stuff you see, I'd suggest using 'female dom' and then parenting that with 'gender:female' and 'dominant'. There's no technical problem spamming lots of tags in hydrus with parents, and there's no sin in unnamespaced tags. But again, tag what you search, not to describe. If you care about slime girls, start off just tagging 'slime girl' with a shortcut or something, and see how you feel after 3,000 files.
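Going back to the imageboard catalog idea at the top of this post, here is a very rough sketch of the loop, assuming the standard 4chan catalog endpoint and the Client API's /add_urls/add_url call--the access key, board, and keyword are placeholders, and you'd want your own error handling and politeness delays:

import requests

HYDRUS_API = 'http://127.0.0.1:45869'
ACCESS_KEY = 'paste your Client API access key here'  # placeholder
BOARD = 'v'         # placeholder board
KEYWORD = 'hydrus'  # placeholder subject/comment keyword

# the catalog is a list of pages, each with a 'threads' list
catalog = requests.get(f'https://a.4cdn.org/{BOARD}/catalog.json').json()
for page in catalog:
    for thread in page['threads']:
        text = (thread.get('sub') or '') + ' ' + (thread.get('com') or '')
        if KEYWORD.lower() in text.lower():
            thread_url = f'https://boards.4chan.org/{BOARD}/thread/{thread["no"]}'
            # hand the thread URL over to hydrus to process
            requests.post(f'{HYDRUS_API}/add_urls/add_url',
                          json={'url': thread_url},
                          headers={'Hydrus-Client-API-Access-Key': ACCESS_KEY})

You'd also want to remember which threads you have already sent, so you aren't re-queueing them on every poll.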
>>19095
>Well done for doing it!
As I said, I only do small imports at a time. My actual backlog is still over 14,000 files. I am totally boned. But I am now more than halfway to unboning after some mass zip archival of project files.
>>19098
>Don't try to be perfect, or you'll go crazy.
I'm already crazy. I'm just looking for ways to be more efficiently and effectively crazy. I also really like tagging metadata. Is there a way to see the average number of tags per file? How about looking at which tags I have the most and least of? After I'm done tagging everything, being able to find all the tags that I ended up only having 1-5 of would really help me eliminate a bunch of bad tags I surely made. Additionally, looking up tags by number of uses should include filtering options, so things like "filename:" don't fill up the results for low tag usage. Next to filenames, the other low-usage tags I'd need to filter would be "text:". I often tag key words and phrases present in an image since they're often what I think of when trying to find it, and I've found things this way many times already. I imagine some of what I tag as "text:" really ought to go into notes now that they're more robust, such as large amounts of text, or translated text that is in a foreign language in the image. But I digress. Tag lookup by number of usages would be useful if not already implemented.
>If you aren't searching for extra limbs, don't tag it.
I absolutely am. I love multiple sets of arms. Ranni the Witch and Faputa are a boon to me, as this was much rarer before they came along.
>I never expected the gender definitional 'controversy', ha ha, we'd see in modern years, but it's fine for most work. 'gender:intersex' is a decent 'other' for more complicated situations.
I just use "Sex:male, female, futanari, shemale". That covers pretty much all use cases for me. I have yet to even use shemale, but I'm sure it will come up eventually.
>I think exhentai/panda uses 'male:long hair' namespace style, right? I'm sorry to say I hate it
This is why I'd want any parser for the panda to convert the tags to something more palatable, and further, to put everything under the male/female namespaces under a "panda:" namespace, since they have their own tagging rules and how those tags apply to a set of images is different from how they apply to individual images/files. This would prevent issues with overlap with existing tags that are applied based on different logic.
>I'd suggest using 'female dom'
That's quickly becoming my solution in specific cases, to separate things like "body:fit male" from "body:fit female", as I started out with just a body:fit tag. However, for less specific cases, the appearance of multiple characters of differing sexes that both warrant tagging of their traits, in my opinion, causes problems. I'm just rolling with it and hoping for the best for now. Hasn't interfered with searching yet.
>>19099 >Is there a way to see what the average number of tags there is per file? Nevermind, I'm retarded. Just divide mappings by files. I average 16.26 tags per file. I'm curious. How do other anons' personal collections average?
>>19100 Looks like parents don't count towards total mappings either, so the average number of effective tags per file is higher. It probably dropped a lot when I started making proper use of parents.
>>19099 >and further, to put everything under the male/female namespaces under a "panda:" namespace, since they have their own tagging rules and how those tags apply to a set of images is different from how they apply to individual images/files. this is what multiple tag services are for. when you parse from the panda, tell your download page to put tags in a panda tag service. otherwise you'll end up with tags with two namespaces like panda:male:long hair.
>>19080 >>19097 it was set to "my files" when the red trashed thumbnails were occurring (I guess what you're calling remote thumbnails? I only use hydrus locally though). setting the file domain to "all known files" actually got rid of the remote thumbnails that no longer existed on my client, so problem solved. thank you!
>>19102 Makes sense. I got too accustomed to just my personal one. A separate service from both "My Tags" and "Downloader Tags" would make this simple. Whenever I get there, I'll figure out how to do that.
>>19096 >Doing JSON-HTML conversions is pretty horrible right now, but I was talking with a guy a couple months ago about this and I do have it in mind to make some new formulae types here. It should be easier to grab html from inside JSON etc... I guess that means that the only way to do something like this is very hacky and confusing, so I guess I'll just keep that parser in the semi-broken state until I can fix it properly. I'll just have to clean it up later. But yes I'd definitely appreciate a way to do that! >And as I push on better multiline integration in the parsing system, I want nicer formatting options and things here too. Can't promise when it will happen, but since you are in it, have a play around and let me know what simplest tools you'd like most of all. I'm not sure what exactly you mean here, but if you're talking about formatting for notes that are parsed, it seems to work fine. newlines get represented as newlines and all. It would be cool if notes had formatting like underlining, bold, italics, strike-through, quotes, and all that though.
As it is right now, gallery searches for booru.org boorus are kinda useless since they miss so many posts. I've been trying to adjust the booru.org parser to be able to grab child posts when doing the gallery searches, but I can't figure out a way to do it. The child posts don't actually show up in gallery pages at all, only parent posts do, but the child posts do show up on the parent post's page. I tried to get hydrus to parse post pages to look for links to child posts, then follow those links as posts to download, but it wouldn't work at all. I'm not sure how to get hydrus to see other posts on a post page and then follow them as their own separate posts. Does anyone have a clue how to get it to do this, or some other way to be able to grab child posts when doing a gallery search?
>>>/hydrus/18974 >Can you talk about why you'd like this feature, so I can understand the workflow you are aiming for better? basically, I want to have an ID that I can use to refer to a certain file I have in hydrus, that's specific to my db, so that others who see it won't know which file I'm referring to, but I will. I want to be able to lookup hydrus ids so that I can save the id of a file to a note, then later grab that id and find the file again, but by using the internal ids, I can do so in a way that others can't replicate. also sorry about this reply taking so long. it was the tail end of the last thread so I didn't notice that you ever replied.
>>19107
>basically, I want to have an ID that I can use to refer to a certain file I have in hydrus, that's specific to my db, so that others who see it won't know which file I'm referring to, but I will. I want to be able to lookup hydrus ids so that I can save the id of a file to a note, then later grab that id and find the file again, but by using the internal ids, I can do so in a way that others can't replicate.
I'm doing that by hand, one by one. Every file name is sequentially numbered and entered in the Hydrus DB with its file name as a namespace. For example: "filename:456-whatever". Then the content of a sidecar NOTE named "456-whatever-note.txt" is entered manually in the Hydrus DB to accompany the respective file. Next, the sidecar NOTE is stored in a folder for easy retrieval when the file is exported from Hydrus. A plus of this method is that I can scan the notes with Recoll and find what I am looking for in files without enough tags to be found by Hydrus. I know that might be unworkable if you have a lot of files and no time to do it.
How do I change the default tag import options? I can create a particular filter and save it as a favorite, but it only applies to the current url downloader page and I have to set it again every time I open a new url downloader page. I can't find the option under either the downloading or importing options.
I had a low productivity week, so I am going to skip the release and get more done on Wednesday. v514 should be on the 25th of January. Thanks everyone!
----
For Client API developers, I am clearing out some obsolete calls and data structures in v514! Since we have a spare week, please review the pending breaking changes:
* '/add_tags/get_tag_services' is removed! use '/get_services' instead!
* 'hide_service_names_tags', previously made default true, is removed and its data structure 'service_names_to_statuses_to_display_tags' is also gone! move to the new 'tags' structure.
* 'hide_service_keys_tags' is now default true. it will be removed in 4 weeks or so. same deal as with 'service_names_to_statuses_to_display_tags'--move to 'tags'
* 'system_inbox' and 'system_archive' are removed from '/get_files/search_files'! just use 'system:inbox/archive' in the tags list
Full documentation preview, including some new commands and parameters, here: https://hydrusnetwork.github.io/hydrus/developer_api_future.html
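If you want to grab the service keys you'll need ahead of the change, here is a minimal sketch of the /get_services call, assuming the default port--the access key is a placeholder, and the exact response layout is in the docs above, so this just walks whatever lists come back:

import requests

ACCESS_KEY = 'paste your Client API access key here'  # placeholder
resp = requests.get('http://127.0.0.1:45869/get_services',
                    headers={'Hydrus-Client-API-Access-Key': ACCESS_KEY})
resp.raise_for_status()
# print every (name, service_key) pair so scripts can swap names for keys
for value in resp.json().values():
    if isinstance(value, list):
        for service in value:
            print(service.get('name'), '->', service.get('service_key'))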
I noticed a bug where if you have a long note, the duplicate filter's little decision box gets pushed over to the left because the note is taking up the space where the box is supposed to be. >>19110 >I am clearing out some obsolete calls and data structures in v514! I hope that doesn't break the companion extension, but if it does then I at least hope it gets updated quick.
(119.78 KB 1119x458 2023-01-18_01-25-58.png)

Hello, I've been using Hydrus for a bit to manage datasets for training Stable Diffusion hypernetworks and LoRAs. That means I'm using the sidecar feature a lot, since that's how the images must be captioned for the AI (which I've configured with a custom `, ` delimiter in case it matters). My workflow involves downloading images via Hydrus, exporting them and using a model similar to DeepDanbooru to generate tag sidecar files for them, and then re-importing the images+sidecars back into Hydrus for pruning and cleanup. I've noticed an issue wherein sidecar files which contain tags like `:d`, `:<`, `:p` and the like for facial expressions seem to cause issues when they're imported. In the manage tags dialog, such tags will be displayed without the leading colon; double clicking on them (for example to remove and re-add) doesn't actually remove them and instead adds a new entry to the list which correctly displays as `:d`. Double clicking the bugged tag again merely removes the new (correct) tag, effectively making the bugged tag impossible to remove. Right clicking on the bugged tag results in this error being displayed. Furthermore, exporting the images from Hydrus with a sidecar file results in the Hydrus-generated sidecar containing `::d` -- I've used a regex filter in the export tool to fix that, but it doesn't address the issue for imports. Aside from this quirk, the program has been working great for this purpose. Thanks!
>>19112 >Furthermore, exporting the images from Hydrus with a sidecar file results in the Hydrus-generated sidecar containing `::d` I should add, this export issue occurs irrespective of the import one. I can take a completely empty tag domain, manually add a `:d` tag to an image via the UI, export it, and the sidecar file will contain `::d`.
I found a bug in file searches. When I have "system:filetype is pdf, flash, audio, image, video" in the search, and I try to remove it, instead of removing it, it just adds it a second time. If I change the filetypes to something else, then I'm able to remove it from the search.
>>19109
network > downloaders > manage default import options
watchable urls are for stuff like 4chan threads, file posts are for stuff like boorus
>>19115 Thanks.
I know i should have got into Linux beforehand but because i'm still a Windowscuck:
I fear my PC will change W10 to 11, which is why i got notifications about preparing shit or backing up my files. That transparent "activate Windows" message is still on my screen's corner.
Even if W11 keeps all the files i had from 10, will Hydrus Network be safe? Because i just got a new PC and haven't finished tagging a shitload of images in my gallery.
Do i have to create a copy of my current Hydrus database and find a way to preserve it when bringing it to W11 and a new version of Hydrus? I'm not even sure how big the Hydrus database could be and where i could store it, because i don't have an external hard drive.
>>19117 Backups, backups, backups! There's ways to disable the Win11 update (google it). You're unlikely to run into any hydrus issues upgrading from Win10 to Win11, but you should be making backups regardless anyways. Make a copy of your hydrus install folder. By default the database is in the hydrus install folder, but if you changed the location of it, obviously back that up too. Whatever build of hydrus you're currently on, be sure to keep a copy of the install files for that build as well. Worst case scenario you reinstall the same hydrus build as your backup. Then copy your backups over the new install and overwrite everything with your backup. That should be all you need to do. If you do a full format and clean install you might get an error about ffmpeg being missing, but that's easily fixed by installing ffmpeg.
(240.73 KB 1508x1000 02d26.jpg)

>>19117 >I know i should have got into Linux beforehand but because i'm still a Windowscuck Unbelievable.
>>19117 You already drank the W10 koolaid, is W11 really that much worse? This shit happens every Windows release.
(6.22 MB 1066x600 de1c34a.mp4)

>>19117 The enemy already is waging war on every Windowsfag. Make your choice wisely.
Twitter syndication broken for anyone else?
errors:
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
hydrus.core.HydrusExceptions.ParseException: Unable to parse that JSON: Expecting value: line 1 column 1 (char 0). JSON sample:
hydrus.core.HydrusExceptions.ParseException: Content Parser next page: Unable to parse that JSON: Expecting value: line 1 column 1 (char 0). JSON sample:
hydrus.core.HydrusExceptions.ParseException: Page Parser twitter syndication api profile parser: Content Parser next page: Unable to parse that JSON: Expecting value: line 1 column 1 (char 0). JSON sample:
>>19122 Looks like the Elongated Muskrat deleted the API.
>>19122 >>19123 I read yesterday that twitter is now banning third-party clients. It probably has something to do with that. I hope nitter will be okay, at least.
>>19099 Thanks. It sounds like your head is screwed on straight. Good luck with the tags you are going for, and I'll be interested in future feedback. I'll quickly say that I don't like how you get a million filename or title tags with (1) count in autocomplete results. I've been prodding at integrating my Tag Filter object into the database recently with the server filter work. Once I have that tech nailed down, I'm going to see about letting you say 'don't suggest x, y, z in tag autocomplete unless I type the namespace' or similar sorts of ideas. >>19100 Absent parents again (and not collapsing siblings), but PTR is 1.74bn mappings for 32.1m unique tags and 99.9m files. Similar ~17:1 ratio, which is interesting. PTR is vastly populated by boorus, so we probably follow them for I'd guess 2/3rds of the PTR content, and the other 1/3rd is probably just creator or creator + patreon/yiff/kemono id kind of thing. >>19104 When you want to go, hit services->manage services to get started, and then set up new 'default import options' for 'tags' under network->downloaders. >>19105 Ha ha, yeah, it would be nice to get some rich text, but I'm afraid I just mean proper newline handling for now. There are still several dialogs in the parsing system that don't have multiline 'example text' test areas and so on, or they don't pass on multiline content on properly, or my actual downloader objects collapse all newlines in all cases with no option otherwise. I need to go through the whole system and make it work the same way everywhere so you the downloader maker can do precisely what you want nice and easy (and I need to not break anything along the way!).
>>19106 This sounds tricky, sorry for the trouble. I've hacked on support for 'grab this URL too' in Post URLs, and it might be you are hitting some odd exception here. I don't know enough about your parser or specific intentions to talk too cleverly, and I don't remember the precise details of how new files are added, but here's some general thoughts: Gallery URLs can make Post URLs, but Post URLs cannot make Gallery URLs. Post URLs can make new Post URLs, as 'urls to pursue', and they should get queued up in the normal file import log. A Parser bundles everything it parses together into one blob, so if you add an URL to pursue separate from what you parsed, I think you'll need to use subsidiary parsers. My guess is you are running into the subsidiary parser issue. The 'pursuable' child post URL is probably being grouped with the direct file url and they are competing based on their quality precedence score. (the client thinks they are the same file) Subsidiary parsers are not a nice system, another hack I added, but the basic thing is that everything that is parsed in a subsidiary parser will be its own thing, inheriting everything the parent parser parsed. It was meant to handle situations like where a pixiv page could have 8 files. You'd want to parse the shared tags in the parent parser, and then create 8 sub-posts with those tags that would create 8 new importer objects, one file each. In your situation, I think you'll want to create two subsidiary parsers. One should get the file and its tags on the page. The other should get any child posts. The content of everything parsed here will stay separate and hydrus will know that each group refers to a different file and create a new downloader object for each. I've explained that badly, and the system to do this is shit too, sorry! The overall downloader engine isn't yet great at handling nested files or potential loop situations, which can crop up when URLs can create URLs of a similar type. Trying to get child/parent posts is not something it is geared up for yet. Give subsidiary page parsers a go and let me know how you get on.
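If it helps to picture it, here is the idea in loose python, just the shape of what the two subsidiary parsers would produce--these are not hydrus's actual objects, and the tags and example.com urls are made up:

# parsed once by the parent parser and shared with every subsidiary result
shared = {'tags': ['creator:someone', 'series:whatever']}

# subsidiary parser 1: this page's own file
# subsidiary parser 2: any child post urls to pursue as separate import items
subsidiary_results = [
    {'file_url': 'https://example.com/img/full/123.jpg'},
    {'url_to_pursue': 'https://example.com/post/456'},
    {'url_to_pursue': 'https://example.com/post/789'},
]

# each result becomes its own import object, inheriting the shared content
import_items = [{**shared, **result} for result in subsidiary_results]
print(import_items)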
So I'm trying to add a new nitter downloader, but every one I add keeps appearing under non-functional galleries saying "Cannot parse nitter media timeline". I'm clearly missing something but I don't know what.
>>19107 >>19108 Thanks, this is interesting. I'll add the file_id as a new 'hash type'. Not sure exactly when I will do it, but let me know how it works for you when I roll it out. >>19111 HyExtract and HyShare have been updated for next week. I think Hydrus Companion will be good. I talked to the guy who makes it before I made this post and reversed course on another change I was going to make, so what I've got listed there is mostly pretty old stuff. Anyway the guy said he'd look and see if he can get any needed fixes done this week. I'll get an update from him before I do the v514 release post and state there if HC users should wait longer. >>19112 >>19113 Shit, thank you for this report! I'll fix this. Tags that start with a colon in hydrus are secretly stored with the extra colon on the front and just rendered to the user differently. Looks like my tag parsing here is messing up, and the code too! I so rarely export tag text, I bet I never wrote a nice 'undo that hack' method. Let me know how v514 works for you--I'll try to cover everything here. >>19114 Wow, thank you for this report! What the hell's going on there? I'll fix it. >>19117 Yeah, backup everything as always. Everything in your hydrus collection is under your control, and Windows shouldn't touch it in any update, but of course if the update breaks or something, you want that spare copy. As the other anon said, just copy your whole install folder somewhere while the client is off. If you installed with the exe, it is probably in 'C:\Hydrus Network'. It is super easy to move a hydrus install from one place to another btw, so no worries. You just move the folder to a new drive/computer/whatever and make a new shortcut to the client.exe. More help here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up Also, Hydrus runs fine on Win11 now. Used to be some problems when they were in beta, that insider preview program thing, but things are now ok. I dev on Win11 these days and have no problems. Hit this if you haven't seen it before: https://www.oo-software.com/en/shutup10
>>19125 >I'm going to see about letting you say 'don't suggest x, y, z in tag autocomplete unless I type the namespace' This seems like a good idea, although some files I still recall by filename. >When you want to go, hit services->manage services to get started, and then set up new 'default import options' for 'tags' under network->downloaders. Thanks.
>>19117 >>19128 Oh yeah and check this for getting Win 7/10 feel back. It is buggy after big Windows updates (just wait a bit and maybe reboot, it fixes itself by downloading new definitions), but it is free and normally does the job great: https://github.com/valinet/ExplorerPatcher >>19122 >>19123 >>19124 Yeah seems like a real shitshow. Afaik we were using an undocumented internal API, but I guess that got changed amidst all this. A user sent me a fixed parser that does user lookups 20 tweets at a time, so I'll integrate that, and we'll keep thinking. Fingers crossed this finally pushes the jap artists to flee and finally post on a real site. >>19127 Check your 'url class links' under network->downloader components. Is the 'nitter media timeline' linked to an appropriate parser? Sounds like your hydrus can recognise the URL type but doesn't know what to do with it. Sometimes if you get a ton of weird downloader parts all clashing with each other, the best thing to do, rather than untangle manually, is to delete all the gugs, url classes, and parsers for that site (also from that submenu) and then just import the newest stuff once. Ideally hydrus will be able to link up the example urls and stuff when it only has one new set of objects to look at.
(1.11 MB 1067x1591 76267643.png)

(4.41 MB 2610x4000 78206785.png)

(12.84 MB 600x450 91273320_p0.webm)

Why do these two APNGs get treated differently in hydrus? The Suwako one gets treated as a normal PNG, only shows the first frame, and the only way for me to know it's actually an APNG is to manually give it an appropriate tag. Meanwhile the Ran image is specifically marked as an APNG, shows up in "has duration" file searches, and has the play icon in the top corner. And in the media viewer, it flickers between the two frames (which is a bit of an eyesore but still the best way to know what both frames are). Also, this webm behaves oddly too. Instead of holding on the photograph, it flicks through it in a single frame, and then makes up the duration by holding the black screen at the end for longer.
>>19130 It wasn't linked to a parser, thanks for the help.
>>19054 >click to increment Came to check the current status on that feature. Would be best to have decrement buttons, and direct access to edit the raw number too.
>>19100 I usually grab files with tags from boorus, but on the files I manually tag I think I get around the same average as you. For porn, all I really care about is who made it, what characters are present, any striking characteristics of said characters (sex, breast size, multiple arms, etc), and what kind of sex is going on. I'm not autistic enough to tag or search for shit like hair color or split hairs on what "vague" tags like vaginal mean.
(2.33 KB 180x216 yohm.png)

>>19134 >I'm not autistic enough to tag or search for shit like hair color I am, but only for girls most of time. I might tag men as grey/white haired or old sometimes. >"vague" tags like vaginal mean. How is that vague? it means something is going in the vagina.
>>19135 >How is that vague? Unless I'm misremembering, there are some boorus that specify what is going into the vagina ("something" is vague). So one tag for if it's a finger, a benis, a dildo, etc. Whether or not those tags are actually used depends on how autistic the taggers are, though. Some boorus don't give a shit about tagging beyond creator and series if that.
>>19130 >A user sent me a fixed parser that does user lookups 20 tweets at a time, so I'll integrate that, and we'll keep thinking. Would you mind sharing that parser? I'm over a year behind on updates so I'd rather just copy the parser.
>>19136 That's silly for a personal collection. Have a tag for penetration of a particular hole, and separate tags for the kind of penetration if necessary. I have tags for fingering, fisting, and unusual insertion, and that's it. Everything else is most likely easily found by separate tags for the presence of body parts and objects, such as dildo, egg vibrator, cock, tentacles, et cetera.
Any idea on when duplicate filtering for videos/gifs could be implemented? My guess is not any time soon, but it would be neat.
>>19059 I want to get off of Mr. Bones wild ride. I've been trying to go through my inbox and use a 5 star rating system as well as a favorites to pick out the files I definitely want to archive and tag first, which I think helps, but still I constantly dig myself deeper.
I had a good couple of weeks working on a lot of different things. There's updated downloaders, a heap of bug fixes, optimisation and quality of life, a significant Client API update, and a new related tags method to test out. The release should be as normal tomorrow. (actually later today)
>>19140 I have about as many files as you, and am about halfway done.
>Rating system and favorites
I don't use them. Ratings are something I use with online media to tell if it's garbage at a glance. If I've saved something, I've already decided it's not garbage, and if I change my mind, I delete it. Same goes for favorites. How many files do you tag/delete/process a day? I average about 70, and still estimate I have 7 months to go. Similar import time too. I only add a scant 1-5 new files a day. And you imported everything at once? For my methods, that makes things much more difficult. I only import folders of up to 300 or so files at a time to make mass tagging by similar traits easier, i.e. everything in the kingdom hearts folder can be tagged as ip:kingdom hearts series, many things can be tagged as "sora", many things can be tagged as Larxene. If you import everything at once, efficiently tagging large chunks of files becomes a huge hassle since everything is mixed together. You can use folder tags during the import, but I don't like having those, and there's often too many subfolders for it to be properly useful anyways. Don't worry anon. Focus on building good habits and becoming more efficient, and you can make it.
(16.12 KB 617x454 client_KnyShF98jt.png)

(252.76 KB 418x450 client_qC0dUeqzL1.png)

>>19059 >>19140 I can't be unbone
>>19143 12 million??? have you ever done duplicates processing???
>>19143 Why have an archive when you can have an inbox instead?
>>19143 v496? updoot that client
>>19146 I can't, i use windows 7 (and don't want to update to 10) and i'm too retarded to use other OS or know what running from source even means.
>>19141 Hopefully, a DeviantArt downloader? :p
https://www.youtube.com/watch?v=PjfkAuUUq5w windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v514/Hydrus.Network.514.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v514/Hydrus.Network.514.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v514/Hydrus.Network.514.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v514/Hydrus.Network.514.-.Linux.-.Executable.tar.gz I had a good couple of weeks doing a variety of work. The changelog is long, mostly fixes and improvements that don't change much, but if you use the twitter downloader or the Client API, make sure to read before you update. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html twitter Twitter changed their policies regarding third-party interfaces recently, and it seems as part of that they took down the 'syndication' API our nice twitter downloaders were using. None of them work any more! A user has kindly figured out a patch, which today's update will apply. It will throw up a yes/no dialog asking whether to do this, which almost everyone will want to click 'yes' on. Unfortunately, this downloader can only get the first 0-20 tweets in the latest 20 tweets and retweets an account makes!! That's it! When you update, please check your twitter subs to make sure they moved to this new downloader correctly, and you might like to tell them to check more frequently. I expect twitter to continue in this direction. I don't know if nitter or any other mirrors will be a viable alternative. If any of your twitter subs get posted in any normal booru, I strongly recommend you pursue those instead! Client API I am retiring some old data structures from the Client API today. If you use hyshare, please make sure you update to 0.11.2 or newer. If you use hyextract, please update to 0.4.1. If you use Hydrus Companion, I think you'll be good with the latest release for everything. The main changes on my end are to do with tag-viewing and tag-editing, so if you do a lot of that, make sure you have a backup and do a test of everything after you update. If you have custom scripts, check the changelog. Mostly services are going to be referred to with service_keys in future, so I am removing 'service_name' structures and only supporting 'service_name' arguments in a legacy state. Please move to the /get_service(s) calls to figure out the service keys you need to use, as I will slowly retire the last legacy support later this year. Note that all users can now copy their service keys from review services. As a side thing, I also rewrote how v413's /set_file_relationships call works. The old 'list of lists' parameter should still work, but a user reminded me that an Object with proper names would be a lot easier to extend in future, so I just rewrote it. This call is sufficiently new and advanced-user-only that I'm ok changing so quick, but if you set something up for it already, check the docs! related tags EDIT: In final testing, this stuff was working slow IRL for me, so I reduced its search space for today. You are going to see a lot of 'search domain is too large' on the PTR. I'll keep working! Thanks to a user who sent in a great proof of concept, the 'related tags' column (which shows in manage tags and you can turn on under options->tag suggestions) has some new buttons to test. I called them 'new 1' and 'new 2'. 
The first works fast but may have limited results on repositories, the second--which only shows on repositories--can only search small-count tags but gives deep results. Secondly, if you select some tags in the main tags list, the 'related tags' search will now only search those tags! If you can't remember a character name, then select a 'series' tag and click for a related tags search, and you'll get a bunch of them! Thirdly, these new search buttons now work on manage tags launched from multiple thumbnails! Please give this a go and let me know how you get on with IRL data. The quality of the new search results is better than my old routines, but I need to do a bunch more tuning and optimise their search. Are there too many recommendations (is the tolerance too high?)? Is the sorting good (is the stuff at the top relevant or often just noise?)? Which source tags (e.g. 'page:x') give the worst suggestions? I'm sufficiently happy with the new performance, particularly with the tag selection, that once this is working nice, I think I'll make it default-on for everyone. Lots of people missed this system, and I always thought it was a bit jank. other highlights I wrote some sidecar import/export help here! https://hydrusnetwork.github.io/hydrus/advanced_sidecars.html
I optimised how the thumbnail cache and downloader import queues do their normal work, particularly when they get tens of thousands of items. Big clients, which may have very busy versions of these objects, should be a little less laggy. Let me know if you notice any difference! Downloaders that get from multiple sites in one query (aka NGUGs) now do their searches one site at a time, not interleaved (e.g. they now search sites A and B as AAABBB, rather than ABABAB)! next week I've been semi-ill these past two weeks with some long-running something, and I pushed myself a bit hard getting all this ready in time. Next week will be some simple code cleanup, nothing too clever!
(417.00 KB 640x640 based2.gif)

>>19147 >i use windows 7 Based.
(78.34 KB 1280x906 as.jpg)

(508.01 KB 3573x3477 av.png)

>>19143 I'm speechless.
(124.34 KB 349x331 spyro smoking.png)

>>19143 >8TB >12M files There's no way this is all personally curated. You have to have mountains of trash you should be deleting from some mass downloading you did without any quality control.
>>19128 >Let me know how v514 works for you--I'll try to cover everything here. Just confirming that v514 appears to have resolved the import/export issues with tags beginning with colons. Thanks for the quick fix!
(872.90 KB 1438x1238 6MX79C3[1].png)

>>19128 >>19153 Ahh, unfortunately I may have spoken too soon. Importing is fixed -- sidecars with ":o" are correctly imported into Hydrus and display correctly in the manage tags window as ":o". Exporting still seems like it may be bugged; I didn't notice initially because I had a string processor set up in the export tags window to replace double colons with single colons. When I removed my regex workaround, the exported sidecars show "::o".
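For anyone hitting the same thing before the fix lands, the workaround is just a post-export cleanup pass. A rough sketch, assuming the ', '-delimited sidecar format I mentioned earlier ('export_folder' is a placeholder):

import re
from pathlib import Path

for sidecar in Path('export_folder').glob('*.txt'):
    tags = sidecar.read_text(encoding='utf-8').strip().split(', ')
    # strip the doubled leading colon the export bug adds, e.g. '::o' -> ':o'
    fixed = [re.sub(r'^::', ':', tag) for tag in tags]
    sidecar.write_text(', '.join(fixed), encoding='utf-8')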
(107.95 KB 594x339 3fusxm.jpg)

>>19149 >twitter broke my likes subscription Goddamnit
Hi dev-sama, excuse me if this is a dumb question/this feature is already implemented somewhere. I have a bunch of gallery-dl collections of Twitter profiles (i.e. a folder structure of gallery-dl -> <username> ) that I'd like to import. I know I can import a folder into Hydrus and use that for tagging purposes, but what I want to do is import a folder and then assign a "twitter-id:<username>" to that folder's files. Related, I'd also like to customize the behavior of Hydrus' built-in downloader/tagger, so instead of creator:<username>, it's twitter-id. I assume this one is a bit easier because it would be related to the parser itself (?) Is this possible with a bit of tweaking of the settings? I didn't want to break anything making an attempt, so I thought I would ask.
>>19156 I should probably clarify that I'd want to do the first thing automatically, rather than have to import one user at a time and then manually tag it, if that makes sense. (Obviously I'd do it if it came down to it)
>>19149 Would it be possible to have a toggle that would disable automatic reading of potential duplicate counts when the duplicate tab is in focus? I have millions of potential duplicates, so the client lags for a couple of minutes whenever i open my duplicate tab.
>>19158 and adding on to that, would it be possible to have an option for the client to stop searching after some amount (say, 1000) of duplicate pairs? I get a similar lag after sorting through one batch of potential duplicates, which I suspect is the client re-counting all the potential duplicate pairs from scratch
>>19159 just to clarify, during this lag the GUI is still responsive and I can still open media fine, but since the db is read locked, I can't rate or import files
>>19156 Idk what you mean by the first. Never imported files from a folder into Hydrus.
For the second: network > downloader components > manage parsers > parser you want > content parsers > add > do your thing. You can do the fetching to see if it works before you save it.
Make sure you're very sure about saving your parser changes, with great power etc.
>>19156 For the first part, can't you just use the folder import namespaces? File>import folder>add your folder>add tags/urls with import>check add last directory and type twitter-id into the text box.
(17.54 KB 1298x244 1.PNG)

(40.82 KB 1829x707 2.PNG)

(45.23 KB 469x615 3.PNG)

(22.74 KB 685x532 4.PNG)

>>19156
1. Drag and drop the folders onto hydrus, and the file import dialog will open. On the bottom, click "add tags/urls before the import" (pic related #1). On the next screen, check "add last directory" and put in the namespace you want (pic related #2). You'll see a preview on the top of the dialog. Then just click apply to start importing.
2. First, update to v514 >>19149 to get the most recent twitter parsers. Go to network > downloader components > manage parsers. Double click on "twitter syndication api tweet parser" to edit it (pic related #3). Go to the "content parsers" tab, and double click on "username" to edit it (pic related #4). Then change the "creator" namespace to whatever you want instead (pic related #5). Finally, make sure to click apply on all the windows you have open to save your changes.
(12.67 KB 745x293 5.PNG)

>>19163 Pic related #5.
>>19055 >qss thanks yeah it worked. I'm having the same problem with the little hint text in the text boxes. it's black so I can't really see what it says. Do you know what value I'm supposed to edit for the color of those little placeholder texts?
(340.04 KB 2785x3211 altofattits.png)

(814.89 KB 2785x3211 uncommon time_books and panties.png)

Any idea why duplicate filtering can't tell these two files are similar at any distance? On Hydrus 511.
>>19163 >>19161 God bless you anons.
Seems Kemono has implemented DDoS-Guard, and Hydrus is now being 403'd. RIP
Don't ask why, it's a rabbit hole of nonsense. I want to tag every file that I import with its full directory path, separated. E.g. /home/anon/Pictures/HorseDick.mpeg -> home & anon & Pictures & HorseDick.mpeg. I'm trash with my regex and I've been at this for a while trying to get it to work. Can someone please write a string for me that can do this?
Tip for anyone who downloads from Sankaku: The downloaders available for Hydrus suck at actually downloading the files. The best way I've found to do it is download via gallery-dl, import into Hydrus, and then run a search for those files to grab the tags. It's probably possible to tweak the gallery-dl settings to grab metadata and format it in a way that Hydrus can import alongside the files themselves so you wouldn't have to run the search in Hydrus, but I haven't tried.
>>19170 Why?
>>19172 bad data management. long story made short, damn near senile parents need help and I can use hydrus to take: "/DISK/apples/fuji/year/month/day/comment/another-comment/unrelated-non-sense/THE-ACTUAL-FILE" and use the individual dirs as tags to sort through more easily. Best explanation I have.
>>19173 In the past they even somehow managed to move each file into a directory structure in line with the atime of each file. "/Data/Year/Month/Day/the-original-path/the-file" Everything is scattered and duplicated. So why not throw it all in a duplicate-removing database? I don't wanna break anything; again, I'm no good with regex, so can someone help? I don't like to ask to be spoonfed but I have no other idea than to ask the official hydrus thread on 8chan.
I tried the new related tags button (new 1 specifically since I don't use the ptr) and while I'm still getting a feel for it, the immediate thing I noticed is that it still doesn't resolve tag relationships, so I'm getting a lot of parent tags of tags already on the file as suggestions, which are pointless, and a lot of equivalent siblings as well, that are probably dragging down each other's "score" by splitting it between them instead of just being counted towards the ideal sibling's score. Also, a lot of the time these results seem to make up the majority of the suggestions, so until it properly handles tag relationships, I don't think I'll find it useful. I'll definitely be waiting for when you eventually get around to fixing that issue though, then I'll probably get a lot of use out of the tag suggestions! Hopefully that'll be sooner rather than later, but I'm glad that you at least have your eyes on that feature. I was under the impression that related tags was deprecated.
>>19175 After a few more minutes actually, it looks like this "new 1" does resolve siblings specifically, but not parents. My mistake! This is definitely an improvement over before, but I guess I underestimated how much of the issue was because of parent/child relationships rather than sibling relationships, so unfortunately the suggestions are still in bad shape until those are able to be worked out when calculating suggestions as well. It is still better than before though, so it's great to see it getting worked on! I haven't used related tags in a long time though, so I don't actually know if this is something you just did or not.
>>19171 Actually, it's not that hard. You just have to get the access key, which lasts about 2 days, and stick it in the headers. Then, you need to be aware that the URLs generated only last for an hour. So, you will have to download a few hundred files, then start again at the id you left off from, using id_range. They've made it complicated just because of programs like Hydrus.
>>19177 What is impossible right now is to be able to download a tag search from Deviantart. The API does not support it in Hydrus, Hydownloader, or Gallery-dl.
(37.95 KB 248x628 before.png)

(46.65 KB 246x532 after.jpeg)

>>19171 >>19177 Speaking of sankaku, they recently changed how the tags look. Pic 1 is how the tags looked (no underscore) at least 3 days ago and pic 2 is how the tags are looking now (with underscore). The problem with this is that if you currently download an image from sankaku it will also download the tags with the underscore.
>>19177 Gallery-dl doesn't need access key fuckery (user/pass works) or refreshing searches, doesn't randomly stall at the same place during download repeatedly on certain files, and downloads much faster in general for some reason. Granted you still need to do the access key bullshit inside Hydrus to grab metadata from sankaku after importing them. Hydev, is there a way that tags and URLs etc. could be imported via a text file with the same name as the file? Gallery-dl has very customizable plaintext metadata output and it would be nice if I could just import them with metadata and not have to worry about API keys.
>>19180 >Hydev, is there a way that tags and URLs etc. could be imported via a text file with the same name as the file? https://hydrusnetwork.github.io/hydrus/advanced_sidecars.html
(10.14 KB 387x195 Capture.PNG)

>>19170 You can add the filename as a tag on the "simple" menu on the right side. For the folders, try this in the "advanced" menu. (?<=\\)([^<>\/:\|\?\*\"\\]+)(?=\\)
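If you want to sanity-check it outside hydrus first, here's the same idea in python. Note the pattern above expects backslash-separated (Windows-style) paths, so for your /home/anon/... example you'd want a forward-slash variant like the hypothetical one below (the filename still comes from the "simple" option):

import re

windows_pattern = r'(?<=\\)([^<>\/:\|\?\*\"\\]+)(?=\\)'  # the pattern from above
unix_pattern = r'(?<=/)([^/]+)(?=/)'                     # hypothetical forward-slash variant

print(re.findall(windows_pattern, r'C:\Data\2015\06\21\comment\file.jpg'))
# ['Data', '2015', '06', '21', 'comment']
print(re.findall(unix_pattern, '/home/anon/Pictures/HorseDick.mpeg'))
# ['home', 'anon', 'Pictures']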
>>19179 >The problem with this is that if you currently download an image from sankaku it will also download the tags with the underscore. Just add a string replacement to turn underscores into spaces in all the tag getters in the sankaku file page parser. Here, I fixed it. While I was at it, I noticed that the "rating" tag parser had been broken for a little while so I fixed that as well.
>>19175 >>19176 Okay so I played with it for an hour or so, comparing "new 1" to "thorough". I get the feeling that new 1's suggestions are less often tags that actually apply to the file, but when its suggestions are correct, they're often more specific tags or ones that more uniquely apply to that file over others. A lot of the suggestions that thorough gives are tags that apply to many files, that just haven't been added yet to this file. That's good of course, but it mostly feels like it tries to suggest a similar small selection of very broad tags to most files if those tags aren't already on the file. "new 1" tends to suggest much more niche and specific tags, and in that sense, it does a better job of helping you pick out what makes a certain file stand out from the rest, and what tags it might have that wouldn't apply to as many other files, but this seems to come at the cost of its suggestions being noticeably less accurate (they're more often wrong) than "thorough". One sort of "bug" I noticed with "new 1" is that its suggestions seem to depend much more heavily on similar files (in terms of tags) to the one you're requesting suggestions for to themselves be well tagged, and not just all having the same small set of tags that you might get from a downloader. So for example I'll have some files with maybe a "site:" tag, and then maybe just something like "filename:" and "male" or something like that, and basically all of the suggestions that "new 1" gives, will just be a flood of "site:" tags that are completely irrelevant or just suggesting "male" or "female" or "title:" tags or "uploader:" and basically just a bunch of obviously wrong tags that the other files have from a downloader, or very generic tags like "solo" or "long hair" and nothing else. This is also a problem that "thorough" has as well, but the issue feels more extreme when using new 1. I have no idea what the actual difference between the 2 suggestion algorithms is, so this is purely based on my experience after comparing the 2 for a little bit on some different files.
>>19180 I think there is an option in gallery-dl. It's like --write_file, or something like that. It creates sidecar files.
>>19185 You may still run into the "URLs only valid for an hour" problem though.
Hope you guys can figure out how to download a tag search from deviantart one day, because that's the one that has me stumped.
>>19175 >>19176 >>19184 Okay probably the last thing I have to mention about the suggestions for now. It looks like "new 1" actually doesn't resolve siblings after all? I'm noticing some bad siblings popping up as suggestions now. I'm confused about why I didn't see them before, but that's weird. I don't get it. I think maybe a good way to deal with this whole situation, if this is possible, is to form the tag suggestions based on the tags that are displayed on all files, rather than the tags that are physically applied directly to the files. If you do that, then old siblings will always get fed into the algorithm as their good siblings when it's checking the tags on similarly tagged files, and parent tags will never be suggested, because the suggestions will see, when they look at display tags, that the parent tag is already on the file. This should be basically a perfect solution to the problem. Just have the suggestions see the tags the same way we do.
>>19183 Am I supposed to know where that is
>>19131 I'm sorry to say that most of the explanation here is the boring 'those are some weird/borked files'. I am not sure I can get the apngs to show 'properly' in any program, although Firefox seems to do it best. It seems to flip to the second frame immediately, so could these images be some sort of troll design, or for specific imageboard posting? I wonder if the Suwako one in particular is supposed to be clicked on to make it big, and then through apng chunk trickery, it is supposed to flick to the second frame and linger there immediately. There was another 2hu one I remember, where her skirt is lifted, and when you click on it she is holding her skirt down, embarrassed. But that used an old png thumbnail-swapping trick that either used some kind of thumbnail chunk or a clever 'bury your thumbnail along this precise grid' hack that knew how the algorithm that made the thumbs chose its pixels. (I am not an expert, but I have done work here) The reason why the first file seems like a png to hydrus is that it has an animation chunk (acTL), but that chunk says it only has one frame. Then, in the lone frame control chunk (fcTL), it says the frame duration is 49275 / 10 seconds, or about an hour and twenty minutes. I don't know enough about png structure, but I guess a static png has one chunk, and that is what is generating the thumbnail, and the lone animation control chunk is showing the other. So it is somehow both a png and a 1-frame apng. This is probably some advanced hackery that exploits Internet Explorer or some other Jap-popular browser and ensures the surprise click-frame stays on screen indefinitely. Unfortunately, if I force it to have two frames, it still spergs out when I try to load it in my native viewer or mpv. It doesn't actually have two frames, and the super long duration mixed in too just makes the scanbar work bad. The second file says it has two frames in the header, and it has two frame control chunks. Both say their duration is 1/25th of a second, which gives you your 80ms total. In Firefox, it stays on the second frame, so maybe this is a similar 'click-through' design/hack, but maybe mpv (or hydrus, more likely) isn't clever enough to see some sort of 'do not loop' flag set in the apng data. The webm I can't talk about with any expertise. I just hand it off to ffmpeg and mpv. For me, mpc-hc and Firefox play it correctly but mpv does not. My guess is there is a special header in the webm spec that says 'do not make any more changes for this duration', and mpv can't understand that. So, in the first case, I think the file is too busted/weird to properly support. The second, I think I may be able to support it better if I can find 'do not loop' in the apng spec. The third, hopefully that gets fixed in some future version of mpv that we roll in. Thanks, these files are interesting!
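If anyone wants to poke at these themselves, the chunks are easy to eyeball with a few lines of python--a rough sketch, where 'suspect.png' is whatever file you are inspecting:

import struct

def png_chunks(path):
    # a png/apng is an 8-byte signature then repeated
    # [4-byte big-endian length][4-byte type][data][4-byte crc] chunks
    with open(path, 'rb') as f:
        f.read(8)
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack('>I4s', header)
            data = f.read(length)
            f.read(4)
            yield ctype, data

for ctype, data in png_chunks('suspect.png'):
    if ctype == b'acTL':
        num_frames, num_plays = struct.unpack('>II', data[:8])
        print('acTL:', num_frames, 'frames,', num_plays, 'plays (0 = loop forever)')
    elif ctype == b'fcTL':
        delay_num, delay_den = struct.unpack('>HH', data[20:24])
        print('fcTL: frame delay =', delay_num, '/', (delay_den or 100), 'seconds')

For what it's worth, the 'loop' information lives in that acTL num_plays field--0 means loop forever, and 1 would mean 'play once and stop'.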
>>19133 Got bumped, I'm afraid. End of year kicked my ass, and just yesterday I had a heap of IRL bullshit fall on my head that is going to fuck with my schedule (light weeks, maybe some skipped releases) for at least six weeks. No complaints, keep on pushing. I have these inc/dec ratings in my 'would be great on a medium-size job week', which happen every four weeks. I can't promise which month this will happen on, but I am thinking of it. I'm thinking how I would use it myself, too. >>19137 I'm afraid I can't export you the downloader, I don't think, as a v514 client's exported objects cannot be understood by a client more than a year old. Try moving to an older nitter parser, is I think your best bet. >>19139 Another 'medium-size job week' job, although it might be larger than that since I think I'd probably have to re-wangle the filter somehow to better show potential duplicate vids. Maybe side to side, maybe something cleverer like a sash you move back and forth. But I do have a pretty good idea of how to do dupe video detection, so I may just throw them in with all the current UI and see what happens. That said, I added dupe tech to the Client API just this month, and I know some guys are working on external video dedupe tech. Might be they can throw together a nicer solution than I can. Watch this space? >>19140 >>19143 Extremely based, thank you both for using my program. I've still got it in mind to add arbitrary file searches to Mr Bones and the File History chart. It'd be nice to see average size of our webms and stuff.
>>19147 Running from source is super easy now, btw. I have a bunch of help here that tells you everything to do, as simple as double-clicking a couple batch files, if you ever want to check it out: https://hydrusnetwork.github.io/hydrus/running_from_source.html If running a program on your computer is like telling it to read a story, then running client.exe is reading the published (and maybe translated) and non-editable book from a store, whereas running client.py (the source), is like opening the original word document in the original language. In the latter case, your computer can figure out any translation you prefer and font size and so on, and you can change the cover or even edit a chapter if you want to. Win 7 can't read a Win 11 book, but it can probably look at the original text and figure out a good presentation. >>19148 Yep, it is this one >>19092 rolled in! Working well for me so far. >>19153 >>19154 Ah, damn--I wrote in a thing to explicitly not do that. I will check it again, thank you for the report. I have a couple reports that the 'fix invalid tags' routine is busted for some of these tags too. Aaaannd someone asked about 'D:' as a tag, which will probably have to get the ':D:' 'secretly an explicit un-namespace' treatment too. >>19155 I haven't been following the close details of it, and I know nothing of the truth of the man, so I don't feel confident declaring one thing or another, but it sure feels like Elon is floundering right now. I use tweetdeck to enjoy the poasters and bants of the day, so if they fuck that--or, God forbid, force us to use the new Tweetdeck--I'm going to rethink and probably play more vidya or something instead. Someone pointed me at the most popular open source mastodon instance or whatever the fuck it is called, and their list of top posts was a litany of political bullshit denouncing 'inappropriate content' on other servers, discussing technical solutions to fence off hate et al in the new Fediverse, and celebrating the latest codes of conduct they'd just pushed on the next project. So I don't think I'll join that server. I might jump over for regular bants and hydrus work posting if there's a sufficiently legit 'you can say anything Anon' shard or however it works out there, but the inertia of moving to a new platform is a huge barrier for me, and for hydrus, I don't need to post TOS-violating stuff, so the availability and overall popularity of twitter is helpful for my simple status update posts. We'll see what happens!
>>19156 Sounds like you are good on the parser. For the folder-tagging, this guy >>19162 is correct. When you do a manual import, click 'add tags/urls with import', and there's a whole complex tagging dialog that can pull tags in a bunch of ways from the filenames. Click the service name tab up top, probably 'my tags', and then on the 'simple' tab on the bottom end, the 'misc' panel, there's some easy 'all last directory (namespace)' stuff that'll sort you for simple tags like this. You can set these rules up in a file->manage import folders import folder that looks at the parent directory, too. Unfortunately, as I say in the last release post, our twitter parser just got broken and I don't think we'll get nice functionality back any time soon, so if gallery-dl can still do good searches, I'd recommend sticking with that as much as you can. >>19158 Yes, sure. >>19159 Yeah, great idea, let's see if we can. That sounds like it should make sense, and you don't care about the precise count anyway. I should probably have a default of 100,000 here or something, since who cares about the precise count, and we can save a bunch of time on big domains. I'll see how I actually do this query, but I feel like I can wangle this to work. >>19165 It looks like the placeholder is half way between the background-color and the color (which means the text) of the normal (no #Something state) QLineEdit class. So if you set a bright 'color', the placeholder should pop more. EDIT: Although, by your picture there, how did it get darker? Is the text you type there black or bright? When I put in red text on a white background as a test, the placeholder is pink, so I figured it was half way. Play around with background-color and color, is my best suggestion. There doesn't seem to be a specific thing for it. These are technical but you can ctrl+f them, if it helps: https://doc.qt.io/qt-6/stylesheet-reference.html https://doc.qt.io/qt-6/stylesheet-examples.html#customizing-qlineedit >>19167 I am sorry to say that those do match for me. If I hit open->similar-looking files->exact match on the thumbnail menu, I get both. Do you not? Looks to me they are actually pixel duplicates too. Stupid question, but if you are looking in the duplicates page, are you completely 100% searched on your 'preparation' tab? I often (like just now) get confused because I import two files, give them a tag, and put that tag in an already-open dupe search page to find them, but they don't turn up as potential dupes because my preparation tab is at 99%--it hasn't yet done the two files I just imported. If you are fully searched and it still doesn't work, this is a shot in the dark, but try database->regenerate->similar files search tree and then search again. >>19169 Many such cases. Ironic, too, given what they do.
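On the placeholder/QSS question above: if you want to fiddle with this outside hydrus, here is a minimal PySide6 sketch you can run on its own (assuming you have PySide6 installed--the selector is plain QSS, nothing hydrus-specific), just to see how the placeholder reacts when you push 'color' and 'background-color' apart:

import sys
from PySide6.QtWidgets import QApplication, QLineEdit

app = QApplication(sys.argv)
box = QLineEdit()
box.setPlaceholderText('type some tags...')
# as far as I can tell, the style derives the placeholder colour from 'color' and
# 'background-color', so making these contrast strongly is the main lever QSS gives you
box.setStyleSheet('QLineEdit { color: #ff2222; background-color: #ffffff; }')
box.show()
app.exec()

If no stylesheet is involved at all, Qt also has a QPalette.PlaceholderText colour role (Qt 5.12+) you can set on the widget's palette, but a stylesheet will generally override the palette for anything it styles.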
>>19193 >I am sorry to say that those do match for me. If I hit open->similar-looking files->exact match on the thumbnail menu, I get both. Do you not? I was using speculative search since it rarely turns up non-dupes, but it was probably the preparation tab. Thanks.
>>19175 >>19176 >>19184 >>19188 Thank you, this is well written and useful. I'm slightly hampered here by the late-breaking speed change I put in. Roughly, any tag with more than about 25k count in your collection won't work in the system atm. The count was like 450k or something when I first tested IRL, but it chugged too much on the PTR. I'm going to reshape the whole thing so it samples everything and then cancels after a set time delay, like the original buttons did. 'new 2' searches the whole 'all known files' domain of the PTR, which gives results for files you don't have, but the tag counts are astronomic. I'll make it cull already-existing implied parents, that's a good point. All the suggestion boxes should do this. I had some small trouble with siblings here. The guy whose work I was going on worked in the 'display' domain, which has tags corrected to their ideal siblings and no bad tags show. But manage tags works exclusively in the 'storage' domain, which has both bad and best siblings. I cobbled together a system, so while I don't think it is supposed to show any bad sibling suggestions, maybe something is getting inserted or swapped in wrong. It would be nice if the suggestion boxes were better about showing sibling and parent data too, like the main tag list. But that's something for later. First I'll fix this sampling method, then we'll take another look. At the moment, with it only fetching results with low count, you are getting more on the scattershot 'red ribbon' suggestions and less on the rich 'ayanami rei' ones. >>19178 >>19187 Yeah, when I looked at I think Gallery-dl's entry for DA tag-search, they said they were suspending it because DA's actual 'tag search' page is ~apparently~ fake now! It is just a rejiggered facade of their 'popular' results. Now, I'm not sure how totally true that is, because when I poked around, it seemed like I could get some nicely paginated reasonable looking results. Some had date orders all whack, but some (maybe low-count tags?) looked like the old system, all ordered like a booru. Might be they were switching the system over when I was looking. I'm not a big DA website user, so I can't talk too cleverly. I'll trust the Gallery-dl guys had a good look and came to the right conclusion. But whether there is or isn't a true 'sort by newest first' tag search somewhere, one thing for sure is that they moved to dynamically loaded phone tech bullshit. There's OAuth stuff (a popular corporate-friendly API access standard I hate) there too, which I am going to have to bite the bullet on in the next year or two. >>19180 >>19181 Literally wrote that help like four days before your post, ha ha. Just in time. I don't use Gallery-dl IRL, so if you figure out the pleasant routine to export/import here and feel like writing a couple paragraphs and a screenshot if it is needed, I can tack that on to the end of this help as an example. >>19183 Thank you! I generally consider the sank downloader to be dead now in hydrus, since so many users get Cloudflare'd, but I'll roll this into the defaults so people who can still use it can get these fixes.
(857.56 KB 1067x1591 output.png)

>>19190 >so could these images be some sort of troll design, or for specific imageboard posting? Yes, exactly. I've got a few files like that, some that were posted by others but some I've made. When I make them I use "apngasm output.png input*.png /l1 /f", where /l1 sets it to loop once then halt, and /f means skip the first frame, and that gives the same behaviour I usually see with this type of trick image, so I assume others are doing basically the same. The intent is that when posted somewhere like an imageboard, the non-apng thumbnail is input1.png, and clicking it to show the full view displays input2.png. It looks like the /f is what prevents Hydrus from detecting it as an APNG, which would be why it says it only has one frame. I grabbed the original images from the Suwako one, and made a version with just /l1, and that seems to behave the same as the Ran image (though it's not until I upload it that I'll know if it works as intended in practice). It won't help with these images when made by others, but when I make them myself from now on I'll make sure to omit /f. >The second, I think I may be able to support it better if I can find 'do not loop' in the apng spec Actually, I think it's fine as-is. Like I said, it is a bit of an eyesore to have it flashing back and forth but at least that shows you both frames. The trouble with these trick images is that viewers that don't support it will only ever show the first frame, and those that fully support it only show the second frame. I guess if Hydrus did have full support, then the thumbnail would show the first frame and the preview in the bottom left would show the second frame, but the current behaviour makes it obvious. The main reason I posted the second one is because of the difference in handling. >For me, mpc-hc and Firefox play it correct but mpv does not. My guess is there is a special header in the webm spec that says 'do not make any more changes for this duration', and mpv can't understand that. Interesting, viewing it externally in mpv handles it fine for me, except the scanbar moves irregularly. That webm was downloaded from a Pixiv ugoira post with gallery-dl, and I have no idea what gallery-dl does when invoking ffmpeg behind the scenes, but I've read that ugoira can be a really awkward format. On closer look it's not only the hold on the photo that's missing, but nearly every still moment gets treated like a single frame in Hydrus's player. It must be making heavy use of a variable framerate. Actually, when I check the upload instructions for Ugoira, it says >If you'd like to create an Ugoira from PNG or JPEG files: [...] You are able to set the display time (the frame timing) individually for each illustration you add. So that is the problem. The frame timings have all been set manually, and are highly irregular.
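Incidentally, the 'do not loop' info is easy to sniff without a full decoder: it lives in the APNG acTL chunk, which holds num_frames and then num_plays, where num_plays=0 means loop forever. A rough Python sketch, assuming a well-formed PNG:

import struct

def apng_info(path):
    with open(path, 'rb') as f:
        data = f.read()
    pos = 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(data):
        (length,) = struct.unpack('>I', data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        if ctype == b'acTL':  # the APNG animation control chunk
            num_frames, num_plays = struct.unpack('>II', data[pos + 8:pos + 16])
            return num_frames, num_plays  # num_plays == 0 means infinite looping
        pos += 12 + length  # 4 length + 4 type + data + 4 crc
    return None  # no acTL chunk: not recognisable as an APNG

print(apng_info('output.png'))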
>>19196 >it is a bit of an eyesore to have it flashing back and forth but at least that shows you both frames You can pause gifs and apngs by clicking on them and can scroll through their frames with a little bar on the bottom. I love it. I never had, nor really looked for, a program to view gifs like videos, and used to always have to open them up in a browser just to view them. No longer will I feel compelled to continue watching a long gif so I don't have to start it over.
>>19197 I normally view short gifs in my usual image viewer (Nomacs) and for long ones that aren't just simple loops I open them in mpv. Aside from ones I've imported into Hydrus, of course.
>>19195 >1140,11429 Yeah, I messed with Gallery-dl, and all I get is the same pics of pets and such from their popular tag, no matter what tag you search for. According to Gallery-dl, the oauth api system Deviantart is using does not support tag search. It was working in Hydrus up until just before you removed the downloaders, when it stopped working, but you're right, it was kind of a "roll of the dice" which dates you got first.
>>19199 I mean, the Hydrus downloader for Deviantart stopped downloading stuff a week or two before you removed it, giving some weird HTTP error code, 20-something.
After updating to 514 the "ask for tags then send to hydrus page 'hc'" hydrus companion option doesn't work properly. Now I have to send the file to hydrus before I can add tags through HC.
>>19181 >>19195 Will check this out. I thought it existed but didn't know it was advanced enough to parse JSON. If I can get it to parse the default gallery-dl JSON format I'll try to write up a basic how-to. >>19186 Gallery-dl gets past that as well somehow. t. backed up entire tags on Sankaku with it over the course of several days with a single search
>>19193 >Is the text you type there black or bright it's bright >When I put in red text on a white background as a test, the placeholder is pink, so I figured it was half way here's what it looks like when I do the same. The placeholder text looks identical to text that you type. I tried messing around with the color and background-color, but it always ends up like that, where the placeholder and non-placeholder text are identical, which is distracting. I then tried only setting a background color and not "color", and it went right back to the placeholder text being a very dark color while the normal text was light. I tried looking at those 2 doc links, but nothing there that I tried worked either. I don't know how to fix this, or even what's causing the problem. While testing things, I also tried changing the "Qt style" to windows, then when that did nothing helpful, I switched it back to what I had before, the default which is "fusion", but then I got an error message. Could this have something to do with the problem?
>>19195 >I generally consider the sank downloader to be dead now in hydrus, since so many users get Cloudflare'd I haven't had any issues downloading from sankaku. Looking at my network data settings, all I have set up for sankaku is a User Agent header. But I don't do sankaku tag searches through hydrus, just file post downloads, so maybe that has something to do with it.
Around when is arbitrary file support planned to be added? I know it's probably not easy to add or it would've been already, but I'd like just a rough estimate if you could. I'm a big fan of emulation and rom-hacking, so I have a large collection of roms and isos. It'd be great if I could use hydrus to organize them!
Hydrus started freezing up on me today. When I try to add tags, none of my recent tags show up, and nothing comes up when I search for tags. Also had my files not showing up on one page (I could click on them, but the page was blank). Then the client freezes and I have to force close it. I'm going to try downgrading to 513 after I finish doing an integrity check. I've been on 514 for a few days and it was working fine this morning, so I'm not sure what's causing this. (I'm on arch btw)
>>19205 >I'm a big fan of emulation and rom-hacking, so I have a large collection of roms and isos. It'd be great if I could use hydrus to organize them! If a file type is unsupported, just stuff it in a zip file and then tag the zip file as the filetype within it. It's what I do for .txts.
Is there a way to see if tags, parents, children, and siblings you've committed were accepted or not?
I unfortunately had a light week. The new 'related tags' search is polished and turned on by default for all users, and I fixed some small bugs. The release should be as normal tomorrow.
>>19206 So the integrity checks were all ok. I made a backup and tried downgrading but had the same problem. Then I managed to get hydrus to close normally and now it's working again. I guess I'll stay on 513 for now, but just in case I need to use my old backup from a few days ago, is there an easy way to get the tags/URLs for the files I've downloaded since then?
I just updated to the most recent version a couple of days ago, and now I'm getting constant 403s when I try to download anything from sankaku.
https://www.youtube.com/watch?v=Q4brYfcHQR0 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v515/Hydrus.Network.515.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v515/Hydrus.Network.515.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v515/Hydrus.Network.515.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v515/Hydrus.Network.515.-.Linux.-.Executable.tar.gz I had a challenging week with little work time, but I polished last week's related tags work. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html related tags I've improved the 'related tags' test I did last week and brushed up the UI, and I'm happy enough with it that I am making it default-on for all users. You will see it in your manage tags dialogs after you update. If you do not like it, you can turn it off or customise its timings under options->tag suggestions! For the technical details, I've cleaned up the new algorithm a bit, made it run faster in all cases and massively faster for very large-count tags, and now it works on the same 'cancel after 1,200ms' tech the old buttons ran on. Those buttons now use the new algorithm exclusively, and there's a new label that reports how particular searches went. There's also a 'my files' vs 'all known files' on/off button to change the search domain ('new 1' vs 'new 2' in the test last week). Also, if your suggested tags are set to show 'side-by-side', they now get nicer boxes around them, and if they are set to show in a notebook, you can set which is the default page under options->tag suggestions. Let me know how you get on with it! next week Unfortunately, some stressful IRL stuff landed on my head this week, and I'll have some extra responsibilities for the next 6-8 weeks. Nothing dreadful, but it will cut into my work time a bit. Please expect some more light and possibly skipped releases for the next bit, thanks! Next week I'll see if I can just do some simple code cleaning and more small jobs.
(49.37 KB 530x365 xcv.png)

(80.05 KB 672x691 xfvgr.png)

(575.71 KB 1000x1000 xqaz.png)

>>19212 >I've improved the 'related tags' test I did last week and brushed up the UI, and I'm happy enough with it that I am making it default-on for all users. Absolutely awesome. And the developer message at the corner is a nice touch. Thank you so much OP.
(69.91 KB 512x528 ohhhhhhh.jpg)

>Traditional file searching:Using filenames and folders to find files >Hydrus:Using tags to find files. >Hydrus 515:Using tags to find tags to find files
>>19212 Since you're working on tag suggestions right now, could you consider allowing us to have a blacklist for them, so it won't suggest tags that couldn't possibly be useful? Like "creator:*", "title:*", "source:*", "meta:commission", etc. Thanks in advance!
>>19212 >related tags Finally it works a bit more like you'd expect. It wasn't that useful before tbh! Two suggestions: - update the search as you add more tags - default page option: last page used (so hydrus remembers after you close the window)
>>19214 >Hydrus 515:Using tags to find tags to find files I believe it is called "refining the search"
>>19212 >>19215 I'd ask the same thing. Personally, I think tag suggestions are most useful for quickly looking up character names that you can't quite remember by getting suggestions for a series or title, so being able to blacklist stuff that you'd never really want recommendations for would be nice. As an example, I was just tagging an image of Komi Shouko and Osana Najimi but I couldn't remember the name of Osana. And then when I checked the tag suggestions for series:komi-san, Osana didn't appear because, I guess, a bunch of descriptor tags or whatever you would call them, like big tits, solo female, etc., filled most of the list. So having a button next to the quick, medium and thorough ones that just shows characters would be nice, especially for series with lots of different titles and spinoffs like Touhou or Fate, where if I can remember the specific game/vn/whatever but not the name of a character, I can just get the tag suggestion, which would be faster than having to close the manage tags window, search and find the character in another tab, and then go back.
>>19181 >>19195 Before I spend lots of time perfecting a gallery-dl JSON parser, is it saved? If I do come up with something that works, will I be able to easily reuse it or would I have to remake it each import? Also, could I share it easily like a downloader?
Hmm, I was just looking at the api for Deviantart. It takes me to this page - https://www.deviantart.com/developers/rss which gives me the html code for the tag frogs. Picking through it, I can get the link for a page "poison dart frogs" - https://www.deviantart.com/pikaole/art/Poison-dart-frogs-793430043 and then a link for the actual pic - https://images-wixmp-ed30a86b8c4ca887773594c2.wixmp.com/f/99f56b87-b792-4bdb-aa39-afad37493ec4/dd4dyjf-460d0204-86ea-4804-a9ac-8e6e6d83ba80.jpg?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1cm46YXBwOjdlMGQxODg5ODIyNjQzNzNhNWYwZDQxNWVhMGQyNmUwIiwiaXNzIjoidXJuOmFwcDo3ZTBkMTg4OTgyMjY0MzczYTVmMGQ0MTVlYTBkMjZlMCIsIm9iaiI6W1t7InBhdGgiOiJcL2ZcLzk5ZjU2Yjg3LWI3OTItNGJkYi1hYTM5LWFmYWQzNzQ5M2VjNFwvZGQ0ZHlqZi00NjBkMDIwNC04NmVhLTQ4MDQtYTlhYy04ZTZlNmQ4M2JhODAuanBnIn1dXSwiYXVkIjpbInVybjpzZXJ2aWNlOmZpbGUuZG93bmxvYWQiXX0.vzzDVDnKq853hOxZs0hFwEBqf9XxDnFZXgvOOH2Lfv4 Tags are also there. It just all has to be parsed out. It looks like a pretty good tag search to me. Would this be good for everyone else? I've messed a little with the parsing in the downloaders, but I know there are people here a 1000x better than me at it. Would anyone be willing to write a downloader for this in Hydrus? It would be MUCH appreciated! Thanks!
>>19218 Should that use case already be covered by looking up children tags of the series tag?
>>19220 that's not a tag search. it looks exactly the same as just searching "frogs" on deviantart normally - it's just always the most popular images. wouldn't a parser for https://www.deviantart.com/tag/frogs?order=most-recent be more useful instead?
>>19222 Lol, I don't know. The place is hard to work with. I still need to play with it more to figure out how it's all working. So, I guess I'm still not sure whether an actual tag search would work or not then.
>>19223 I was looking at this some more after I wrote the first post. I think you have to say #frog in order to get it to search for the frog tag itself.
>>19223 >>19224 Yep, entering https://www.deviantart.com/search?q=%23frog gives much better results.
>>19224 >>19225 I wasn't originally thinking about searching for frogs, but I got to admit, they have some pretty cool pics!
>>19221 Unless I've missed something, that still requires me to close Manage Files and open Manage Tag Parents and then go back. Isn't the whole point of the tag suggestions to make tagging go faster?
>>19227 When you right click a tag there's a submenu for siblings and parents.
(84.04 KB 827x337 hydrusparent.png)

>>19228 This one? That's not the same thing, since it just shows me what parents it has, which I already know. I want to quickly find the name of a character that I can't remember by looking for related tags to a series or title I know that character belongs to. As I said in my first example, I did not remember the name of Osana Najimi, but I know they're from the series Komi-san, so I added the series:komi-san tag and did a related tags search for komi-san, but Osana does not show up because there are a bunch of other tags clogging up the list, since they're technically used more despite not being child tags of Komi-san, while Osana Najimi is.
I haven't used Hydrus's internal downloaders since I have my own scripts, and I don't know how their access works so it may not be affected, but FYI Twitter is paywalling its entire API as of February 9th.
>>19229 >so I added the series:komi-san tag and did a related tags search for komi-san, but Osana does not show up If you had just looked at the child tags for the series in the dropdown menu, you would have seen a list of the characters in the series though, and I think that's a much faster, simpler step.
>>19225 I think I got the xml working for search through the backend. This code appears to give the same urls as the normal #search code, but in parsable xml. https://backend.deviantart.com/rss.xml?q=%23frogs Normal search code would have been - https://www.deviantart.com/search?q=%23frogs
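If anyone wants to poke at it before someone builds a proper hydrus parser, here's a quick-and-dirty Python sketch of pulling the page and image links out. This assumes the feed is ordinary Media RSS (media:content elements), which is what it looked like to me, so treat it as a starting point rather than gospel:

import urllib.request
import xml.etree.ElementTree as ET

url = 'https://backend.deviantart.com/rss.xml?q=%23frogs'
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
with urllib.request.urlopen(req) as resp:
    root = ET.fromstring(resp.read())

ns = {'media': 'http://search.yahoo.com/mrss/'}
for item in root.iter('item'):
    page_url = item.findtext('link')
    content = item.find('media:content', ns)
    image_url = content.get('url') if content is not None else None
    print(page_url, image_url)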
>>19230 Yeah it sucks. The access method Hydrus uses for gallery and subscription has already been shut down afaik. Nitter will probably be killed on the 9th. Maybe even Hydrus url download idk. What scripts do you use? Mind sharing?
>>19196 Thanks--I will play with apngasm and see if I can detect and render this stuff better. I love these sorts of things, especially in the imageboard context, so if I can have native clean support, that'd be great. Although I hate doing this stuff normally, I actually had fun poking around in the apng spec and writing chunk parsers before. >I've read that ugoira can be a really awkward format Don't even get me started. A fucking zip of jpegs with frame timings stored in external javascript. I used to moan about the japs for the design, but someone told me it was a westerner who actually came up with it. Maybe it makes sense in some universe. The dynamic framerate explanation makes sense. I know there are some mickey-mouse ways to get webm to simulate dynamic framerate, so our best hope here is that as hydrus gets newer versions of mpv (and maybe as ffmpeg parsing tech improves), things will just line up better. I had a master plan to convert ugoiras to apngs at one point, but then I discovered that most ugoiras are just jpegs rather than pngs, so they really are bringing nothing technically interesting to the table beyond the dynamic frame times. It really shows up the limited file format support we as the internet community have for 'better colours than gif, less heavy than mp4'. IMO animated webp could have been the chosen one, but that's fucked for other reasons I don't understand (ffmpeg and some other programs can encode animated webp, but not decode--some licencing issues, I presume). Our answer, as a community, is probably to render to ~60fps webm and when we get low/variable framerate ugoira we just duplicate the same frame over and over and let the natural vp9 compression figure it out. I don't care if my webms are +20% size as long as they look correct. >>19199 >>19200 Yeah, shame. I feel sorry for the younger zoomers who are going to think that phone-mode websites are what the internet has always been. Just type in a word and get a spam of the top twenty 'newest' 'relevant' 'popular' results that then immediately refreshes for another set, and never have a 'filter by' dropdown where you can actually pull some levers on this stuff and shape a pool of fixed results. >>19201 Thank you for this report. I am sorry for the trouble--I had hoped we were all clear on everything. I will check my legacy-supporting code, but I felt good that we had the solution here. The guy who makes HC is going to update to the newer API standards in the near future, so I hope all the headaches will be clear soon. Just in case, can you make sure your HC is the newest version? I know he built in support for the newer tech a couple months ago, knowing I would be moving up.
>>19233 >Nitter will probably be killed on the 9th. https://github.com/zedeus/nitter/issues/783#issuecomment-1413736423 According to its developer it should be fine. There's always scraping HTML even if they completely remove the API.
>>19199 >>19200 >>19234 >>19232 I think I've got a better search for Deviantart on the backend, changing it a little, and using the # symbol (%23) in front of the tag to force it to look for that specific tag. It returns xml (html?) code, so it can be parsed. It looks like the same URL's that are returned in the normal front end search in Deviantart. Would this be useable in Hydrus to get URLs off of a tag search? Also, I misspoke on the tags. They seem to be on the URL page itself.
>>19236 i.e normal front end search that is the actual tag search, not the "popular" crap that the regular api search was returning.
>>19202 Yeah, I added JSON support a month or two ago, along with that help--it is all new. I'd like to add XML support soon, too, since we have decent html parsing tech. >>19203 Damn, we may have hit the limits of my expertise here. Qt styling runs on voodoo duct tape, and the documentation is not great. I think someone was telling me that it is a priority of Qt 7 to overhaul the whole thing, but I guess we'll see. That 'can switch off fusion, but not back to it' is an issue I have seen elsewhere, particularly on Linux machines that have a C++ Qt installed system wide. Basically, to make things even more complicated, we are using python-wrapped Qt, and when that boots, in some cases it seems able to pull from a system-wide C++ style (the idea being you could set a style for all your Qt apps across your computer), but it can only load that default style in the boot-up phase. Once Qt is booted, I cannot command it to ever return to that style since we are too-pythonic at that point??? At least that's what I figured was going on. As the error dialog says, you should be able to restart the program with it set to 'default' and it'll work again though. I couldn't find anything in the docs for an explicit placeholder text colour, so yeah, I think it might be that the style computes it manually. It is certainly at the C++ level, out of my hands, and may be OS specific too. >Could this have something to do with the problem? Yeah, it may be. Maybe that system fusion is different than the python fusion, if that exists? Could even be a different version of Qt, and it somehow loads correctly. Or I may just be completely mistaken on how all this works. I don't think I've said yet, but for all 'this shit is crazy' Qt-on-Linux problems, a great solution is usually to run from source. You are then avoiding the additional wrapper layer of your python being frozen by another Ubuntu computer, and your system files all line up just a bit better as a result. If you are running my built release, you might want to try it out. It is easy now, here: https://hydrusnetwork.github.io/hydrus/running_from_source.html >>19205 >>19207 Not sure. It is number 3 here: https://github.com/hydrusnetwork/hydrus/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc So I'd tentatively say within 24 months. Can't promise anything. I won't get into it too much, but the main block to the work is that hydrus currently assumes it can determine filetype from content alone, and arbitrary file (and text file, a similar problem) support breaks that assumption, so a ton of import, maintenance, and recovery code and UI needs to be improved first. >>19206 >>19210 >is there an easy way to get the tags/URLs for the files I've downloaded since then? Not simple ways, unless you get pretty much everything through subscriptions or watchers--then you can usually just let your backup catch up on the intervening time and file imports naturally. You can scour a newer 'client_files' directory for files to import, but it is a bit of work. Let me know if you get to that point. When you get the freezes and other weirdness, can you run help->debug->profiling? There's a little help item in that submenu explaining how it works. Make some profiles and send them to me however you like, and I can have a look at what is lagging here. Sorry for the trouble! (also check in the bottom-right corner of the main UI--is it saying read/write locked a whole bunch? That means the database is busy doing work, and a common cause of lag like yours)
>>19208 Not really, I'm afraid. Clients generally auto-accept everything they upload. If you have a friend who uses hydrus and the PTR, you can ask if they see a particular sibling or whatever after a few days (enough time for jannies to catch up on their queues and for your friend to sync with the appropriate recent update). Note that tag uploads are always approved, only tag delete petitions go before human eyes. Again, if a janitor disagrees with the delete request, you don't usually know--your client deletes them anyway. Were I to write the PTR again, I'd have better feedback on this. Maybe there's a future version that communicates things better, I'm still thinking about it. >>19211 I'm afraid they have rather aggressive blocks that seem to apply to people or regions not quite at random, but in a way I don't really understand. Used to be powered by Cloudflare, but I think they have in-house or other anti-automated-downloader tech too. They are always short on bandwidth, is the reason, so if you can use a different site, I recommend it. I don't think v514 changed anything with how it accesses sank, so I think you were hit by bad luck. You might like to try clearing your sank cookies under network->data->review session cookies, and/or explore Hydrus Companion, which may help you synchronise your logged-in browser cookies to hydrus, and may grant you access. >>19213 >>19214 >>19218 Yeah, I think the next step here is to tune the search better. The search works fast, but I suspect that 'skirt' and 'ribbon' and other unnamespaced tags generally provide misc errata and should have far less weight than 'character' and 'series'. Since you can now get suggestions from just a selection of tags on the right, I am now interested in feedback on this idea. Please, all who are interested in related tags, try doing searches on different subset selections of tags and tell me what is useful, what is not, what is ranked good/bad, what is spammed too much? The original implementation a user sent me here had really nice lookup relations where he, say, said 'creator tags can only provide series and character tags' and so on, for all the main namespaces, and unnamespaced couldn't provide suggestions at all. I skipped that since it was technically bulky and I know people would differ on what was good, but as we develop this system, maybe I add an options panel. I'd rather keep things simple, but we'll see. Maybe I can just decrement the unnamespaced search tags' suggestion results and the ranking is improved. Also, atm it includes the top 100 results that are >=4% coincident with the search tags. Other implementations tend to use higher thresholds, so I think I will add a simple option for this, and if people want to try 25% coincident (i.e. the suggestion tag is on more than one in four files with the search tag), they can, and we can figure out a better, less spammy default. I also got really pissed this week by bad siblings and already-implied parents showing up too much. The technical answer to this is a giant pain in the ass, but I'm going to have to do something I think. I also broke 'do not show related tags' when the layout is side-by-side, FUCK
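For anyone curious what 'coincident' means in the threshold talk above, the core idea is just co-occurrence counting. This toy sketch is not the actual hydrus code, but it is roughly the shape of the calculation:

from collections import Counter

def related_tags(search_tags, files_tags, threshold=0.04, top_n=100):
    # files_tags: one set of tags per file
    matching = [tags for tags in files_tags if any(t in tags for t in search_tags)]
    if not matching:
        return []
    counts = Counter(t for tags in matching for t in tags if t not in search_tags)
    n = len(matching)
    # keep suggestions that appear on at least `threshold` of the matching files
    scored = [(tag, count / n) for tag, count in counts.items() if count / n >= threshold]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

Raising the threshold to 0.25 gives you the 'more than one in four files' case mentioned above.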
>>19232 >https://backend.deviantart.com/rss.xml?q=%23frogs this is still just "popular" results. again, if you want an actual tag search, why not make a parser for https://www.deviantart.com/tag/frogs?order=most-recent
>>19219 It is saved in import folders, but no, I am sorry that the current 'tweak it and then share it' situation is bad atm. I can and will write the import/export/duplicate buttons you see in the parsing system for easy import/export, and I want a 'favourites system' so you can share and load up good presents, it just needs time. >>19230 >>19233 Yep, our downloader, which used an undocumented internal API, broke just recently. I'm glad nitter have their clever situation figured out--I guess they were doing kind of what we were, but far more advanced. >>19235 Unfortunately there's no HTML to parse--twitter is a completely modern phone-friendly site where the page you download is just an empty canvas and a ton of javascript. The javascript dynamically creates your HTML DOM and populates it with json and other bullshit pulled from a hundred subsequent requests. That's how the 'infinite load' works when you scroll down on a feed. We can hook into these things sometimes, but developers regularly run 'obfuscate this code' routines on this stuff, and they have been tokenising and server-siding a bunch of the access so sometimes you are looking at your page requesting a file called like 'Dffksitehuorceh2oeuio', and inside is a bunch of compressed obfuscated json. Almost all mainstream websites are going this way, like the DA conversation going on. It is sad, but inevitable as they have corporatised and focused on money as their primary concern. OAuth allows them to control who can do what, so it is critical for them to sunset their old open APIs, which were written by guys like us, and move to the consultant-driven tech. In an era where bandwidth and server time is cheaper than we could have ever dreamed of in the 90s, that's what they are focusing on, lol. Best answer, imo, is to focus on friendlier tech and organisations. Going forward, we cannot expect twitter to be a place to upload and manage art for any kind of archive workflow. It is a place to get likes and attention on new content. Use it as a place to find good artists, and then track down their backlog on boorus using saucenao and similar. >>19236 >>19237 Go for it--if you can figure out the old tag search in a downloader, that'd be great. I'm pretty sure the hydrus html parser can handle xml.
>>19240 Because this gives you the actual page with pics, etc. I don't know how to get Hydrus to see the underlying code to parse it. I've only messed a little with the parsers, making some tweaks, etc, but that's about it.
>>19242 press inspect element bro
>>19240 >>19243 Yeah, but if I give Hydrus that URL, can it see the underlying code?
>>19244 I'm guessing it will just see the code itself?
It seems that hydrus will crash if I try to play an audio file (tested with mp3, m4a, and matroska audio) when I have no audio devices enabled on my computer. In the log I see "[fatal] ao/openal: could not open device". It's a little weird though, because sometimes there's no crash, it just doesn't play and the scanbar is broken. The fatal error still gets logged, but I'm able to browse normally, at least until I roll the dice again by trying to play another audio file. In these cases there's often some additional messages in the log: "QBackingStore::endPaint() called with active painter; did you forget to destroy it or call QPainter::end() on it?" and "QPaintDevice: Cannot destroy paint device that is being painted". Also, I swear it's happened before with video files that have audio, but I couldn't get it to happen in my testing just now. Is this something you can fix or is this an mpv or ffmpeg issue?
>>19234 >Just in case, can you make sure your HC is the newest version? I am using an old version on firefox, but I checked with the newest version on ungoogled chromium and had the same problem.
>>19235 Omg that's great, thanks for letting me know anon!
>>19238 >Yeah, it may be. Maybe that system fusion is different than the python fusion, if that exists? I'm using KDE plasma as my current desktop environment. That currently uses Qt5, so there is a system Qt here. >I don't think I've said yet, but for all 'this shit is crazy' Qt-on-Linux problems, a great solution is usually to run from source. Actually, if I remember correctly, this problem started for me when I switched to running from source. Back when I was using the linux binaries, I don't think I had this problem. Anyway, it seems like this problem is caused by the underlying Qt library itself rather than hydrus, so there's probably nothing you can do about it. The only other thing I can think of that might be useful is that I use another open-source Qt6 application called Anki as well. The placeholder text in textboxes works fine there. In this case, I'm not running it from source like hydrus but instead downloading the program straight from the website and using it like that (like how I used to use hydrus) so it's not exactly the same, but it's kinda close.
>>19246 Oh, I should probably mention that I'm on Windows.
>>19249 You could try running from source without the venv. I've been doing that for a few months now, no dependency problems (yet) and my styles still work (arch btw)
>>19092 Just noticed that this stopped working. Luckily it was a simple fix, they just moved the script tags around.
While I was fixing it, I had a thought: it would be nice if downloader components/content parsers had optional comment sections. Then I could describe what each part of the parser does, which may make it easier for other people to debug in the future.
>>19253 >which may make it easier for other people to debug in the future Sounds like you're implying I can figure out how a downloader I made a year ago actually works when its site updates.
I had another light week. I added some tuning options for the new 'related tags' search and cleared a handful of misc small work. The release should be as normal tomorrow. >>19252 Thank you, this is folded in for tomorrow!
https://www.youtube.com/watch?v=d1zDNBIswek windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v516/Hydrus.Network.516.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v516/Hydrus.Network.516.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v516/Hydrus.Network.516.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v516/Hydrus.Network.516.-.Linux.-.Executable.tar.gz I had a good week that was unfortunately short again. There's some options to tune the new 'related tags' search and some misc little work. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html related tags I have added three new settings to options->tag suggestions for related tags. They are fairly advanced and deal with the technical details of the search, so I only recommend poking around if you have been enjoying this system and want to change how it works. First off is a 'concurrence threshold'. This is a percentage throttle that determines how 'related' a suggestion tag needs to be to its search tag to be included. The higher you make it, the fewer--but more relevant--suggestion tags you will get. Next are two controls that boost or reduce the power of different namespaces in the search. If you want the character tags already on the file to do most of the searching, or if you want to see creator tags most of all, you can alter the weights here. You can also easily set a zero weight to say 'I don't want to see any filename: tags suggested' and so on. I have set them up with some reasonable defaults, so tweak what I've put in and see what you like. I also fixed a stupid bug where you couldn't hide related tags while in side-by-side mode. Sorry for the trouble! misc APNGs that play only once now do that! The new sidecar system now has import/export/duplicate buttons, so you can save and transfer your work! A favourites system will come in the future too, I'm sure. The Client API has some more 'king' info in the 'get_file_relationships' command. next week The big IRL thing that I had to do is now behind me. I've got about six more weeks of lighter 'hydev has to do some family responsibility stuff', and then I am back to normal schedule. Next week I would like to continue on the small jobs, just to catch up on things that I have recently missed.
>>19177 >says its not hard >begins to lay out an annoying workflow
>>19171 Looked through a few menus and didn't see the option to search for tags for an image. Is this a feature of using the public tag repository or something? Never fiddled with it before.
>>19257 Lol! Well, I guess I shouldn't have said "not hard", but it's not that hard. At least I gave you the map how to do it.
(109.21 KB 948x660 Capture_48486.png)

>>19256 >I have added three new settings to options->tag suggestions for related tags. So far so good. It is a delight to tag now. Thanks.
>>19238 > I don't think I've said yet, but for all 'this shit is crazy' Qt-on-Linux problems, a great solution is usually to run from source. Running from source gets me the issue of having python 3.11 and PySide6 6.3.2 requiring python <3.11. Since I don't really feel like installing python 3.10 via apt, I've kinda just given up and started waiting for some upstream updates to qtpy to resolve the middle mouse issue thing, but I've done some Qt work myself (on a limited level for some academic projects) so I understand the pain of Qt being a bitch sometimes.
>>19261 holy shit, seconds after writing this I realized that while I had tried changing PySide6 in the reqs to expect 6.4.2, I had neglected to check if qtpy was set to an older version that wouldn't have upstream fixes. It is. I changed it to the current version. It runs now. I have waited weeks and only realized this after kvetching to someone else. I fucking hate how this always happens when doing anything IT-related. Fuck.
>>19258 i assume he means that once you've imported the images into hydrus already, you can give the url to hydrus and it won't need to download the image itself because it recognizes the image is already in the database by its hash and it will just download tags.
Does anyone know of programs like hydrus but it leaves the files in their original location? Pretty sure I've seen them mentioned in this thread before but I couldn't find them at a quick glance.
>>19264 gallery-dl and hydownloader are the only two I've seen mentioned a lot.
is the login script for TBIB supposed to be working? i keep getting 403 denied errors on hydrus 494. anyone running into this issue on the newer versions?
>>19246 >>19250 Hmm, I am not totally sure. I'm sure part of it is a 'me issue' in that I'm probably not handling the mpv 'no audio devices' error properly, but it is probably also a them/conf issue since that is where it is most likely fixed. The "QBackingStore::endPaint()" stuff is Qt being interrupted while it is in a delicate GPU-only drawing phase and is a common cause of crashing. Sounds like mpv is raising errors when Qt asks it to draw to screen, but most of that happens out of my hands, and I have to say this is all duct-taped together and has always been unstable, so I can't promise much from my end. Check the 'mpv.conf' in your 'install_dir/db' directory. There's a 'help my mpv crashes with WASAPI or ASIO audio.txt' file too, which has a couple lines to try for weird audio situations. 'audio-fallback-to-null=yes' sounds promising. Now, those lines were for a Linux problem, but maybe one or both of them will work for you too. If you feel brave, you can also try ctrl+f'ing this monster reference document: https://mpv.io/manual/master/ Stupid idea: is hydrus set to mute audio? If the crash doesn't occur when hydrus is set to global mute, then check out the 'mute/unmute global audio' shortcut action under file->shortcuts->global. Let me know if you discover a solution! >>19247 Damn, sorry for the trouble. With luck there will be a new version of HC soonish that will fix it up. >>19249 >Back when I was using the linux binaries, I don't think I had this problem. Damn, so in fact the reverse happened--running from source better integrated you into your system libraries, and Qt was thus somehow able to import this stuff half-baked. The built release has its own .so files and stuff and must not be able to cross the barrier. Ok, sorry for the trouble. I'll keep an eye out, but it sounds like this will magically fix itself somehow in the future with a Qt update. Might be they add a specific QSS tag name for the placeholder text, or they update the python style code to calculate the colour better, or somehow your OS Qt gets updated and that propagates down to hydrus. Let me know how you get on! >>19253 >>19254 Thanks, great idea. Just the name is not enough, and the note would allow you to hold a snippet of example JSON and stuff too.
>>19260 Yeah, I've been tagging more myself with this. I hate typing tags, but now I can easily double-click to add. There's technical bullshit to get around, but now I really want a sibling- and parent-filter. >>19261 >>19262 Hey, fantastic. This bullshit is always the way when setting up pip environments and all that shit. The number of times I've had to bang my head against the wrong version of pyinstaller or opencv, no worries. I'll write down these numbers myself and note it in the setup scripts. I'm sure we'll stumble across more py 3.11 issues in future. >>19264 For a corporate program, I used to use ACDSee a million years ago. The version back in 2010 used to allow basic tagging and searching. >>19266 Sorry, I have not heard anything about it in years. It is quite possible it has broken--I'm afraid many of my old login scripts have, often with 403, when the sites add captcha to the login. Hydrus Companion is usually the best solution, to copy your browser cookies across to hydrus.
I accidentally perma-deleted a few of my files. No problem, they're still in the OS trash folder and I can recover the files. However I would like to edit the imported time, so that I can put them back in the order they were originally. I can get the SHA hash of a file, but I'd like some advice for finding the hash ID and knowing which timestamp/s I'll have to modify in SQLite. Thanks!
[cont] >>19269 Figured it out. Had to edit 3 current_files tables (not sure what the difference is). The hash_id can be checked in the GUI by right-clicking a file and selecting share->copy->file_id (#).
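For anyone else who needs to do it, the edit boils down to something like this. A sketch only--the hash_id/timestamp column names are just what I saw in my copy and may differ between versions, so close the client and back up client.db first:

import sqlite3

file_id = 12345             # from right-click -> share -> copy -> file_id (#)
new_timestamp = 1675000000  # the unix time you want as the import time

db = sqlite3.connect('client.db')
# there is more than one current_files table (I'm not sure why); update them all
tables = [row[0] for row in db.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'current_files%'")]
for table in tables:
    db.execute(f'UPDATE {table} SET timestamp = ? WHERE hash_id = ?', (new_timestamp, file_id))
db.commit()
db.close()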
>>19267 >If the crash doesn't occur when hydrus is set to global mute Tested, it still does. >'audio-fallback-to-null=yes' sounds promising That fixed it, thanks! It plays normally without crashing.
Ugoira and animated webp support when?
A friend and I are currently a bit confused about extension-less files mingling between image files in the client_files folder, that all start with 0x78DA when viewed in a hex editor. Are these some kind of leftover files created by downloaders?
Imported a 4plebs downloader from the repository >picrelated I try to download a thread from 4plebs but it doesn't work. Do I have to put the URL elsewhere or something?
I have a bunch of pending tags but I don't want to commit them and risk the mouth breather that runs the PTR nuking them all because of one tag he doesn't like. Can I just map the pending PTR tags to my local tags without the current tags?
I'm a little confused. I want to import a b4k thread to get the filenames for images I already have, and the b4k parser already supports filename: namespaces, but when I add it to a download page it doesn't add the filenames. I checked the parser and it parses the filenames correctly, I even tried doing the expensive "don't check for matching hashes/urls" thing, but still no new filename: tags. Any clues?
>>19273 I took a look at one in my file directory, it's a "zlib compressed data" file. After decompressing it, it's a massive one line ascii text file (9.6 million characters!). It seems to have client ids, booru: and filename: namespaced tags in it. For example: [28146786, "filename:waiting_for_the_client.png"] Not sure the use case, but it seems intentional. If you're on Linux you can use the "file" command to check the magic numbers (what kind of filetype a certain file is).
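If you want to peek inside one yourself, a couple of lines of Python will do it (assuming they are all the same zlib-wrapped JSON as the one I opened):

import json
import zlib

path = 'path/to/one/of/those/extensionless/files'

with open(path, 'rb') as f:
    payload = zlib.decompress(f.read())

rows = json.loads(payload)
print(len(payload), 'bytes decompressed')
print(str(rows)[:500])  # peek at the first few entries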
>>19277 perfect! thanks, that worked. as long as it's part of hydrus' regular operation i'm fine with that.
Hi. I'm trying to transfer media between clients/dbs but I want to preserve all data (e.g. ratings, urls, tags, favorited). The 516 update mentioned 'transfer your work' but I'm not sure it's referring to what I want to do. At most I've been able to transfer tags and urls separately, but not ratings and favorited information. Is what I want to do currently possible?
I had another light week. There's some misc fixes, symlinks for export folders (thanks to a user!), a new way to show image transparency (which will particularly help in the duplicate filter), and I finally removed superfluous sibling and parent suggestions from 'related tags'. The release should be as normal tomorrow.
I'm trying to make a downloader for imginn.com but I keep getting 503 errors when trying to connect. It seems to be a Cloudflare error and editing the headers doesn't fix it. Any idea why?
https://www.youtube.com/watch?v=q4veLQfXFLI windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v517/Hydrus.Network.517.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v517/Hydrus.Network.517.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v517/Hydrus.Network.517.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v517/Hydrus.Network.517.-.Linux.-.Executable.tar.gz I had another short week, but I'm generally happy with the work, which was a mix of different things. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights Thanks to a user's work, Export Folders now support exporting to symlinks! If you are on Windows, you may need to run hydrus in Admin mode for any symlink exporting to work (or, on Windows Pro, I believe you can mess around with Group Policy Editor to give normal user accounts the permission). The newly upgraded 'Related Tags' shouldn't suggest superfluous siblings or parents any more! I'm pretty sure I got the logic right here, but let me know if it is still suggesting the wrong things somewhere. Hydrus can now show a checkerboard pattern behind a transparent image instead of the normal background colour. There's two new checkboxes, in options->media and ->duplicates, to control it--by default I have set it just to show in the duplicate filter, where it is useful to have transparency pop when you are comparing two similar files. next week Yet again, I'm a little busy with IRL again next week, but otherwise should be normal. I'll do more work like this! Thanks for your patience!
Thanks man! We really appreciate your work!
(32.61 KB 540x253 ice cream.jpg)

I really like tag suggestions. Should help a lot more than recently used. I just finished cleaning up and refining a bunch of my personal parent-child tag relationships. I've made it such that this chain of implications is followed, >kemono:catgirl -> kemono:furry girl > kemono:furry & kemono:kemono girl > furry level:kemono And mirrored this structure for the kemonomimi:* and feral:* namespaces such that, >Kemonomimi:dragonboy > kemonomimi:reptile boy > kemonomimi:reptile & kemonomimi:kemonomimi boy > furry level:kemonomimi I also did three large passes over everything in the character:* and ip:* namespaces to clean up a bunch of un-childed characters and add two new categories of parent focused namespaces. Firstly, a set of namespaces, developer:*, animation studio:*, and publisher:* which I've all made red just like the creator:* namespace. Secondly, genre:*, which is often used a bit loosely, but is effective enough and especially useful for specific videogame genres like run and gun, fighting, platformer, et cetera. I'm feeling very accomplished with my autism as of late, though I still have many months of tagging until I am unboned, and so decided to skim this and the last thread so I could collect and condense my previous suggestions for Hydev. Half to remind him, half so I don't forget what I've already said and waste my and others' time repeating them, and a third half to add a couple new ones. I just updated Hydrus to the latest version so I could double check that nothing was implemented that I missed. I've ordered them by what I would guess is difficulty of implementation and will probably post them once per thread from now on, and drop any that he says are outright impossible even long term like mutex. Parent display Modify either one of the experimental dropdown options in the tag manager, "switch to 'multiple media views' tag display" or "switch to 'display tags' tag display" so it is a permanent toggle, rather than reverting upon closing the tag manager. I do not know what the difference between these two options is. This already displays all parent tags that are applied through the parent child relationship with other tags on a file as regular tags in the tag manager, but lacks permanence. Add decorators to the parent tags denoting they are only present as a parent tag if they are not already hard tagged on a file. Sibling display An option to display siblings as ideal tags in the tag manager with decorators denoting they are not actually the ideal tag. Similar to the option above, however this would hide the unideal sibling tag. archive thumbnails When a .zip, .rar, .7z, etc. file contains images, the first image should be used as a thumbnail. This makes browsing a collection of manga/comics or hentai much more feasible as once you narrow down the search pool to a reasonable amount you can still browse the files by eye and decide to look at something based on often untaggable art styles or visual memory. This can be simulated by turning an archive into a .png archive with the cover page as the png image, but that's of course quite tedious and impractical. Grouping namespaces By color You've said this is on the back burner. Hopefully it can be done one day, as it will vastly improve the presentation of tags in the tag manager and increase the speed of viewing them.
"Non-existant" parent tag autocomplete in tag/parent/sibling managers When creating a new parent tag, unless that parent tag is manually tagged on some file, it does not show up in the autocomplete window for either the tag/parent/sibling managers. I'd like such cases to show up in the managers' autocomplete with the number (0) beside them. Effective tag count display option for parents and siblings in tag/parent/sibling managers Given you can have a wealth of files tagged with unideal siblings and child tags that effectively apply the ideal and parent tags, it would make sense to have an option for the display of a tag's count, the number in parenthesis beside the tag, to display the number of files that have the tag effectively applied to it, rather than just the number of files hard tagged with the tag. This may help get better autocomplete results for some use cases since this number determines the order in which they display. If possible it'd be nice for this idea to be expanded to the meta information under "review sevices" -> "tags" so that you can see a seperate number for mappings that shows how many mappings there are if soft-tagged parents were to be treated like hard tags, and redundant unideal/ideal siblings tagged on the same file are not counted. With this you can get an accurate number of how many tags are effectively applied to your files on average, rather just how many are hard tagged on average. I really like this sort of info and it feeds directly into my autism. Namespace siblings As an alternative solution to editing namespaces, but also good for when the namespaces taken from boorus and such aren't the ones you want to use. One could make a custom parser to automatically change the namespace of incoming tags, but this is done on a per site basis as I've come to understand and I believe is more technical, while siblinging namespaces would apply across a wide variety of sites that tend to have uniformity in tagging and would be more user friendly. Editing tags/namespaces i.e. hard replace siblings It's been said many times that this is a difficult task, and that siblings will take care of most use cases, I still believe in the utility it has for altering typos and fixing namespaces which can't be sibling'd. Sibling tags still cause the ideal tag to show up in searchs, so typo siblings will negatively effect a search if the typo is similar to other tags. This also expedites the process of eliminating typo tags that may already have been caught up in a web of child-parent relationships so they don't have to be reapplied. >>>/hydrus/18934 >I often find files marked as safe, questionable, AND explicit! Mutex would be nice for this, but it's an easy fix without it just by searching for combinations of these three tags and eliminating one or more from any files that show up until there's no search results, but mutex would have a lot of use for other more complex relationships. Probably many years away though, if ever, given how complex it'd make things. >make sibling tag 1 out of bad tag 1, select all files with bad tag 1 and, in the tag manager, add sibling 1 and delete bad 1. This seems like a decent solution until tag editing is implemented, if it ever is, as last December Hydev said it was very complex and he was still thinking about it.
In case anyone has had trouble with Danbooru's Cloudflare protection in the last few days, see this post: https://danbooru.donmai.us/forum_posts/233627 They've blocked all User-Agents that try to impersonate browsers, and the admins are recommending that people use a User-Agent with some unique string, such as a link to their user profile.
I'm looking to add some custom CA files to hydrus (and/or Python, I guess); I'm specifically having issues with motherless.com 's CDN, which gets me the following error:
hydrus.core.HydrusExceptions.ConnectionException: Problem with SSL: HTTPSConnectionPool(host='cdn5-videos.motherlessmedia.com', port=443): Max retries exceeded with url: /videos/CODEHERE-720p.mp4 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))
Indeed, trying with curl would get me the same error, so I manually added their shitty gogetssl CA to my trusted certs (yes, I know, coomer brain). This resolved the issue with curl, but hasn't with Hydrus, even after a restart. As I'm running from source on Linux, I made sure that my CA path was the correct one by running, after sourcing the venv:
python3 -c "import ssl; print(ssl.get_default_verify_paths())"
This reported /etc/pki/tls/cert.pem, which is what I expect (and which symlinks to what curl uses, too). I also made sure that this is the exact same python version Hydrus is started with. Am I missing something here? Some weird Python cache / additional location I wouldn't be aware of? Thank you!
>>19269 >>19270 Well done, that's the correct answer! I'll be adding some UI to manually edit all file timestamps soon, mostly as part of the more recent modified timestamp extension. The three file domains are because of a couple of umbrella domains I use, 'all local files' and 'all my files', which represent all the files on disk and the union of your 'my files' domains (if you have multiple) respectively. Just technical stuff, not a big deal, but you want the timestamps to all line up.
>>19271 Awesome!
>>19272 I hope Ugoira will come when I eventually get CBR/CBZ tech going. I don't have any zip-reading tech working in hydrus yet, so I don't have easy means to read an Ugoira, which is mostly just a zip of jpegs. There is also some javascript/JSON frame timing bullshit to handle too. It has been wanted for a long time, so I want to get it done, but it hasn't happened so far. Animated webp will have to wait for whatever the hell patent or licensing issue is plaguing the webp library that many media libraries (I think all of ffmpeg, PIL, imagemagick) use, the upshot of which is that the library can encode animated webps but cannot decode them. I'm assuming the ego dispute or technical problem causing this situation will be fixed in the future and it'll all suddenly work. When a library can give me a list of bitmaps and some frame timing data, I should be able to flick the switch and get animated webp going in a week.
>>19273 >>19277 >>19278 Yeah, these are the PTR's update files. Just some zipped up json. One set, the 'definitions', is lists of ( id, tag/hash ) rows, and the other, the 'content', is rows of the ( tag_id, hash_id ) kind. I hang on to them so if we ever need to do some reprocessing, you don't need to talk to the PTR again.
>>19274 When you say 'it doesn't work', can you say a bit more? Does it try to put it in a watcher page, but it puts an error in the 'notes' column of the 'checker log'? Or does it try to download the URL as if it were a file in the 'urls downloader'? Check both network->downloader components->url classes and url class links. If the url class is not in the 'url classes' dialog, then just drag and drop the png on there. Then check if the url class is linked to an appropriate parser under 'url class links'. When I look here, https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/4plebs , it looks like there are several more things there. If you did not, I recommend you download all the pngs there and drag and drop them on Lain under network->downloaders->import downloader. She should sort you out.
a cool and small feature I'd like to see for the tag suggestions would be a way to alt+enter/click on a suggestion to mark it as deleted instead of adding it, and then also make it so that deleted tags don't get suggested for a file. it would just be a nice way to go down the list of suggestions one by one to see which ones apply and which don't, without manually scanning around a bunch. this would be particularly useful when there's a lot of suggestions.
>>19275 I'm not sure if this will do exactly what you want, especially as I don't have a 'pending only' option in the filter, but most of the time, if you want to move tags from one service to another, you want tags->migrate tags. I will add 'pending' to the pic related dropdown menu, should be easy. You may also have already set this up, but if you want to not upload to the PTR in future, or you want a system where you upload and also make a local copy on 'my tags' or a new 'my ptr tags' service or something, then just hit up your default tag import options and parse to the different location or to both.
>>19276 Check your default tag import options under network->downloaders->manage default import options and then hit the options for 'watchable' urls. I have set thread watchers to not get any tags by default, since most thread parsers only offer 'filename:' tags and most users don't want them. Compare the settings for 'watchable urls' to 'post urls' and you'll see how to set it all up (just click 'get tags' on the service you want, most likely). If you just want it for the b4k domain, then you can set specific tag import options just for that domain in the list of url classes on that dialog.
>>19279 Yeah, sorry, I haven't got around to getting this done. There's no easy way to transfer ratings or archive/inbox yet. If you are willing to do some database work, we may be able to figure something out doing a manual copy in SQL. Otherwise you might need to wait for me to figure out ratings and inbox/archive in sidecars and/or a proper 'merge this client into this client' tool. I wanted to write a client merger when I added multiple local file domains, but all of last year turned into a clusterfuck and I ran out of time on everything. If you want to try the manual transfer, you might want to email me at hydrus.admin@gmail.com or hit me up on discord, https://discord.gg/wPHPCUZ , so we can talk one-on-one, or we can do it here. Just outline basically what you want to transfer and I'll figure out a procedure for you to follow.
>>19281 First solution for cloudflare is to try Hydrus Companion. Visit the site in your browser and then use HC to copy cookies as well as the user-agent header across. That fixes most problems. However, I have heard that cloudflare have new tech for a serious 'I'm under attack mode' (IUAM) that apparently remembers which tls ciphersuite you used to make the initial connection and get the cookies, and includes that in your id fingerprint too. If this site is being protected that hard, hydrus doesn't have a solution yet.
>>19283 Thanks! Keep on pushing!
>>19284 Thank you, this is well thought out. I'm glad you are having fun with all the parents and stuff. I have saved your main feature suggestions here. Nothing here is impossible, but you know I'm always short on time, so things are always getting put off. Worst of all of them is namespace siblings. I'd love them, but the math could be an utter nightmare. I've got it in the back of my head just to bullshit it with a simple spammy solution that'll get the job done sooner, rather than the gigabrain algebraic one that tries to harpoon the white whale, but I'm still thinking about it. Just briefly on other things:
On the technical side of things, I have different tag display 'contexts' for where things are stored or shown, and some controls, like taglists, are initialised with that value and so fetch tags from, or perform math and display in, different ways because of it. The display vs multiple media views is a weird one, slightly apples to oranges. The display contexts, for your info, are:
storage - what tags actually are, as saved on disk. this is what you edit in the manage tags dialog
ideal - what tags should look like with all siblings and parents applied
display - what siblings and parents are applied right now (might need more work to sync with ideal). this is the normal 'pretty' view of tags
single views - display + the tag filter for the media viewer taglist
multiple views - display + the tag filter for the thumbnail taglist
Overall though, I agree on more parent and sibling display options. manage tags is a nightmare to work with sometimes.
Archive thumbs sounds great. As I said in a post just above, I'll need some archive-reading tech infrastructure first, but it should be doable.
Grouped namespaces will need a big expansion to my taglists, but I want it personally too. I want [+] buttons too to hide them, and options for that.
Shit, I have to figure out the (0) count suggestions again. It did work once, but then I broke it somehow. This is where we get into sibling/parent nightmare logic. Having 'combined' tag counts would also be neat. I'm planning a massive overhaul of my basic raw 'tag' object in future so that it knows its siblings and parents on load, rather than needing db hits for this stuff, which will grant a whole lot of tools here.
Yeah, hard-replace tag tech would be nice. PTR janitors want this too. And yeah, adding a 'mutex' relationship is beyond me for now. Just the feel of negative logic as well sounds like it would add a lot of fun problems. Maybe in future, when I feel more comfortable about a dozen different systems, but it may be that the need for that sort of finicky tech is overridden by other systems working better anyway.
>>19279 >>19289 Thanks for the offer. I'll hit you up again if I get to a point where I really need it. I'm not desperate to transfer the files right now, but it's definitely something cool I'd like to see at some point in the future. It might be better to consider not how to merge databases, but how to export/import files with all their database entry info between databases/clients (which could then be used to merge databases). If it's even possible, dragging and dropping files between clients to do this would be the dream. Thanks for hydrus. It's amazing.
>>19286 WARNING, I AM NOT AN EXPERT AT THIS, AND DOUBLY SO FOR LINUX: I use 'requests' for all normal network stuff, and I'm pretty sure it uses 'certifi' as well as/instead of your system certs. It makes a 'certifi' directory on the built releases with a cacert.pem in it. As here:
https://requests.readthedocs.io/en/latest/user/advanced/#ca-certificates
https://certifiio.readthedocs.io/en/latest/
So, if there's a 'certifi' dir in your venv's site-packages, it might be that it is using that. How you edit or add to that is beyond my knowledge, but if you are ok editing my code, you can just plonk your own cert in my requests call, as here: https://requests.readthedocs.io/en/latest/user/advanced/#client-side-certificates
This line is the master through which all normal hydrus network activity starts. It will take that 'cert' arg: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/networking/ClientNetworkingJobs.py#L793
Also, if you don't actually care about ssl and you just want this shit to work without an error, even just for a certain period, you can go crazy-mode and uncheck 'BUGFIX: verify regular https traffic' under options->connection. But given how much you know about this, I assume you want it secure.
EDIT: python -c "import certifi; print( certifi.where() )" in your venv should locate it for you.
>>19288 Great idea! I absolutely wanted some way to 'filterise' this so you could say yes, yes, no, yes a bit better. Let's try something like this out.
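Back on the certs for a second: for reference, the two stock 'requests' mechanisms look roughly like this. The path is just an example, so point it at whatever PEM bundle you actually trust, and if I have it right, it is the 'verify' argument rather than 'cert' that controls CA trust:
import os
import requests

# Option 1: environment variable, no code edits needed. Set it before launching the
# client (e.g. in client-user.sh) and requests should pick it up instead of certifi:
#   REQUESTS_CA_BUNDLE=/etc/pki/tls/cert.pem ./client.sh
os.environ[ 'REQUESTS_CA_BUNDLE' ] = '/etc/pki/tls/cert.pem'

# Option 2: the per-call 'verify' argument, which is the sort of thing you could
# plonk into that ClientNetworkingJobs call if you are editing the code anyway:
response = requests.get( 'https://cdn5-videos.motherlessmedia.com/', verify = '/etc/pki/tls/cert.pem' )
print( response.status_code )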
Hi. So I can't seem to find where to remove the 32 MB gif limit. Where's the option?
>>19291 Yeah, I think ideally I want to get the Client API able to handle all these sorts of metadata, and then I'll make a framework so clients can talk to each other. I can then write whatever UI we want for whatever migration or remote search or whatever we can think of, like a window you drag and drop files onto to import them to a remote client with some metadata settings (e.g. 'copy all metadata, including import times'), so this can be easy. We'll see, it'll be a lot of work, so maybe we'll limp along with sidecars. Should be doable to add ratings and inbox to them, and then I just need some timestamp editing tech. When this stuff does roll out, let me know what you think of it!
>>19293 File import options. New importers will use the defaults under options->importing. Very old importers (years-old subscriptions) or importers where you have set and saved specific file import options will not use the defaults but will have their own set. Click on the 'import options' button on any downloader to see if it is using the defaults or its own specific set.
>>19292 Thanks for the help! certifi.where() told me that it indeed wasn't using my system's default; copying my system's cacert on top fixed my problem.
>>19293 Thanks!
>>19289 Unfortunately I already set up watchable urls to grab filenames, which is part of the reason I'm confused about why it's not working.
Does anyone know about this issue? I've been getting this error a lot lately on multiple sites for all downloads. It doesn't seem to be url specific as I can try again later and it sometimes works.
Problem with SSL: HTTPSConnectionPool(host='somesite.com', port=443): Max retries exceeded with url: /index.php?page=post&pid=0&s=list&tags=could_by_anything (Caused by SSLError(SSLError("bad handshake: SysCallError(-1, 'Unexpected EOF')")))… (Copy note to see full error)
Traceback (most recent call last):
  File "urllib3\contrib\pyopenssl.py", line 488, in wrap_socket
  File "OpenSSL\SSL.py", line 2075, in do_handshake
  File "OpenSSL\SSL.py", line 1719, in _raise_ssl_error
OpenSSL.SSL.SysCallError: (-1, 'Unexpected EOF')
Is this the right place to ask some noobie questions? I read the whole getting started and advanced guide and I don't think I've seen anything about these there.
1. Is it possible to mass delete notes with a specific name? When I select my files and right click > manage > notes, it only edits a single one. Also, what about importing/exporting? I looked all over the import/export menus and only found support for tags and urls.
2. Is it possible to update import time? I use it as a secondary sort, and I usually sort my stuff by creation date in explorer, which keeps images that are part of a set together most of the time, but I accidentally imported a bunch of my files with the autosort option on, which ruined the pseudo-grouping, and when I try to reimport, it won't update the times. As a workaround I used the correct import sorting to add autonumbered tags along with namespace sorting, but I'd prefer if I didn't have to number every new file for the sorting to work properly.
3. Is there a way to autonumber a selection aside from reimporting? For example, you'd select a bunch of files and it would auto number them in the current sort order into whatever namespace you set. Would be useful for numbering pages or image sequences.
4. What about namespace renaming? For example, I autonumbered my imported files into a "date" namespace, but then I decided I'd prefer if it was called "order". Is export > import the only way?
I have a question: I've been willing to get into Hydrus for a while, but since I've already got like a million uncategorized files I fear it won't be doing me much good for the stuff I downloaded in the past. Especially since most pictures came from mass downloading threads, and are thusly saved with nondescript names. Is there any tool, from AIs to programs, that could scan all the files from a folder and appropriately tag them using information in the file itself OTHER THAN THE METADATA (meaning it would look at an image of a cat and will properly use a local Hydrus instance to tag that as "cat")?
>>19299
>What about namespace renaming?
Not possible right now. Too difficult to implement quickly, but it is planned in the long term.
>I autonumbered my imported files into a "date" namespace, but then I decided I'd prefer if it was called "order".
Hydrus already keeps track of both the last modified date and the import date, and you can sort a page of files by these. A date namespace tied to import time is totally redundant and unnecessary.
>For example you'd select a bunch of files and it would auto number them in the current sort order into whatever namespace you set. Would be useful for numbering pages or image sequences.
There are functions called "pages" and "volumes", but I never bothered figuring them out and was last told that Hydrus has had longstanding issues with ordered sets of images. My personal solution, given the images already have alphanumerically ordered filenames, which is the case for 99% of the ordered images I imported, is to simply tag any set of ordered images with a specific "set:*" namespace tag. Then, when I search that set and pull up all images with the particular "set:*" tag, I can select the option in the second image and the set will be properly ordered. Of course, you must make sure when importing files from your folders to select the options here in the third image, and select custom in the second image to add the "filename:*" namespace to the dropdown menu for convenience. Don't worry if you've already imported a ton of files without attaching their original filenames. You can reimport all the files without any deletion, select the filename tagging option, and Hydrus will recognize you already have the files while applying the filename tags to them. Further, if you're looking to order whole chapters of manga and such, I recommend not importing individual pages into Hydrus. Instead, zip them in an archive file format so you can just order the chapters, or even further the volumes, or just zip and tag the whole story at once. Then open the file with an archive comic viewer like CDisplayEx when you want to read the story.
>>19300
>I've already got like a million uncategorized files
To give everyone a number: 4th pic related took me about 6 months of dedicated manual tagging to achieve, and I still have 5 or so months of manual tagging to go.
>Is there any tool, from AIs to programs, that could scan all the files from a folder and appropriately tag them using information in the file itself OTHER THAN THE METADATA (meaning it would look at an image of a cat and will properly use a local Hydrus instance to tag that as "cat")?
Can I assume, by the fact that you don't want to use metadata and you say "local Hydrus instance", that you don't want to use the PTR? In that case, the answer is no. AI image recognition is what you're looking for, but it's only just now taking off and would still be wildly inaccurate in most cases. You're going to have to wait some years for it to reach more practical levels, and then even longer for Hydev to do something like implement it into Hydrus for tagging under local services.
>>19301
>Hydrus already keeps track of both the last modified date and the import date and you can sort a page of files by these. A date namespace tied to import time is totally redundant and unnecessary.
My problem is that the import times are different from the original creation times, and the modified times may also be different. Reimporting doesn't seem to update the import times, so I guess my only option is to export everything > delete > reimport. But then, how do I export/import notes? The menus only offer tags and urls.
>I can go select the option in the second image and the set will be properly ordered.
I kind of already do that, it's just that not all of my files were properly named, so I relied on creation (download) dates. I still use the filename namespace as the last sorting tag as a kind of last resort, though.
>>19300 >>19301
The wd1.4 tagger is pretty good for tagging anime at least; everyone is using it to tag their training datasets. Not sure if there's a standalone version though, so you will have to install the stable diffusion webui, as it's an extension to that. The results of the tagging are txt files containing tags separated by a comma and a space, so you could import those as sidecars.
https://github.com/toriato/stable-diffusion-webui-wd14-tagger
https://github.com/AUTOMATIC1111/stable-diffusion-webui
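If it helps, here is a quick and dirty way to turn those comma-separated txt files into one-tag-per-line files, which are a bit easier to feed to the sidecar importer without fiddling with separators. It assumes the tagger wrote image.png + image.txt pairs and that your sidecar import is set to look for image.png.txt next to each image, so adjust the naming to whatever your sidecar settings actually expect:
import pathlib

folder = pathlib.Path( '/path/to/tagged/images' )  # adjust

for image in folder.iterdir():
    if image.suffix.lower() not in ( '.png', '.jpg', '.jpeg', '.webp' ):
        continue
    tag_file = image.with_suffix( '.txt' )  # the tagger's 'image.txt' output
    if not tag_file.exists():
        continue
    # split "tag one, tag two, tag three" into individual tags
    tags = [ t.strip() for t in tag_file.read_text( encoding = 'utf-8' ).split( ',' ) if t.strip() ]
    # write 'image.png.txt' next to the image, one tag per line
    sidecar = image.with_name( image.name + '.txt' )
    sidecar.write_text( '\n'.join( tags ) + '\n', encoding = 'utf-8' )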
>>19212 >There's also a 'my files' vs 'all known files' on/off button to change the search domain ('new 1' vs 'new 2' in the test last week). I can't find that button. Where is it? I only see the quick, medium, and thorough buttons.
Quick question -- is there a way to easily convert selected tags into notes for certain files that have them? I get most of my files from social media sites, and most of mine are from before Hydrus introduced note parsing. It would be really nice if I could easily convert a namespaced 'tweet text' tag into a proper note without having to do it manually for thousands of files. Also -- will there eventually be a way to search through file notes as opposed to just tags? Would be the best of both worlds as I don't have to deal with note tags popping up in suggestions, yet I can still search them with a system prefix. Thanks in advance!
>>19285 so what are we supposed to change to get danbooru working again? I couldn't find any documentation on where to change the user-agent (unless I missed it). also, if the current user agent impersonates a browser, how is it that they can tell the difference between a real browser request and hydrus?
>>19300 >>19302 check out this implementation of using deepdanbooru with hydrus, which lets you batch tag pictures and automatically send the tags to hydrus: https://gitgud.io/koto/hydrus-dd/-/tree/master/hydrus_dd admittedly the wd14 tagger appears to be superior in terms of recognizing characters, but I don't believe anybody has implemented it for hydrus. I would give it a shot but I can't into code
>>19306 I'm too smooth-brained to figure out how to run hydrus_dd on Windows. Is there a guide anywhere for deciphering this tribal knowledge?
In the API, for tag browsing, there is a pagination cursor option. Anyone know what this is for? It doesn't appear to be a page number, more like a string of numbers and characters.
I ended up completely breaking my Hydrus install somehow. Shit was throwing up errors and the recovery options from "help my db is broke.txt" didn't work. I ended up installing a new version and manually went through the db files to extract my tags for each hash to a text file and re-imported everything to the new install. 99% of my files and tags seem to be intact but I had dozens of favorite searches in the old Hydrus install that I want to bring to the new one. Which db/table would I have to look in to find my saved searches?
>>19300 instead of using AI to tag things, a simpler way is to try and find your images already tagged on boorus.
1. get the md5s of your files. try googling something like "get list of image md5s in a folder", you'll probably find something. or if you know how to do some basic coding you could probably make it in python pretty quickly (a rough sketch follows below). basically, you just want a text file with an md5 on each line.
2. for each md5, create a url like this: https://gelbooru.com/index.php?page=post&s=list&tags=md5:[MD5 HASH HERE] so you now have a list of urls. for example: https://gelbooru.com/index.php?page=post&s=list&tags=md5:08363a8e69740434800f55167bbaeb36 this is simply checking for the image with that md5 on gelbooru.
3. create a downloader page in hydrus and give it all the urls. if your image is on the booru, hydrus will download it.
gelbooru was just an example, you can use the md5 hash search on other boorus as well. this will get a good portion of your images, but probably not all of them. if you've been downloading from 4chan threads and stuff like that, a lot of your images will have different md5s just because of people posting thumbnails or phones adding exif data or other things. but it'll probably be a decent portion.
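here's that rough sketch for steps 1 and 2 in python. the folder path is a placeholder; it just hashes every file in the folder and writes one gelbooru md5-search url per line, and you can then paste the output file into hydrus in batches:
import hashlib
import pathlib

folder = pathlib.Path( '/path/to/your/images' )  # adjust

with open( 'gelbooru_md5_urls.txt', 'w', encoding = 'utf-8' ) as out:
    for path in sorted( folder.iterdir() ):
        if not path.is_file():
            continue
        # hash the raw file bytes, the same md5 the boorus index on
        md5 = hashlib.md5( path.read_bytes() ).hexdigest()
        out.write( 'https://gelbooru.com/index.php?page=post&s=list&tags=md5:' + md5 + '\n' )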
>>19300 >>19310 oh and also obviously don't copy paste a million urls into hydrus all at once, it'll definitely crash. do it a few hundred at a time.
>>19305
>I couldn't find any documentation on where to change the user-agent
network > data > manage http headers. you can change the global one or add one for danbooru specifically
>>19310 >>19300 It's even simpler than that. You can just copy all your files' md5 hashes with right click > share > copy > hashes (I think you need to be in advanced mode), then just paste them in the gallery downloader (make sure you turn on that option in the cog that says it will merge multiple pasted queries into one worker or something, and also set the tag import options not to ignore images you already own). Also go to file > options > downloaders and change the wait interval to 1 second, or it will only try finding 1 image every 15 seconds. Still, you will most likely end up with a large number of images it didn't find, in which case you can try some of the iqdb and saucenao searchers like hatate. But using ai tagging could be just faster and easier at that point, since you don't have to confirm you got correct matches.
can you add a way for users to give a list of tags that the tag suggestions won't suggest? there are some tags that don't make sense to be suggested, but they aren't entire namespaces, so putting the namespace at 0% suggestion power doesn't work.
Since I couldn't find a way to bulk remove notes, I tried looking into a more direct solution with the database, and I noticed that if you delete a note manually in the client, it will still keep the whole note inside the database (labels and notes tables in client.master.db), it just unlinks it from the file (file_notes table in client.db). I also tried all of the database cleaning tools that are in the client and the data still persisted. Is that intended or a bug?
I had an ok week. There's a mix of small fixes, the addition of hours:minutes to system:time calendar predicates, and a neat overhaul of how autocomplete tag search presents its pre-fetch results on slow searches. The release should be as normal tomorrow.
>>19292 >I absolutely wanted some way to 'filterise' this that's not even what I meant. I just meant a way to mark as deleted in the tag entering window. But actually that's a great idea! I could see a sort of tag "active/deleted" filter, that would just go down the list of (it should probably be thorough) tag suggestions and you choose to either add the tag, or mark it as deleted. And I imagine it would keep going until either you just stop it yourself or it runs out of suggestions for the file. I guess a question here would be: should it only run for the single file, or treat it more like batches in the duplicate filter, where it'll ask if you wanna continue to the next file in the current search, after? anyway yeah that sounds like something that could be really cool.
https://www.youtube.com/watch?v=lDWfgUjlEMk
windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v518/Hydrus.Network.518.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v518/Hydrus.Network.518.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v518/Hydrus.Network.518.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v518/Hydrus.Network.518.-.Linux.-.Executable.tar.gz
I had an ok week. Tag autocomplete search should be a bit nicer to work with today!
Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html
tag autocomplete
I rewrote the part of tag autocomplete search that handles showing you early results. The whole thing works better and is smoother.
First off, every time you type something like 'cat' and the search is taking a while, you should now always get 'cat' and 'species:cat' and anything else that might match, with counts, and your selection will be preserved when the full results come in. Quick-entering a tag on both search pages and the manage tags dialog should be faster, easier, and more reliable. This is roughly how things used to work, a long time ago, but the various updates and new options I've added over the years made it patchy and buggy--it should be working again, and with some new quality of life improvements.
Secondly, I've improved sibling lookup in the manage tags dialog. You should now always see siblings, on the pre-results above and the final results, even if the siblings have no count. I think we might want some options for handling how these display and rank (e.g. 'always put the ideal of what I typed at the top of the list'), so see what you think.
Lastly, when you search for tags on a page of thumbnail results, you now get some pre-fetch results very quickly! The full results, which have extra sibling lookup data, will fill in a few seconds later, but, fingers crossed, there will be no more huge waits for anything on thumbnail tag lookups!
misc
The system:time predicates now support hh:mm, like '10:30pm', on the calendar search! Should be easy to cut a day in half now.
I believe I have optimised some content update code. If you have a client with a large page of thumbnails, particularly if that page uses collect-by, let me know if your subscriptions and/or 'upload pending mappings' run a bit smoother.
If you are an advanced Windows user, please check out the bottom of the changelog for an mpv test. I'd like to test out a new dll on a wide range of machines.
next week
My ongoing family IRL stuff is winding down. I'm still going to have extra responsibilities for another month or so, but it shouldn't be too bad. I'd like to have a medium-size job week, and I want to have some fun, so I'm going to try to make a 'click to increment' integer rating service.
>>19318 I used the new mpv-2.dll. It works for me using the prebuilt Windows release. Videos play as they should.
(43.50 KB 414x469 teagan_ohayou.png)

>>19318
>First off, every time you type something like 'cat' and the search is taking a while, you should now always get 'cat' and 'species:cat'
>Quick-entering a tag on both search pages and the manage tags dialog should be faster, easier, and more reliable
Wonderful. No more accidental un-namespaced tags because my tag-fu is 2fast4hydrus.
>>19319 +1 Using the new mpv-2 on Windows (Installer) without issues.
Does anyone have a suggestion for this problem with import folders? Is it possible to run a shell script just before the automatic import begins? Or instead, to set the import to always happen at a specific time, so I can use something like cron for the script? I'm using bdfr, with an external script to keep track of what I've already downloaded, but the script looks at what files are currently downloaded. So, if bdfr downloads a file, the script should be run before hydrus deletes it. Ideally, it should be run just after hydrus has decided on a list of files to import, before the import begins. It doesn't really matter if some fall through the cracks, though; they will just be re-downloaded.
If an image has a transparent background and you do right click->share->copy->bitmap, it's given a black background. Any way the transparency could be maintained?
>>19320 >'cat' and 'species:cat' This reminds me. The sites I download from have varying levels of tagging autism, which results in shit like Pokemon being in several namespaces. I'll have something like 'pikachu', 'character:pikachu', and 'species:pikachu'. Is there a good way around this or do I just need to wildcard the namespace whenever I search for one? I don't want to manually assign tag siblings for all of them.
It'd be cool if the blacklist feature was expanded to cover full searches rather than individual tags only. There are almost no tags that I want to entirely blacklist, but there are many searches involving those tags where I'd want to not download files that match them. I can think of plenty of those. It would help a lot with keeping me from accidentally downloading files I know I'm not gonna like.
(59.75 KB 396x395 nice_seal.jpg)

>>19318
>You should now always see siblings, on the pre-results above and the final results, even if the siblings have no count.
You forgot to mention this applies to parents as well, which is very helpful for newly created parent-oriented tags. One minor thing, and a practical non-issue, is that all zero-count tags ever entered appear, rather than just those that exist only as parents/siblings.
Is there a way to hide the popups "...appears to be dead!" when force-checking? I might be legally blind.
>>19325 +1. Was just thinking this the other day. Some artists are good, but are into multiple fetishes, some of which I don't want to download. So, being able to blacklist something like "Salamander AND malesub" would be good. Then it would download everything by the artist Salamander except that combo of tags.
>>19328 Why not just whitelist Salamander but blacklist malesub? Are you okay with malesub art by other artists?
Is there something like tag history? Sometimes I accidentally delete a tag by hitting enter, expecting the tag edit window to close, but it just deletes some random tag.
>>19330 Click the gear and select "show deleted". All tags ever mapped to the file(s) will show up with an X by the number.
>>19331 X by the number for deleted tags, that is.
>>19331 >>19332 That's what I was looking for, thanks. Is there also a way to only show deleted, by any chance?
Hi. I've been trying to modify the url parsers for 8chan.moe so that I get extra data such as the original filenames and the uploader name. I can get this data scraped and put into a note, but for some reason it doesn't want to go into a tag. I need the info searchable though, and as far as I am aware I can only search tags and not notes. Unless I'm missing something, I'm sure I've done everything right. Can you help? To scrape the info and try to put it into a tag, I did the following:
-> network -> downloader components -> manage parsers -> 8chan.moe thread api parser
-> subsidiary page parsers -> "replies files" (leaving out OP post files for testing)
-> subsidiary page parsers -> "files" -> content parsers -> add
-> name or description: filename
-> content type => tags
-> namespace: filename
-> change formula type => json -> edit formula
-> content to fetch => string
-> add -> match type => fixed characters -> fixed text => originalName
-> content to fetch => string
Test parse worked when I was checking it through, but the actual tags don't get made when proper parsing is done. Please help me.
>>19333 Found an alternative that's good enough: basically select all tags and then enable the 'show deleted' option, which leaves the deleted tags deselected.
>>19252 Hey, just an update here. You don't need to do anything, but a user mentioned that this URL https://www.deviantart.com/sasurealian/art/Elsa-cosplay-660329780 results in a 1024x1024 png, whereas the download link on the page (if you are logged in) gives a 2048x2048 jpeg. I noted that you parse this in your downloader but marked it as frequently 404ing, so I understand why you made it low priority. To fix this, I need to update my downloader to handle 'this sometimes 404s' URL types so it can try such a file and then fall back to a lower priority. I don't know when I will do this, but it has come up a couple times before, so I'd love to slip it in one week. Anyway, apparently that artist is afflicted by this problem a lot, so if you needed a good example of this problem, that's it! >>19297 Hmm, that's weird. Sorry for the trouble. If you go into the 'file log' of the watcher and right-click on one of the items, do you get pic related for the 'parsed tags'? Should be a filename tag in there. If there isn't, it isn't being parsed. If there is, then 'tag import options' probably aren't set up to grab it. If you set up a new tag import options and then right-click 'try again' on an item in the file log here, it should work instantly, flicking to 'already in db', and apply the tag if set up, btw. It is a neat way to quickly test. If you are very confident that setting up a specific tag import options on a highlighted watcher and then doing 'try again' when there are parsed tags is not working, let me know. It works for me here, so my guess is the parser isn't grabbing it right. >>19298 I'm not a super expert at SSL errors, but you often get EOF ('End of File', usually means a problem with I/O) when either your connection or the server is having trouble. Could be you are overloaded, or more likely the server is, so try again later.
>>19299 To cover what >>19301 did not: 1) Not yet, but we need this. I was talking with another guy today about it. I'll figure out mass delete and rename. There's no mass import/export of notes yet, but I'll see what I can do with my recently updated sidecars system. 2) Not yet, but again yeah I want this. I want all times editable. First step will allow quick fixes of weird small situations, then I'll expose it to the Client API for bigger correction jobs. 3) Not yet, but I will definitely do this. Been a very long time, but it is waiting in my 'when you have a big week, how about this?' list. So, ha ha, I'm afraid it is 'yes in future' for everything. >>19303 Ah, I didn't say in my post. That is only on repositories, where you typically have a whole ton of good non-local files with good tags. I took it off for local tag services because it would be slightly confusing clutter that, for most users, didn't change much. If you have a situation where you need tag suggestions from deleted files or something else clever, let me know. >>19304 Not yet, but I want a jury-rigged system available when I add notes to sidecars. I'm going to let you hack a 'sidecar migration' that sources from tags and outputs to notes and vice versa, with string processing in between, so we can convert filename tags and all sorts to notes as appropriate. Yes, I want note content searching! I forgot about it, so thank you for the reminder. The database is set up to do this already, I just need to write the system predicate, UI, and search code. >>19308 I'm not a super expert, although I was looking there a while ago, but I'm pretty sure that's how a lot of modern web tech does infinite list browsing, often of 'related' results and shit when you are browsing phone-style. They can't give you a page number since the list is dynamic and always updating, so instead they give the client a random key that represents a browsing instance and then when you scroll down the client says 'hey give me more on search key abcdef0123', and the server handles all the position variables in the stream and just spams 20 more results back. There may be fixed pagination behind the scenes that the server keeps track of, but the client doesn't actually care since it just wants to know what to show 'next'. Now, you may be able to use this in a hydrus downloader. Sometimes the pagination cursor or token or whatever is given in your initialising request, which you may be able to jimmy into a future request URL if you are feeling clever.
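To make that concrete, consuming a cursor-paginated API usually looks something like the following. Everything here is made up for illustration (the endpoint, the parameter names, the 'next_cursor' field), so check whatever API you are actually dealing with:
import requests

url = 'https://example.com/api/v1/tags'      # hypothetical endpoint
params = { 'search' : 'blue*', 'limit' : 20 }

while True:
    data = requests.get( url, params = params ).json()
    for item in data.get( 'items', [] ):
        print( item )
    cursor = data.get( 'next_cursor' )       # opaque token the server hands back
    if not cursor:
        break                                # no token means no more pages
    params[ 'cursor' ] = cursor              # feed it into the next request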
(15.30 KB 538x141 25-19:38:01.png)

>>19336 Looks like it's not being parsed; there are no filename: tags in the file log. Which is weird, because it shows up when I run a test parse in network > downloader components > parsers > arch.b4k.com. I'm literally using the same URL for both. https://arch.b4k.co/v/thread/502380596/ The thread, if it matters.
>>19309 Table 'json_dumps' in client.db, you want the one with dump_type = 81. Sounds like you know what you are doing, so DELETE the 81 in the new db and just INSERT the 81 from old to new, and you should be good. It should revert to sensible defaults, if the favourite searches use custom services (which it won't recognise on the new client), but let me know if it throws any errors. >>19314 Hmm, thanks, I'll think about how to do this. I was thinking of adding some Tag Filter objects to tag suggestions and autocomplete results generally (Tag Filter is the thing you see in tag import options blacklist, amongst other places). Might work for you. >>19315 Not intended permanently, but I haven't got around to my planned 'recycling' system yet. The same is true of hash and tag definitions in client.master.db. My plan will make every table in the database say what definitions it uses in code, a bit like foreign keys, and will allow me to scan and purge all no-longer-used content. For now, that cruft just stays around, sorry! >>19319 >>19321 Thanks! This is looking good so far, had about seven positives total and no negatives, including on Windows 10, which I was afraid would be lacking some surprise .NET whatever. I think the new mpv works better, too--just a bit smoother to load, and I'm sure it has a bunch of bug fixes and stuff for tricky videos. I think I'll wait one more week to be a bit safer and roll this into the release. >>19320 Yeah I'm really pleased with it, feels much nicer now. Wish I'd done it ages ago. >>19324 I added an option to tags->manage tag display and search recently that restored an old behaviour, 'unnamespaced input gives (any namespace) wildcard results'. Give it a go, it basically replaces 'cat' with '*:cat' and will search any namespace including no namespace. I'm kind of mixed on it for technical reasons, but it does ok.
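For reference, the copy could look something like this with Python's sqlite3. A sketch only: back up both databases first, keep both clients closed while you run it, and note it assumes both clients are on the same version so the json_dumps columns line up (the paths are placeholders):
import sqlite3

con = sqlite3.connect( '/path/to/new/db/client.db' )  # the new client's client.db

# pull the old client.db in alongside it
con.execute( 'ATTACH DATABASE ? AS old_db', ( '/path/to/old/db/client.db', ) )

# clear the new client's favourite searches and copy the old ones over wholesale
con.execute( 'DELETE FROM json_dumps WHERE dump_type = 81' )
con.execute( 'INSERT INTO json_dumps SELECT * FROM old_db.json_dumps WHERE dump_type = 81' )

con.commit()
con.execute( 'DETACH DATABASE old_db' )
con.close()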
>>19322 What I'm going to do is add import and export folders to the Client API. You can pause your folders and just fire them manually under the file menu, so I'll add those 'run this now' commands to the Client API, and then you'll be able to set up an external cron or whatever that does the job: hit 'curl -X POST' on the Client API to run the import folder, do some sort of 'are any import folders running right now?' status check so you know when it is done, and then run any external cleanup you need. Not sure when it will happen, but that's the plan.
>>19323 Thanks--I'll see what I can do!
>>19325 >>19328 Nice idea. I've been thinking of ways I could have tag filter algebra, and just using the existing search UI and tech to declare more complicated rules makes sense. As weird as it sounds, I don't have tech to say 'does this file match this search' yet, but I am planning on it. This would be a more complicated expansion, but it is worth doing. Please keep reminding me if this gets no movement in reasonable time.
>>19326 Yeah, all sorts of zero count stuff is turning up, which suggests a particular tag lookup cache I have is going bananas. I'm going to put some time into it this week.
>>19327 No, I don't think so, but this is a good idea. I'll see if I can make it remember it is being resuscitated and not mention it is 'still' dead if so.
>>19333 I'll add this!
>>19334 That feels like you have done everything right. If the test parse shows it correctly, usually that is good. Stupid question, but is your 'tag import options' set up correctly for it? By default, the 'watcher' tag import options don't grab anything, since most users don't like 'filename' tags. Actually, as in my pic here >>19336 , can you check your file log on a test download and see if the tags are being parsed there? If the import object in the log has the tags but they aren't being applied, then it is a tag import options thing. Hit up network->downloaders->manage default import options to change your 'watchers' TIO.
>>19338 Can you post me that parser (and ideally the url class), so I can test it my end? I'll do some debug gubbins and see what's going on.
>>19341 These two?
>>19337 >Yes, I want note content searching! YAY!
>>19334 >>19341 Tag import options. I didn't know about the watcher default setting being different. Thanks.
Right-click on image: manage > file relationships > set relationship > [actions] Would it be possible to declutter the menu slightly and just put the actions one layer back, replacing the submenu "set relationship"? At the moment, traversing the "set relationship" submenu feels like a cursor mini-game. Whadya think?
>If you pull up only files with a duration, You get a total length of all video files Fuckin neato.
>>19336 >a user mentioned that this URL https://www.deviantart.com/sasurealian/art/Elsa-cosplay-660329780 results in a 1024x1024 png, whereas the download link on the page (if you are logged in) gives a 2048x2048 jpeg. I noted that you parse this in your downloader but marked it as frequently 404ing, so I understand why you made it low priority. Interesting. I don't have a deviantart account, so I wonder if that link that 404s for me is only available when you're logged in? Could someone with a deviantart account go to https://www.deviantart.com/sasurealian/art/Elsa-cosplay-660329780 while logged in and then save the page as HTML and post it to a pastebin or something? There might be a way to check if someone is logged in and only pursue that link if they are.
>>19345 For now, you could set up shortcuts for common actions.
(784.24 KB 632x844 1647641991517.png)

i got 2 questions. when you install it in a folder on your desktop, is it basically a fully portable install? and how do i make it completely offline? i just want to use it as a media browser for images and webms, nothing more. is the default client setting good or do i need to make some tweaks? thanks a lot for this btw, just what i was looking for
>>19349 >when you install it in a folder on your desktop is it basically a fully portable file? If you're on Windows, there are two versions: the Installer version and the Extract Only version. The Extract Only version is portable. Everything is kept in the main folder. >and how do i make it completely offline? It doesn't connect to the internet unless you tell it to. The two most common ways of hydrus connecting to the internet are: You tell it to check a url and download things, or you connect to the Public Tag Repository (PTR), which is just a massive mapping of images to tags maintained by the community. If you never do these things hydrus will never connect to internet. If you're paranoid anyway, you may be able to set up something in your firewall to block the program from connecting to the internet. Also, make sure to read the help! https://hydrusnetwork.github.io/hydrus/index.html
When you right-click on an image and go to known urls and then open or copy, you can see the names of post urls and file urls that the image has. But in the top right of the media viewer, it only shows post urls. Is there a way to make the top right of the media viewer show file urls as well as post urls?
>>19351 Yeah, it's in network > downloaders > manage downloader and url display > media viewer urls and click the checkbox at the bottom.
I can't recall all the urls I've downloaded from and need to go back and retag some files for which the "has less than X tags" function is not fully sufficient in finding. I know how to pull up all files from a particular url, but is there any way to do "system:has url" so I can see everything I've used a downloader for?
>>19353 system:known url > has regex > ".*" (without quotes) maybe.
>>19354 Actually that may also return files with no urls, so ".+" is probably better.
For the last few weeks, Danbooru search fails intermittently. It doesn't seem to matter what the searched tag is, and it seems random when I get the error. I get this from the log:
500: The server's error text was too long to display. The first part follows, while a larger chunk has been written to the log.… (Copy note to see full error)
Traceback (most recent call last):
  File "hydrus\client\importing\ClientImportGallerySeeds.py", line 366, in WorkOnURL
    network_job.WaitUntilDone()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1966, in WaitUntilDone
    raise self._error_exception
hydrus.core.HydrusExceptions.ServerException: 500: The server's error text was too long to display. The first part follows, while a larger chunk has been written to the log.
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Search Timeout | Danbooru</title>
<link rel="icon" href="/favicon.ico" sizes="16x16" type="image/x-icon">
<link rel="icon" href="/favicon.svg" sizes="any" type="image/svg+xml">
Is there any good fix or workaround?
God, I love hydrus.
>>19354 Thanks, this works. >>19357 How much do you love it?
got a couple questions, if you anons don't mind answering:
1. is there a way to make it so you can zoom in and out of the image preview without having to hold Ctrl, like other image viewers?
2. how do i move the thumbnail panel on the right (pic related) to the left side of the preview?
3. is there a way to remove the envelope symbol in the top right of the thumbnails and get image details (size, name, resolution) to show below the thumbnail (like 2nd pic related)?
thanks a lot for developing this btw, been playing around with a bunch of different image viewers and this one seems to be the best for image dumping on /wg/ and general archiving
Messing around with getting Hydrus to run in Qubes OS. I got it mostly working with a fedora-37 template, but mpv is badly broken. Instead of embedding into the preview, a rogue mpv window appears that can't be closed normally and blocks input into Hydrus. Audio works though. I just now got it running on a fresh debian-11 template after installing the following packages: python3 python3-venv mpv ffmpeg libmpv1 Might work with just "python3-venv libmpv1" if you dropped mpv and python3 because of dependencies getting roped in, I dunno though. I ran the venv setup script after installing those, answered qt5, old, and old after checking versions. It works but there's no sound. I think that's because of the old mpv thing, right? If I can get Hydrus running with all the bells and whistles in Qubes I'll post how I did it.
>>19359
>there a way to remove the envelope symbol
The envelope means the image is in your inbox. If you plan on keeping the image long term and want the symbol to go away, archive it with F7 or the right click dropdown menu. If you personally tag your files, I recommend using the inbox as a list of files you've yet to properly tag, and only archiving when you're sure something probably won't need any more tags.
>been playing around with a bunch of different image viewers and this one seems to be the best for image dumping on /wg/ and general archiving
It's really amazing I haven't seen more good multimedia viewers/browsers like this. It can play and view a great many filetypes, whereas in most other cases you need some dedicated program for images, gifs, videos, audio files, etc. The preview window for all kinds of viewable/playable files is a godsend. I hope one day I'll never have to go looking for media files in a wangblows browser again.
(110.22 KB 106x76 python_qO1mDRgrY5.mp4)

I managed to get a first version of an increment/decrement integer rating service working, and I'm happy with it, but it needs more polish and my day tomorrow is crazy, so I'm putting off the release a week to finish it up and get more things done. v519 should be on the 8th of March. Thanks everyone!
>>19359
>1 is there a way to make it so you can zoom in and out of the image preview without having to hold Ctrl like other image viewers?
file > shortcuts. Go to the "media viewers - all" section. Click add. Hover your mouse over the box and scroll up, and it should recognize it. Then, in the dropdown menu, choose "zoom in". Then simply repeat with scrolling down for zooming out. Click apply on all the windows. Tada, it works. But there's something you should know: the shortcuts in the "media viewers - all" section apply to the preview window AND the main media viewer. (I don't think there's a way to make a zooming shortcut that only applies to the preview window.) So technically this shortcut will also apply in the main media viewer... except if you check under the "media viewers - normal browser" shortcut section, scrolling up/down instead brings you to the next/previous image. And when I tested it out, it seems that the navigation shortcut overrides the zooming one, so scrolling up/down in the media viewer won't zoom in, it just goes next/previous as normal. So I suppose everything's fine, actually. Still, if you want, you could make it consistent in both windows by deleting the 'ctrl + scroll -> zoom' shortcuts in the "media viewers - all" section and changing the 'scroll -> navigate' shortcuts in the "media viewers - normal browser" section to ctrl + scroll. That way scrolling zooms no matter where you are, and you can still navigate in the media viewer with the mouse wheel by holding control.
>2 how do i move the thumbnail panel on the right (pic related) to the left side of the preview
Don't think you can. Unfortunately hydrus doesn't have many options for moving around GUI stuff wherever you want.
>get image details ( size, name, resolution) to show below the thumbnail (like 2nd pic related)
Pretty sure you can't currently put metadata information on top of the thumbnail, just tags. If you want to try that out it's under file > options > tag presentation, at the top.
This might be a big request, but would it be possible, with at least some accuracy, to have a function to detect whether an image has heavy artifacting?
for some reason, it seems that my subscriptions are using the "loud context" file import options instead of the quiet ones. I have it set so that loud contexts show inboxed or archived files, but quiet ones show only inboxed files, but my subscriptions are showing inboxed or archived files. I checked the subscriptions import settings and they're at "use the defaults" and then I checked my defaults and they're definitely set differently, so I don't know what's going on.
where are the client's settings saved? stuff like colours, window size, etc. i just need it for a new save, since i deleted the DB
Dev, I have two small tips for the setup_desktop.sh. First, you should check if $HOME/.local/share/applications/ exists or else it will fail on new/minimal environments. I would use mkdir -p, as it will make any directories necessary and does nothing if it already exists. Second, there should be a check for client-user.sh and if it exists that should be used instead of client.sh. Thanks!
>>19282 >Hydrus can now show a checkerboard pattern behind a transparent image instead of the normal background colour. There's two new checkboxes, in options->media This may be a little broken. Regular thumbnails never show checkerboards, and preview thumbnails + actual image viewing always show checkerboards. Nothing changes in the media options. I have not checked the duplicate filter.
(987.67 KB 1620x1080 1657957178767.png)

>>19362 if you are a dev, please for the love of god add the ability to move certain elements of the GUI around, like the thumbnail panel from left to right, and the ability to show image details such as size, res, and name below the thumbnail, separate from the tag system, like this anon said >>19359 . out of 6 different image viewers i've tried, all of them have these things, and it's a massive cockblock that hydrus doesn't. otherwise it's the best one, i beg you
>>19288 this would be pretty sick
Is there a plugin or script that can scrape Stable Diffusion metadata for tags?
>>19366 every setting is saved in the database, so if you deleted it, it's lost
for the rating predicates, could you add an option to have them display inclusively? so instead of something like "rating >7/10" it could be shown as "rating 8+/10" or something like that. I find it a lot easier to reason about inclusive ranges here than exclusive ranges.
Is there a way yet to run an auto deletion on exact duplicates (byte for byte)? Like, keep one and automatically delete the rest? Since you don't need to view them, as they are perfect duplicates. I keep seeing new pics incoming that I know I already have, and they appear to be byte for byte duplicates.
>>19374 Kind of like the software No Clone.
>>19374 if they're truly the exact same file (byte for byte), then they'll never both be in hydrus. On import, hydrus will recognize the hash, say that the file is already in your db, and won't import it. If you actually mean pixel-for-pixel dupes (that aren't actually the same exact file), then systematic processing of these is a commonly requested feature, and he's mentioned interest in working on it. I too hope it comes soon.
Hi. I've started making custom parsers in Hydrus and came across associable urls. I tried to get a parser to give associable urls to every image in an imageboard thread (directly linking each media file to its post). The test parser gets the urls, but I don't see them attached to the downloaded media when I use the parser. Is there any reason it wouldn't work with imageboards?
>>19377 post your parser
>>19376 Huh, maybe they're just pixel for pixel then. I thought exact dupes were supposed to be detected and blocked. So, Hydrus is doing hash checking on every file then?
>>19342 Yes, thank you. I got held up last week with IRL bullshit, but I'll check this out this week and post what I discover. >>19345 Sounds good, thanks. I've been shaking a couple of these things up recently--I hate it when you get a single slender submenu header on its own to track your cursor down--and it is mostly because the menu is actually dynamic and could have five things in (when the file has a bunch of existing duplicate info to show) but in this click only has the one. I'll see if I can get it to shove the whole submenu up in this case. Another quick general thing I can do here is to rename common stuff from 'set relationship' to 'set', just to stop these menus spilling out across your screen. >>19347 Attached 'txt' of the html logged in with my throwaway DA dev account. The download button gives the attached jpeg, hash ffa5a9923dca3fe90008df0f8e4b291046dc8badf52950b1d245aaa471c4ea17. Original download URL my browser shows was: https://wixmp-ed30a86b8c4ca887773594c2.wixmp.com/f/099e428a-b21d-4b6a-86bc-dc17f6083d14/dax55pw-9d583bd9-583a-4b39-b01a-39f6d9ef5657.jpg?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1cm46YXBwOjdlMGQxODg5ODIyNjQzNzNhNWYwZDQxNWVhMGQyNmUwIiwiaXNzIjoidXJuOmFwcDo3ZTBkMTg4OTgyMjY0MzczYTVmMGQ0MTVlYTBkMjZlMCIsImV4cCI6MTY3Nzk4MzE2NSwiaWF0IjoxNjc3OTgyNTU1LCJqdGkiOiI2NDAzZmI2NWE0MTIxIiwib2JqIjpbW3sicGF0aCI6IlwvZlwvMDk5ZTQyOGEtYjIxZC00YjZhLTg2YmMtZGMxN2Y2MDgzZDE0XC9kYXg1NXB3LTlkNTgzYmQ5LTU4M2EtNGIzOS1iMDFhLTM5ZjZkOWVmNTY1Ny5qcGcifV1dLCJhdWQiOlsidXJuOnNlcnZpY2U6ZmlsZS5kb3dubG9hZCJdfQ.Imkwjz6AvlVohntHxwg2ebVVwvyZEBdn569Ny7l0ZGw&filename=elsa_cosplay_by_sasurealian_dax55pw.jpg Which isn't in the html directly, so I assume it is assembled by javascript, but that token is all over the place and the link elements are too, so maaaaaybe it would be possible to say 'if user seems to be logged in (no login button?), then grab the preview download link, strip off the '96px x 96px' part, and assume it will work'? Although this sort of hackery would obviously be better if I do the multi-URL expansion I was talking about earlier.
>>19353 As a side thing, I want to extend system:known url to do 'count' stuff. 'Has/No urls' would be useful, and 'has < 4 urls' kind of stuff too. >>19356 500 means a serverside error. Sometimes it can mean an overloaded server, but it can also be them over-reporting error severity for other reasons. It could be this: >>19285 >>19357 Glad you like it! >>19359 I talked to you somewhere else today I think. The zoom in the preview was unintended, but I'll clean that code and see if I can get preview panning optionally working too. >>19369 Yeah, I'd like to do this eventually. I have some legacy issues here, in that hydrus was originally wx and now it is hacked once-wx-now-Qt, but as I slowly grind out nicer UI code in future, I'd like to have a more dynamic layout system. Slowly getting there, but I can't promise it in any fast time. Aesthetic UI (along with nice decoupled code that does Eclipse-style rearrangement without blowing up) is a clear weakness of mine. >>19360 Thanks, that's interesting. I was talking with a different guy on an unusual Linux flavour the other day in a similar situation to you--basically the 'python-mpv' bridge that spawns an mpv window and passes the handle to Qt to look after is so hacked, and then we have the python layer on top, that any unusual Window Manager rules about that sort of stuff completely break the embed. My 'native' viewer is just a software renderer that pulls bitmaps from ffmpeg and throws them on screen, so yeah I'm afraid it is low performance and no audio. I have some notes on an alternate mpv renderer system that still uses libmpv but somehow embeds it in an OpenGL window or something, it is apparently more stable, so I'd like to try it in the next 18 months, if I have the time and energy. Fingers crossed it fixes mpv with audio for macOS too (which atm just hits 100% CPU even if you jump through the five hoops to get libmpv working).
>>19364 I've talked with some guys about this when we were looking at automatic duplicate resolution. We even looked up some scientific papers on how to really define what a jpeg artifact is. As I remember, there wasn't actually a well-defined and agreed-upon metric, and some software does it worse than others because they employ particularly wild interpretations of the jpeg algorithm. That said, you do see software that highlights artifact fuzziness to help you figure out when something has been photoshopped, so there must be sensible practical metrics here. I think I am too busy to figure that all out myself, and 'pixel math' is outside of my expertise most of the time, but if we found a python library that could take a bitmap and convert it to a 'black except white lines where there were lots of artifacts' kind of thing, we could do some fun stuff with that, like when you compare two images in the duplicate viewer, or just count the number of white pixels and make 'system:artifacting = lots (>5%)' or whatever was appropriate. The 'jpeg is low quality vs medium-high quality' line you see in the duplicate filter is actually pretty good though, even though it works off the most hacked bullshit I found on some stackexchange post once, and that correlates well with artifacting in general. Maybe I could and should cache that data and let you search it? >>19365 Thank you for this report. I'll check what could be going on. Sounds like you have checked everything on your end properly, so I bet my 'get the right file import options' thing is working wrong sometimes. >>19366 >>19372 Check your recycle bin and backups for 'client.db'. All the core user/human stuff is stored in there. If you need to extract it and send it to another client.db, I can help walk you through it. >>19367 Thanks, I'll see if I can do it! >>19368 Ah, thanks, I didn't think of actual thumbnails proper. Those work on a different system to the preview and media viewer. I think I can add the checkerboard to them, since they have alpha just like the full-size image I load for the viewer, but I'll make sure it is optional, and default off, since the Gods tell me that most users would not want checkerboard on thumbs. >>19371 Nothing hydrus-specific. There's probably stuff on the AI Generation side of the community, but I'm not into the technical side of that enough to speak confidently. You can see this text in the hydrus viewer using the EXIF button up top, and you can copy/paste from there, but I hate that little dialog--I need to make it more user-friendly and keep up with media previous/next and stuff.
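To make the 'count the artifact pixels' idea above concrete, here is a rough sketch of one practical metric that people use for this kind of thing: JPEG blockiness, i.e. how much stronger the luminance jumps are across the 8x8 block boundaries than elsewhere. This is not what hydrus does; the function name and the example cutoff are invented for illustration only.

# Rough sketch of a blockiness metric, not hydrus code. Ratio ~1.0 suggests a
# clean image; noticeably above 1.0 suggests visible 8x8 JPEG block artifacts.
import numpy as np
from PIL import Image

def blockiness(path):
    grey = np.asarray(Image.open(path).convert('L'), dtype=np.float32)
    col_diffs = np.abs(np.diff(grey, axis=1))            # jumps between neighbouring columns
    boundary = col_diffs[:, 7::8].mean()                 # jumps that sit on 8px block edges
    interior = np.delete(col_diffs, np.s_[7::8], axis=1).mean()
    return boundary / (interior + 1e-6)

# e.g. something like blockiness(p) > 1.3 could feed a hypothetical
# 'system:artifacting' predicate; the 1.3 cutoff is arbitrary and would need
# calibrating against real files.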
>>19373 Sure, that's interesting! I'll have a think, since I use that language all over. I bet I can roll it into the same function and have an option for everything across the program. >>19374 >>19375 >>19376 >>19379 Not yet, but it will be the first 'automatic dupe resolution' I try. I do detect pixel-for-pixel duplicates, and the database now caches that info, hence how you can now search in the duplicates processing page for 'must(not) be pixel dupes', so pretty much all the spinning wheels are in place. First rule is going to be hardcoded--'if a jpeg and a png are pixel dupes, delete the png'. I'll hook up a maintenance routine to apply that and we'll get used to it and see where it goes wrong and how merge options need to work for an automatic system, and then I'll expand the hardcoded system to have more user-defined metadata options and tolerances, and, fingers crossed, we'll actually be able to get on top of real numbers of dupes without having to do anything. Most importantly, these systems will always be optional and user configurable. People differ greatly on the right answers here. For the 'byte-for-byte' question, yeah, hydrus hashes based on exact total file content. So you can't import the exact same file twice, but if you have two pngs with the same image content, one with some header info and the other with it stripped, hydrus considers them to be completely different files. I'd like, now we have pixel hashes, to start chipping away at this! >>19377 If the post links here have # fragments, like https://8chan.moe/t/res/11075.html#11724 , then that's tricky. Fragments are entirely client-side (the fragment part is never sent to the web server in a request) and hydrus doesn't store them by default, since in 99.5% of cases they cause dupes of the same basic associated URL. They are stripped in many contexts in hydrus code, and so here I assume the files are just being given their original watcher URL back again, which, since it is a Watcher URL, is not associated by default (unless you set that it should be in its URL Class). You can try clicking 'keep fragments' on the Watcher URL Class, but this has always been sketchy/prototype and it may cause the watcher to consider the same URL with different fragments to be different URLs and cause duplicate watchers. That might be ok if this is a private watcher and you aren't about to add the same URL twice with different fragments, but I'd recommend you not release it to the public without an explainer. The 'keep fragments' checkbox was added for Mega links, mostly, where the Mega App/Javascript does some bullshit with the fragment clientside and uses it to do more navigation than scrollbar position, and so it is essential for keeping in an archive. I think my recommendation is to just set the Watcher URL to be associable in its URL Class. I don't think you'll have to do anything extra--watcher files already get their parent Watcher URL in their associable URLs, so they'll get that URL immediately.
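A minimal sketch of what that first hardcoded rule amounts to, assuming a hypothetical record with a mime and a pixel hash. These names are made up for illustration; they are not hydrus internals.

# Sketch of the proposed first rule only: among pixel-for-pixel duplicates,
# if a jpeg and a png share a pixel hash, the png is the candidate to delete.
# 'FileRecord' and its fields are hypothetical, not hydrus code.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FileRecord:
    file_id: int
    mime: str        # e.g. 'image/jpeg' or 'image/png'
    pixel_hash: str  # hash of the decoded bitmap, not of the file bytes

def pngs_to_delete(files):
    by_pixel_hash = defaultdict(list)
    for f in files:
        by_pixel_hash[f.pixel_hash].append(f)
    doomed = []
    for group in by_pixel_hash.values():
        if any(f.mime == 'image/jpeg' for f in group):
            doomed.extend(f for f in group if f.mime == 'image/png')
    return doomed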
>>19382 >Maybe I could and should cache that data and let you search it? That sounds like a great idea so long as it's something you can implement easily as a new "system:quality" function or somesuch. Would help me, and probably others, to find out what images they only have in low quality so they can search them up on saucenao and elsewhere for the full res versions. Currently I just check on saucenao whenever I happen to find a low quality image by eye, and have no plans to actually sort through them all as it's simply too hefty of a task.
So I tried to fix the redgifs parser because it's been broken for a little bit, but I don't think I can. I looked around at the page and tried loading it in hydrus. It looks like a recent update to the website turned it into one of those dynamically loaded web app things. Since hydrus can't do anything with javascript, the website just doesn't load for hydrus at all. There was an issue mentioned sometime before about baraag and the sankaku channel beta website, and it looks like this is the same problem. The fact that hydrus can't parse these is starting to become a bigger and bigger problem, because they're quickly becoming more common. Is support for being able to work with these kinds of websites planned?
>>19382 >The 'jpeg is low quality vs medium-high quality' line you see in the duplicate filter is actually pretty good though, even though it works off the most hacked bullshit I found on some stackexchange post once
I always assumed it was the "quality" metadata thing that image editing software asks for when exporting, where 0 is "do I look like I know what a JPEG is" and 100 is "the best JPEG can do". Is that the case or is it even more hacky?
I'm running from source, and I updated my fedora system from 36 to 37, then when I started hydrus, it wouldn't boot. It said

~/hydrus/hydrus git ~
Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/hydrus_client.py", line 19, in <module>
    from hydrus.core import HydrusBoot
  File "/home/{username}/hydrus/hydrus git/hydrus/core/HydrusBoot.py", line 3, in <module>
    from hydrus.core import HydrusConstants as HC
  File "/home/{username}/hydrus/hydrus git/hydrus/core/HydrusConstants.py", line 6, in <module>
    import yaml
ModuleNotFoundError: No module named 'yaml'
Critical boot error occurred! Details written to hydrus_crash.log in either db dir or user dir!

so then I figured that I need to recreate the virtual environment, so I ran that script and then I got another error while the virtual environment was being created:

ERROR: Ignored the following versions that require a different python version: 6.0.0 Requires-Python >=3.6, <3.10; 6.0.0a1.dev1606911628 Requires-Python >=3.6, <3.10; 6.0.1 Requires-Python >=3.6, <3.10; 6.0.2 Requires-Python >=3.6, <3.10; 6.0.3 Requires-Python >=3.6, <3.10; 6.0.4 Requires-Python >=3.6, <3.10; 6.1.0 Requires-Python >=3.6, <3.10; 6.1.1 Requires-Python >=3.6, <3.10; 6.1.2 Requires-Python >=3.6, <3.10; 6.1.3 Requires-Python >=3.6, <3.10; 6.2.0 Requires-Python >=3.6, <3.11; 6.2.1 Requires-Python >=3.6, <3.11; 6.2.2 Requires-Python >=3.6, <3.11; 6.2.2.1 Requires-Python >=3.6, <3.11; 6.2.3 Requires-Python >=3.6, <3.11; 6.2.4 Requires-Python >=3.6, <3.11; 6.3.0 Requires-Python <3.11,>=3.6; 6.3.1 Requires-Python <3.11,>=3.6; 6.3.2 Requires-Python <3.11,>=3.6; 6.4.0 Requires-Python <3.11,>=3.6
ERROR: Could not find a version that satisfies the requirement PySide6==6.3.2 (from versions: 6.4.0.1, 6.4.1, 6.4.2)
ERROR: No matching distribution found for PySide6==6.3.2

and now every time I try to start hydrus, it gives this error:

~/hydrus/hydrus git ~
Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 166, in <module>
    import qtpy
ModuleNotFoundError: No module named 'qtpy'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/hydrus_client.py", line 24, in <module>
    from hydrus.client.gui import QtInit
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 184, in <module>
    raise Exception( message )
Exception: Either the qtpy module was not found, or qtpy could not find a Qt to use! Are you sure you installed and activated your venv correctly? Check the 'running from source' section of the help if you are confused!
Here is info on QT_API: QT_API was initially not set. Currently QT_API is not set. FORCE_QT_API is not set.
Here is info on your available Qt Libraries:
PyQt5 did not import ok:
Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 166, in <module>
    import qtpy
ModuleNotFoundError: No module named 'qtpy'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 13, in get_qt_library_str_status
    import PyQt5
ModuleNotFoundError: No module named 'PyQt5'
PySide2 did not import ok:
Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 166, in <module>
    import qtpy
ModuleNotFoundError: No module named 'qtpy'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 24, in get_qt_library_str_status
    import PySide2
ModuleNotFoundError: No module named 'PySide2'
PyQt6 did not import ok:
Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 166, in <module>
    import qtpy
ModuleNotFoundError: No module named 'qtpy'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 35, in get_qt_library_str_status
    import PyQt6
ModuleNotFoundError: No module named 'PyQt6'
PySide6 did not import ok:
Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 166, in <module>
    import qtpy
ModuleNotFoundError: No module named 'qtpy'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/{username}/hydrus/hydrus git/hydrus/client/gui/QtInit.py", line 46, in get_qt_library_str_status
    import PySide6
ModuleNotFoundError: No module named 'PySide6'
Critical boot error occurred! Details written to hydrus_crash.log in either db dir or user dir!

I'm not a programmer so I don't know what's going on here.
>>19387 I just tested the ordinary linux binary version of hydrus that I used to use, and it seems to start fine, except that it still has the same mpv issue that caused me to switch to source in the first place. So whatever the problem is, it must have something to do with the virtual environment or my system python version, but I don't know how to fix that.
>>19387 >>19388 Your default python/python3 binary is probably pointing to some very recent version of Python, like python3.10; hydrus' dependencies aren't currently available on anything above 3.9. I'm in that same situation, so I use the following to install/upgrade, while in the repo's directory (you may want to delete your venv directory before):

python3.9 -m venv venv
source venv/bin/activate
python3.9 -m pip install -r requirements.txt

Then my start script is:

#!/usr/bin/env bash
cd /path/to/hydrus-git || exit 1
source venv/bin/activate
exec python3.9 client.py -d /path/to/hydrus/db/

Hopefully this works for you! I started using the source version because of mpv issues as well, and I've been super happy with it.
Is there a way to split file location using file domains instead of weight? If not I think it would be a nice feature.
>>19389 yeah fedora 37 uses python 3.11 as its default python version. It has other versions in the repos, but the command "python" points to 3.11. Fedora always uses the most recent version of python by default. I tried to make something like what you did and it actually ended up working. Thanks! This feels kinda hacky though since I don't actually know what changed except that it's obviously using a different python version now. Like what I mean is if this breaks, I won't know how to fix it myself. But at least it works now.
Is there an easy option to continue gallery searches when you change the file limit? Opening the query, clicking the search log, and then retrying the last search every time is annoying. Also, why are there two current and pending options?
>>19380 >Attached 'txt' of the html logged in with my throwaway DA dev account. Thank you, this is helpful. It looks like there's an 'isLoggedIn' boolean in the json there that's set to true for you, so it should be simple to check whether you're logged in or not. >that token is all over the place Interestingly, the token in that "Original download URL" you have there doesn't show up in the json in your html at all. There are actually two different tokens - one that works for the resizes, and one that works for the original size. Though they look the same at the start (in this case, the first 171 characters are the same out of the ~500 characters), they're actually different. The html you sent there only has the resize token. So it looks like the original size token isn't provided in the json even if you're logged in. >The download button gives the attached jpeg That's perfect. The download button uses this link: https://www.deviantart.com/download/660329780/dax55pw-9d583bd9-583a-4b39-b01a-39f6d9ef5657.jpg?token=2b25b16da6206688277b291cb2dbb8a05c2437fa&ts=1677983157 which always 404s for me, but it looks like when you're logged in it redirects you to that "Original download URL" you have there. Ok, here's a new parser. It will check if 'isLoggedIn' is true, and if it is, pursue the download button link at a higher priority. Of course, you'll have to copy Deviant Art login cookies over to hydrus for this to be useful. Otherwise it'll behave as it did before, getting the largest resize it can. Please try it out. Also, I've added url classes for each type of file url. That means you can, for example, use "system:known url" and select the "deviant art resized file (low quality)" or "deviant art resized file (medium quality)" url classes to find all your deviant art downloads that are resizes and redownload them with this parser. Your mileage may vary, though. If you copy deviant art login cookies to hydrus: pretty much all of your low or medium quality resizes should have an original size available. If you don't: you can usually upgrade your low quality resizes, but it's pretty unlikely you'll be able to upgrade your medium quality resizes. However, if you're anal about jpeg compression, the downloader will still upgrade the resizes from jpeg to png, which looks a little better.
>>19394 >That means you can, for example, use "system:known url" and select the "deviant art resized file (low quality)" or "deviant art resized file (medium quality)" url classes to find all your deviant art downloads that are resizes and redownload them with this parser. Thanks! I might try that.
I had a good couple of weeks. There are some small bug fixes and quality of life improvements, better server janitor tools, and a new 'rating' service that lets you count with clicks. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=NNNRlPfoC7I windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v519/Hydrus.Network.519.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v519/Hydrus.Network.519.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v519/Hydrus.Network.519.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v519/Hydrus.Network.519.-.Linux.-.Executable.tar.gz I had a good couple of weeks prototyping a new rating service. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html inc/dec ratings So, under services->manage services, there's a new 'inc/dec rating' (for increment/decrement). It works pretty much like all the other ratings, and goes in the same places, but it is a simple number counter--left-click to add, right to subtract. Middle-click lets you edit directly. I've wired it up for system:rating, too, with only one oddness--since every file starts at count 0, there's no concept of 'not rated' for this service. If you want to count how often an image makes you smile or anything else, give it a go! I mostly did this for fun, but let me know how it works out. other highlights The manage siblings and parents dialogs now both try to show more of the full 'chain' of related pairs as you enter new tags. The manage parents one also has a new checkbox that lets you decide if you want to really show everything, including cousins (which can be overwhelming on the bigger tags), or just (grand)children and (grand)parents. I'm planning an overhaul to these dialogs, one where they don't have to load everything on boot, and this is a first step. I fixed my native animation renderer drawing things too small when your UI scale is >100%! Server janitors get a nicer account modification dialog this week, and a return of the old 'superban' functionality, which deletes everything an account has uploaded. Check the changelog for full details! next week I'm sorry to say I'm exhausted and lagging a bit. I'll clear some small work, nothing too special!
>>19383 >First rule is going to be hardcoded--'if a jpeg and a png are pixel dupes, delete the png'. That rule seems a bit malformed to me. Like if, somehow, the png has a smaller file size than the jpg, I'd rather have the png. IMO file type shouldn't play into it at all, between pixel perfect dupes the smaller filesize is always the one to keep, regardless of other factors like filetype or metadata.
>>19398 >regardless of other factors like filetype or metadata
I agree with filetype mostly, but not metadata. Some people like for files to have the relevant metadata. I don't really care, but it shouldn't be assumed by default. I think instead it should check whether both files have no metadata or identical metadata, and then, as long as one of the files isn't some poorly supported filetype (I don't think hydrus supports any truly obscure image formats yet, so this shouldn't be an issue currently), it should pick the smaller one.
(100.91 KB 570x336 10-04:43:42.png)

Hey, are there any PTR Jannies here? I've noticed a lot of pic related recently, some files having dozens, and I was marking them for deletion. It was taking too long so I was about to run a tag migration, removing any booru:https://v.sankakucomplex.com/data/* tags. But I figured I'd ask if that was a good idea here first.
Dev, I asked several months back about being able to remove hashes and other info from deleted files. Has any progress been made in that direction?
>>19313 >It's even simpler than that. You can just copy all your files' md5 hashes with right click > share > copy > hashes (I think you need to be in advanced mode), then just paste them in gallery downloader
Dunno if anyone cares, but I've found that this doesn't always work for rule34.xxx. They sometimes do some weird stuff to their files; I think they optimize them to save space. The point is, the image's hash sometimes changes. When you save a file from rule34.xxx, it has a filename that is an MD5 hash. In some cases, that filename MD5 is not the actual image's MD5. Here's an example: https://rule34.xxx/index.php?id=4412156&page=post&s=view and here's the API response for this image: https://api.rule34.xxx/index.php?page=dapi&s=post&q=index&id=4412156&json=1 As you can see in the API, the image's filename and hash are listed as 24f62c4a44208db5ed6ec2eac3d052d2, but if you download the image and check, its MD5 is actually ef49ff0768cbe0717b05517d5a152905. So in this case, if you import files you previously downloaded from rule34.xxx, you will lose the filename, which is what you want to search for to find the image again. So it's actually bad to import first. If you have files with these 32-character hash filenames that you might have downloaded from rule34.xxx, you want to be looking up their filenames, not their hashes.
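If you want to spot these mismatches before importing, a quick throwaway check (nothing hydrus-specific, just a loose-file helper) is to compare each file's real MD5 against its md5-style filename:

# Throwaway check: does the filename (minus extension) match the file's real MD5?
# Useful for spotting rule34.xxx files that were 'optimized' after being named.
import hashlib
import string
from pathlib import Path

def md5_of(path):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

for p in Path('.').glob('*.*'):
    name = p.stem.lower()
    if len(name) == 32 and all(c in string.hexdigits for c in name):  # md5-style booru filename
        real = md5_of(p)
        if real != name:
            print(f'{p.name}: filename says {name}, file is actually {real}')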
(96.19 KB 975x691 123.PNG)

(128.08 KB 962x682 456.PNG)

>>19402 OK, something interesting has happened. I've managed to get hydrus to download both versions of the image - one with the hash the same as the filename/API, and the other with the hash different from the filename/API (the optimized one). The filename one is 3.19 MB and has the hash 24f62c4a44208db5ed6ec2eac3d052d2, and the optimized one is 3.02 MB and has the hash ef49ff0768cbe0717b05517d5a152905. Here's another example: https://rule34.xxx/index.php?id=5010684&page=post&s=view The filename one is 946 KB and has the hash 17b4f1a378811e74536b7ac844d0b058, and the optimized one is 932 KB and has the hash ea54dbe3e8a3a0b8dec8a424113d2d94. It seems like if you already have one of the versions, and then give the url to an importer with "check hashes" and "check urls" set to "do not check", it will download the other version. I'm not sure why it's like this. My only guess is that depending on which machine (?) you connect to on the rule34.xxx side, you might get served the API hash one or the optimized one, and perhaps hydrus tries urls multiple times, so it eventually gets the other machine? I don't know how the internet works. (Sidenote: I know rule34.xxx has different cdn servers (https://rule34.xxx/index.php?page=servermap), but as you can see in the examples, both images apparently came from the same server, the 'us' one.) TL;DR: none of this matters, as the two versions are pixel-for-pixel duplicates. But it's still weird that you can get either one.
>>19338 >>19342 Hey, I am sorry again that I am back to this late. I have had a look now, and there's a couple of things I see: I tested it on this URL https://arch.b4k.co/v/thread/502380596/ . I am not familiar with this site, but I assume it is just a normal imageboard archive and the threads we are looking at are normal with a single OP and some replies. Just 4chan, right, so we are talking one file per post? The main problem I think is that the OP subsidiary parser grabs the filename ok, but the 'posts' subsidiary parser does not. I noticed first of all that the parser is very slow, even on a 1.33MB file it takes several seconds on my dev machine to parse, so I think it is grabbing a bit too much. The separation formula for the OP subsidiary parser makes 227 posts. You are grabbing everything with 'thread_image_box', but that is getting parts of all the normal posts too and there's a lot of messy results in the subsidiary parser test parse that seem to have image urls but either do or don't have md5 or filename tags. The 'posts' subsidiary parser's separation formula gets 501 separate posts, which seems to have half posts with just source time and the other half nicely parsed posts with md5, source time, url, and filename. In this case, since the 'only has source time' posts have no URL, they are being discarded, but it looks to me like some extra cruft that it would be great to remove. In any case, with your filenames, I think what is happening is the OP post subsidiary parser is grabbing mangled (no source time, yes md5, no filename) versions of everything, which is being added to the import list, and then when the 'posts' subsidiary parser tries to add its nice posts, they are all discarded because they were 'already in' the file log and it thinks they are duplicates. I expect they don't have nice source times as well as filenames. You can see there is too much messy extra gubbins if you do the full test parse on the top level parser and see there are 728 results, and scroll down to the bottom ones--they have no URL. Looking at it, I think the fix is to work on the subsidiary page parsers. Make them select just what you need. You might like to also put the 'watcher page title' in its own subsidiary page parser, like the 4chan parser does, so it isn't doing extra work and applying the title uselessly to every file post. I had a quick play around to see if I could alter the subsidiary parsers to just grab the OP/Posts and fix it for you, but that site's html is not the nicest. That got me looking around and I chased up the 'about' stuff on that site and got to here: https://foolfuuka.readthedocs.io/en/latest/code_guide/documentation/api.html If I convert the URL to that format, I get https://arch.b4k.co/_/api/chan/thread/?board=v&num=502380596 which produces some really nice JSON, much easier to parse. You have done good work on this HTML parser, but it might be easier and more future-proofed to rework it into a JSON parser and have an API-redirecting URL Class. The 4chan parser works basically the same way, so you can check it out and the related URL Classes to see what it does. Let me know if I misunderstood anything or can help with anything else.
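For anyone curious what that JSON endpoint looks like in practice, here is a rough sketch of pulling the media links out of it. The field names ('op', 'posts', 'media', 'media_link', 'media_filename') are my reading of the FoolFuuka API docs linked above, so treat them as assumptions and check them against a real response.

# Rough sketch of reading the FoolFuuka thread API; field names are taken from
# the linked docs and may need adjusting against a real response.
import json
from urllib.request import Request, urlopen

url = 'https://arch.b4k.co/_/api/chan/thread/?board=v&num=502380596'
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})  # the archive may refuse python's default agent
with urlopen(req) as resp:
    thread = json.load(resp)

# The response is keyed by thread number; each post may carry a 'media' object.
for thread_data in thread.values():
    posts = [thread_data.get('op', {})] + list(thread_data.get('posts', {}).values())
    for post in posts:
        media = post.get('media')
        if media:
            print(media.get('media_link'), media.get('media_filename'))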
>>19385 Yep, this is a problem and it is getting worse. I think all mainstream sites are going to go this way in time. The basic 'correct' answer is 'use their OAuth API'. OAuth is basically a semi-clever login system for Apps, and it works well, and modern APIs usually make parsing anything you like a breeze, but the problem is OAuth is corporate bullshit written for their benefit rather than people like us. Apps have to register tokens or run server to distribute client tokens, and there's a whole infrastructure going on that is run at the enterprise level by an army of consultants. If you've ever logged into a site using your google account, that's the basic stuff going on. It is not how the internet used to work (open and free and fun, 'hey check out what I did'), but it is how really big tech companies want it to work in future, and everyone else is along for the ride. As much as I find it distasteful, I can't avoid it, so I think at some point I'm just going to have to roll OAuth support into hydrus (there's a really nice python library for it), and rewrite the login system to handle OAuth api keys. Many sites allow users to generate their own api keys in their advanced settings somewhere, and while that allows them to track your usage and cut you off if they don't like you, it does fundamentally work, and it means hydrus doesn't have to support a key infrastructure. As an example, if you want to look around: https://github.com/Redgifs/api/wiki https://github.com/Redgifs/api/wiki/API-access The API looks great, but the access system sucks. >>19386 That 0-100 number isn't available in the jpeg header anywhere, unfortunately, and I don't know how rigorous it really is. It might just be a parameter for libjpeg or something, and most people use that, so it is what you see. Anyway, the way I do it is by pulling the jpeg 'quantization tables' (what are these? I guess some DCT gubbins for jpeg construction?) and then summing their total value. Turns out very low quality jpegs have a large sum, and high quality have a very low sum. There's a strong correlation, so it works really well, despite being slightly incomprehensible. https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/core/HydrusImageHandling.py#L644 >>19387 >>19392 Sorry for the trouble here. I had a report from a 3.11 user the other day that you have to run a later OpenCV too. Were you using my 'setup_venv' script, or doing it manually? I guess the latter, if you are creating the venv explicitly. If you use my newer script, it has a bit of interactive text now that asks for input based on your python version. I _hope_ that it walks people through correct versions of things for 3.11 now. There's more info on this here I think now, if it helps: https://hydrusnetwork.github.io/hydrus/running_from_source.html The magic Qt version for py3.11 is probably 6.4.1, and opencv-python-headless 4.5.5.64. You can find these numbers in the extra requirements.txts under install_dir/static/requirements/advanced , which is how my setup script assembles this stuff. >>19390 Not yet, but I'd love to do that! By any simple metadata really. Archive/inbox has been another talked-about one.
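To illustrate the quantization-table trick described in the post above: this is a paraphrase of the idea only, not the code at the HydrusImageHandling link, and the cutoff number is invented for the example.

# Illustration only of the 'sum the jpeg quantization tables' heuristic; bigger
# sums mean coarser quantization, which correlates with lower quality.
from PIL import Image

def jpeg_quant_sum(path):
    im = Image.open(path)
    tables = getattr(im, 'quantization', None)  # dict of table id -> 64 coefficients, jpegs only
    if not tables:
        return None
    return sum(sum(table) for table in tables.values())

def rough_quality_label(path):
    total = jpeg_quant_sum(path)
    if total is None:
        return 'not a jpeg / no tables'
    # the 3000 cutoff here is arbitrary, purely for the example
    return 'low quality' if total > 3000 else 'medium-high quality'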
>>19393 Shit, thank you, I'll fix whatever is going on with 'current and pending'. I was supposed to add 'pending only' a while ago, either the entry is bad or the label is. There's no super nice way to restart the gallery search on file limit changes, I'm afraid. If it helps, you can right-click on downloader page queries now and go straight to their search log without needing to highlight them. And the 'set options to queries' button lets you mass apply the 'higher' page options to every selected query. >>19394 Thanks, this looks awesome! I'll give it a test and roll it into the defaults. >>19398 Yeah, I will always make it optional and increasingly editable so you can set up whatever rules you like. The reason I push this 'if jpeg and png pixel dupes, throw away png' rule is that it is almost always the result of someone doing a 'copy image' from a phone/discord and then pasting that bitmap into a web upload form. It is lazy posting that results in bloated png copies of jpegs. Outside of weird examples, like perhaps a single 1x1 black pixel image, the png is always going to be a waste of space, and always always always (since pngs are lossless, jpegs not) derivative of the original jpeg. But even then I won't turn this on by default. >>19400 Sorry for the trouble--we are trying to fix it now. The new 'delete everything a user uploaded' tool I just released did not work right--it froze the whole PTR for six hours and didn't delete what we wanted it to. I'm going to have another go this week, and then they'll try and delete these booru: tags for real. I'm pretty sure they added booru: to the PTR tag filter about a week ago, so no new ones can be created. Feel free to run a mass delete using migrate tags, but maybe wait a couple weeks since I want to make sure I got this command right, and we are testing it on a subset of these booru tags. Once we are happy with that, please do, and it'll help them chase down the last stuff. >>19401 Not much, I am afraid! It will need me to do a lot more database code cleaning and overhauling to build the 'recycling' infrastructure. If I told you a manual database hack to 'rename' hashes, keep doing that. >>19402 >>19403 I can't talk super confidently here since I haven't looked into it, but the big CDNs like Cloudflare cause this a lot. The files that get hit a lot (like those on, I assume, the non-API domain) will get automatically shuffled into a cache that then runs 'optimisation' on the file. For pngs, that preserves the image perfectly but will strip out the header. For jpegs it will actually alter the image -_- It can be tricky to test this stuff, btw, since it is often caused by location-based caches. You can even get a different file from the same URL a couple hours later. My basic conclusion, in devving hydrus, has been that you can trust a URL to generally give you the same basic thing, but never trust it to give you the same bytes. If you think a certain hash lives at a certain URL, don't have 100% confidence, because there are about five different things that can change it in one way or another. Same deal if you are sending a file to someone via discord or something--if you need it to be hash-perfect, zip it up first.
>>19404 I didn't make the parser so any praise should be directed at someone else lmao. I guess I'll just tag that thread by hand and then try to avoid using the site going forward. Thanks for the write-up though, hopefully the original author or someone who understands it can make an improved version. Parsers are a little too complicated for me currently so I can't really use this information unfortunately.
>>19406 >For jpegs it will actually alter the image -_-
I was worried about that too, but both of the examples I showed are actually jpegs, and they're still pixel-for-pixel duplicates! It appears to be lossless compression in rule34.xxx's case. It's just something you have to keep in mind if you want to fetch tags for files you've previously downloaded from rule34.xxx outside of hydrus (I have a lot of those). It's particularly an issue if you've renamed your files... though I assume people who install the PTR won't have this problem at all.
Dev, I sent an email regarding some progress I made on embedding mpv in Qubes. It's from a cock.li owned address so you may need to check your spam. >>11773 This would be nice, sometimes I check "download previously deleted files" when downloading an artist's stuff and end up downloading files I've already deleted in the duplicates filter.
I really wish that when you mark files as same quality duplicates, instead of them continuing to act like independent files that show up in searches together and have separate tags, they would just be treated as the same file by hydrus, with identical tags, and without both of them showing up in a single search. It would also be cool if the clickable known urls for same quality duplicates showed up when looking at a file in the media viewer. It's annoying to see a file and think "wait, I thought I got this file from this website, but it's not here", only to later see that a duplicate of the file in your db actually has that url as a known url.
>>19410 also great work so far this is a really cool program. It just has a lot of little annoyances like this, that's all.
Hello, I am new here. I have a question. How can I strip some tags that I'll never use, for both future and past files? For example, I don't want any of my downloads or my pics to have the "black_hair" tag.
>>19410 why not just click "this is better, delete the other"? if you don't want both, why keep both??? >>19412 if you go to tags > manage tag display and search, you can set it so certain tags or namespaces won't display in the program. your files will still have the tag, but it just won't be visible. you'll have to set it up for each tag service and for both single/multiple tag views though. the best way to do this is to set up a favourite. in the "edit tag filter" window, add the tags you want to blacklist, and then click "save" at the top to save it as a favourite. then in the other "edit tag filter" windows, you can just click "load" and load up those tags. but if you ever want to update which tags are in your blacklist you'll have to save it as a favourite again and load it in each filter again.
>>19413 Thanks for the answer. I figured that out, but I asked to find out if there is a better way.
>>19414 what do you mean by "a better way"?
Any way we can get this into Hydrus? AI tagger. https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags It looks at a pic and comes up with some really good tags. After downloading from Pixiv, it would be a GOD SEND, as those guys don't tag much (and their pics go the whole range of fetishes, some of which curl my toes :) ). I see at the bottom of their web page that it has an API :) hint hint! I've tried it, and it does a really good job of making MANY tags, right down to the shape of the hair clip in the girl's hair. Amazing!
Here's an example of the tags I got out of a pic of 4 anime girls. multiple_girls, breasts, chain, 4girls, nipples, blue_eyes, small_breasts, nude, blonde_hair, red_eyes, cuffs, v, bdsm, collar, red_hair, long_hair, yellow_eyes, bound, bondage, pussy, slave, censored, navel, shackles, one_eye_closed, pink_hair, smile, short_hair, looking_at_viewer, braid, ponytail, tears, green_eyes, white_hair, blue_hair, blush, restrained, half_updo Pretty good for AI!
Here's another one that impressed me. 1girl, chalkboard, pantyhose, gag, improvised gag, bound, bondage, solo, bdsm, teacher, glasses, skirt, english text, high heels, brown hair, rope, pencil skirt, gagged, short hair, tape, arms behind back, tape gag, brown eyes, belt, blush, shirt, shibari, speech bubble, indoors, white shirt, black skirt, miniskirt, chalk, office lady It can even tell she's a teacher! Pretty impressive.
>>19404 Hi Devanon, this might be a bit niche, but would it be possible to have an import option that updates the "imported to hydrus time" for files already existing in the database? This would be helpful when importing image-sets with a set order, allowing the files to be presented in the correct order when sorting by import time. Currently the workaround would be to delete all the files and reimport them, but that's quite cumbersome.
>>19416 >I see at the bottom of their web page, it has an API :) hint hint! hydrus also has an API :)
>>19416 hydrus-dd already exists, which runs locally. The downside is that it uses deepdanbooru which is apparently slower and less accurate than SmilingWolf, though the hydrus-dd dev has expressed interest in adding support for other models.
Hydev I have some questions about file viewing statistics. >Do views count in the archive/delete filter? >If I select an image in one page and it appears in the preview viewer, then I switch to another page, the first page will remember what was in the preview viewer when I come back to it next. Was it counting up the viewing time while I was on a different page? >If the media or preview viewer is open but minimized does it still count up the viewing time? >If the media or preview viewer is open but isn't the focused window does it still count up the viewing time? >If the media or preview viewer is just an "open externally" button does it still count up the viewing time? >If the media or preview viewer has some media with duration, does it still count up the viewing time if the media is paused? >I noticed that in the options you can set a cap for the media/preview viewer for a maximum time for a view. I'm not sure how that works for media with duration. Does that mean that if I watch something longer than that time, that extra time I spent watching won't be recorded? For example, if the cap is 10 minutes, and I watch something 20 minutes long in one go, will it only record it as 10 minutes of viewing time? Thanks for reading. And thanks for making hydrus, it is an incredible program!
>>19421 That SmilingWolf is REALLY impressive. I hope that we can get that incorporated into Hydrus. Would do wonders for the pics with few tags.
>>19421 >>19423 I've also noticed on testing, that SmilingWolf does a much better job of detecting loli, which Pixiv seems to have a problem tagging. I'd really like to get those pics out of my collection, and an autotagger that could catch them would be perfect.
>>19413 >why not just click "this is better, delete the other"? if you don't want both, why keep both??? I want them both to be in my collection because I want to have hydrus recognize that I have the files at the known urls of all of them. Also, the hydrus help pages explicitly say to mark them as same quality dupes if you don't see that one's better than the other, so that's what I've been doing for years at this point.
(27.39 KB 591x260 1.png)

Hi, can we have the last N custom reasons remembered? It's tedious to write them out repeatedly when you sequentially edit tags for large sets of files.
>>19425 >I want them both to be in my collection because I want to have hydrus recognize that I have the files at the known urls of all of them. on the duplicates page, click "edit default metadata merge options", then pick 'this is better' or 'same quality' and at the bottom set "sync known urls" to "copy from worse to better". now the duplicate filter will copy urls as well. >Also, the hydrus help pages explicitly say to mark them as same quality dupes if you don't see that one's better than the other, so that's what I've been doing for years at this point. dunno why it says that. why would you need two images that are the same? the duplicate filter lets you copy over all the important stuff to one file. even if they're identical to the human eye i just pick one to be the 'this is better, delete the other' so that everything gets merged together.
>>19427 >on the duplicates page, click "edit default metadata merge options", then pick 'this is better' or 'same quality' and at the bottom set "sync known urls" to "copy from worse to better". now the duplicate filter will copy urls as well. I was told before by hydrus dev himself not to do that. He said that it would mess with the downloading logic if known urls didn't actually point to the exact file that they say they do. This was around a year ago or so, but he said then that keeping both files was indeed the thing you were supposed to do in situations like that, so the downloader logic has correct information. I'm fine with keeping both. I just wish it was less of a hassle to deal with having both, that's all.
>>19428 you're right, it would mess with the downloading logic in the sense that you would be lying to hydrus about where a file came from. but if you considered the images as being the same, that shouldn't matter? i suppose that would be a problem if someone did 'this is better, delete the other' on images that aren't actually identical and then later gave hydrus the url again and it added incorrect tags. do people use the duplicate filter like that? here's another option: if you're really keen on knowing whether you downloaded an image from a certain site, you could edit your parsers to grab the domain name from the url and put it under a namespace like "downloaded from" or something. then you can just do 'this is better, delete the other' and when the tags get merged you'll still know if you previously downloaded a file from that website even if it got deleted.
>trying to make downloader for h-flash.com >make a URL class with type 'post url' that matches my example post's URL >make a parser that extracts the download link from the HTML of the example URL >ensure the two are linked >paste URL into URL import page <finds 1 new URL, then tries to download https://h-flash.com/data instead of the URL the parser gets What a fuck? Anyone have a clue as to what's going on, or would I need to post what I have so far?
>>19430 I realize what's going on now: it's extracting the download URL and then applying the post URL normalization. h-flash has URLs that go like h-flash.com/flash-name-here and actually houses the swf files at h-flash.com/data/swf/... so I need to figure out how to make an exception. Will post here when I get it working.
>>19430 Hard to say. Sounds like you messed up somewhere. Use the test windows to see what Hydrus is seeing.
I had a good week. I mostly worked on bug fixes and quality of life, particularly on the tag autocomplete interface. The release should be as normal tomorrow.
Is there a way to escape colons? I noticed that the pixiv downloader is grabbing tags with colons in them, but hydrus is interpreting them as namespaced tags. If there's a way to escape colons so that those can act like they're supposed to and just be unnamespaced tags with colons in them, can you adjust the downloader to do that? thanks.
https://www.youtube.com/watch?v=outcGtbnMuQ If you are an engineer but didn't see the above vid yesterday, I strongly recommend it, the whole thing. We're clearly at the inflection point of this tech right now, and if you aren't experimenting with integrating some variant of it in some way into your workflow, I think it is time. I just started using it for some scripting jobs I had been putting off this week and it honestly really helped out. There are problems aplenty in the whole space, but I'm pretty AI-pilled right now. windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v520/Hydrus.Network.520.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v520/Hydrus.Network.520.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v520/Hydrus.Network.520.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v520/Hydrus.Network.520.-.Linux.-.Executable.tar.gz I had a good week. There's a mix of bug fixes and improvements to quality of life, mostly in the tag autocomplete interface. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html highlights So autocomplete has a couple new things. First off, there's a new 'all files ever imported/deleted' entry under the file domain button. This searches everything your client has ever seen, runs super fast, and works with 'all known tags'. Also, when editing tags, I've tightened up the selection rules and reduced the sibling spam--now, you should get the 'ideal' of what you typed at the top, then what you actually typed (if that differs), and then normal results. Also, the various paste buttons tucked into edit autocompletes now only ever add items--if any of what you paste is already in the accompanying list, it will no longer be removed. And I changed the '(displays as xxx)' sibling indicator. It is now a simple unicode arrow, and because I wanted to play with gradients, I made it fade its colour from one namespace to another. If you don't like the text or the fade, you can change them under options->tag presentation. There's a new Deviant Art downloader, thanks to a user's work, that pulls the 'original' quality from the 'download' button, which some creators turn on for logged-in users. There are also five new File URL URL Classes, which you can search with system:known url, that will categorise your existing DA downloads according to their quality. Enterprising advanced users might like to play around with this! next week Next week is slated for code cleanup. I want to do some boring database breakup work!
>>19421 Well, I left the dev a comment on the hydrus-dd site, saying I hope he will bring in SmilingWolf's stuff. Someone else had already beat me to it a month ago, so I joined them in praising it. Here's hoping! I like that it seems to detect loli pretty well, as a lot of Pixiv artists aren't labeling it. I would like to purge my collection of it.
>>19435 >And I changed the '(displays as xxx)' sibling indicator. It is now a simple unicode arrow Nice change! Could you let us set the color of the arrow?
>>19435 >So autocomplete has a couple new things. First off, there's a new 'all files ever imported/deleted' entry under the file domain button. This searches everything your client has ever seen, runs super fast, and works with 'all known tags'. Also, when editing tags, I've tightened up the selection rules and reduced the sibling spam--now, you should get the 'ideal' of what you typed at the top, then what you actually typed (if that differs), and then normal results. Also, the various paste buttons tucked into edit autocompletes now only ever add items--if any of what you paste is already in the accompanying list, it won't be removed now. >And I changed the '(displays as xxx)' sibling indicator. It is now a simple unicode arrow, and because I wanted to play with gradients, I made it fade its colour from one namespace to another. Fantastic work.
>>19434 Double colons, I think (at least that's what ":d" looked like inside sidecars, i.e. "::d", when I imported some files).
Tried to work on a json sidecar importer for use with gallery-dl, but I wasn't able to get it to do multiple types of parsing. So for example I can get general tags but not character tags at the same time (different entries in the json). I also didn't see a way to parse anything aside from tags, like URLs or modification times. Am I just blind or is this not possible yet?
Hey guys, does anyone know how to limit the download queue to just one download at a time? I know you can do like 20 in parallel, but my machine is old, and I want to set up like a "magazine" of download queues with just one going at a time. So, 19 would be pending, and one would be downloading. When the 1st one finished downloading, it would say DONE, and then the next queue under it would start downloading until it is DONE, and so on. Thanks!
Also, found a bug. If you want to open a pic in file browser, you have to do it twice. The first time just starts at the top of the folder. The second time opens file explorer and highlights the pic.
>>19439 thanks for the idea, but I tried it and it doesn't work. It just interprets that as still being a namespaced tag, but now the "subtag" starts with a colon. Backslashes don't work either, which is sort of the standard way to escape characters. So I don't know what to do.
>>19443 The only thing I could think of is the way URLs do escape characters, using the % sign and the hex value of the character's ASCII code. Like, if you wanted to escape a # sign, it would be %23; 0x23 is the ASCII code for the # character.
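Just to illustrate that suggestion (hydrus does not actually do this; it is only the URL-style scheme being described), percent-encoding round-trips cleanly in python:

# URL-style percent-encoding of a tag containing a colon, illustration only.
from urllib.parse import quote, unquote

print(quote(':d', safe=''))   # '%3Ad'
print(unquote('%3Ad'))        # ':d'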
>>19408 Interesting! I think I've encountered this a couple times before--there's a little metadata in jpegs too, so I suppose some optimisers can strip them--but it isn't common. As you say, the good news here for PTR users is that someone probably downloaded the same file as you, whatever version you have, and/or they duplicate-merged with that file, and you then get the tags downstream of that. I wonder what the right default 'automatic' thing to do with pixel-duplicate jpegs is--probably prefer the larger filesize, since it might have some useful EXIF or something, but let users decide otherwise if they want. >>19409 I am very sorry, I don't have that in my inbox or spam. Maybe I deleted it from spam by accident, since I do clear it regularly. Can you send it again, hydrus.admin@gmail.com, or post here? >>19410 Yeah, my dream for the future of duplicates is to have a 'live' syncing connection where the files are thereafter 'the same', so any change you make to one gets merged with the other, but that system is so much more complicated than what we have now, and what we have now has a lot of duct tape, that I am not ready to move up. As we get a ton more duplicates with automatic duplicate resolution and duplicate stuff on the API happening, I'm going to have to revisit this and plan out what I can feasibly do. >>19411 Thanks, glad you like it! >>19416 Yeah, I am very excited about AI tagging. In the coming years I expect it to take over the PTR for many classes of tag like character names and clothing. In general, I think the best solution is going to be through the hydrus API for now, so you have a script that searches for and grabs files from hydrus, sends them to the classifier for analysis, and then sends the tags back to hydrus. I know some API guys are working on ad-hoc scripts for this now and if everything works nicely they plan to figure out some nicer, generalised systems like how hydrus-dd works. I know some guys who are training these systems using the PTR's data right now! In future I'd like to establish some sort of protocol for the manage tags dialog, in the place where advanced 'file lookup scripts' are atm, or 'related tags', where you can set up some AI profiles by specifying folder location or exe path or, more likely, a script that a hydrus dude writes, whatever works best, and then select one from a dropdown and click a button in the hydrus dialog and this process will be triggered and you'll see some tag suggestions. In the more distant future, I wouldn't be surprised if hydrus gets an automatic maintenance system that scans new files and applies AI tags to a separate tag service without you having to do anything.
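To make that 'script against the Client API' workflow concrete, here is the rough shape of such a thing. The endpoint and parameter names are written from memory of the Client API docs and should be verified against them, and run_classifier() is a placeholder for whatever tagger you actually use.

# Rough shape of the 'external AI tagger' workflow described above. Endpoint and
# parameter names should be checked against the Client API docs; the access key
# needs search-files and add-tags permissions.
import json
import requests

API = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'your-access-key-here'}

def run_classifier(image_bytes):
    # placeholder: send image_bytes to your model/service of choice, get tags back
    return ['1girl', 'blue eyes']

search = requests.get(
    f'{API}/get_files/search_files',
    headers=HEADERS,
    params={'tags': json.dumps(['system:inbox']), 'return_hashes': 'true'},
)
for sha256 in search.json().get('hashes', []):
    file_resp = requests.get(f'{API}/get_files/file', headers=HEADERS, params={'hash': sha256})
    tags = run_classifier(file_resp.content)
    requests.post(
        f'{API}/add_tags/add_tags',
        headers=HEADERS,
        json={'hash': sha256, 'service_names_to_tags': {'my tags': tags}},
    )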
>>19419 Yes, I want some answers here! First step will be to let you manually edit all the timestamps of a file in the client. My hope is this problem is rare enough that that will be enough, and then I was planning to open the edit commands to the API anyway, so for larger jobs people can write scripts to figure out the answer. Having an import option to fudge an answer is an interesting idea--I think I'll start with the manual dialog, which I guess will work on one file at a time, and when that proves awkward and slow, maybe we can figure out an extension to that system that says something like 'set this file to this timestamp, and set the five before it to a cascade, each one one second earlier'. Idk, but I know exactly your problem and I want some solutions.

>>19422
>Do views count in the archive/delete filter?
Yes, but now I think of it, I should add a checkbox to turn it off, like for the duplicate filter. As for the next question: briefly speaking, it isn't super clever--the rule is that it does a 'save a count and current viewtime' call on the previous active media every time a media viewer gets set to 'no media', with the view duration being the time since it was loaded. As long as something is 'open', it is counting.
>If I select an image in one page and it appears in the preview viewer, then I switch to another page, the first page will remember what was in the preview viewer when I come back to it next. Was it counting up the viewing time while I was on a different page?
No, that counts as two separate views, only counting when it is visible.
>If the media or preview viewer is open but minimized does it still count up the viewing time?
>If the media or preview viewer is open but isn't the focused window does it still count up the viewing time?
Yes in both cases.
>If the media or preview viewer is just an "open externally" button does it still count up the viewing time?
Yes, I guess it does!
>If the media or preview viewer has some media with duration, does it still count up the viewing time if the media is paused?
Yes, still counts up.
>I noticed that in the options you can set a cap for the media/preview viewer for a maximum time for a view. I'm not sure how that works for media with duration. Does that mean that if I watch something longer than that time, that extra time I spent watching won't be recorded? For example, if the cap is 10 minutes, and I watch something 20 minutes long in one go, will it only record it as 10 minutes of viewing time?
Yes. If the max amount of view time is 10 minutes but you look at it for 20, it will save 10 minutes when you click off the media.
This is interesting. I don't use view time all that much in my personal client, so I hadn't thought much about how it would track on video. Sounds like I could do with some options here. And maybe I could pause the viewtime if you are looking at video that can be paused. And turn off tracking on 'open externally' buttons. What would you like to happen? The main reason for the 'don't save more than x minutes for a single view' thing is just to stop you getting a record for 27 hours if you click a file in the preview viewer and forget about it, or the same situation for a media viewer you accidentally left open below other windows. Maybe for videos, the max time should be the max of that value and the actual video duration?
>>19445
>Can you send it again, hydrus.admin@gmail.com, or post here?
Sure. It's a bit long, which is why I sent it via email. I'll post it here since there's a slim chance others will find it helpful. Here it is:

I think mpv in Qubes may be fixable without a total rewrite of the mpv integration. I installed Hydrus on a debian 11 template again, this time using "test-old-old" on the venv script. Qt6 seems to be detected and works, but mpv still doesn't embed properly out of the box. All results below are with the preview and media viewer set to use mpv.

I noticed that exactly half the time mpv embeds properly. The first time it tried to embed it would open in a rogue mpv window, and the second time it would embed into Hydrus instead. Selecting a video and using the arrows to move the selection would go rogue-embed-rogue-embed and so on. Going from preview to media viewer continues whatever embed state, so if it appeared in the rogue window in the preview it would start in the rogue window when opened in the media viewer as well. Once in the viewer, the same pattern appears; going to the next file switches rogue-embed-rogue etc.

While messing around with the rogue mpv window, I got it to embed 100% of the time with a magic ritual. I can reproduce this; I've done it over several restarts of Hydrus and it has worked every time. If I click a video to preview, "close" the hydrus mpv player window that pops up (hit X, which doesn't close the window) *while it is displaying a video preview*, and then open and close the media viewer, it will from then on properly embed into both the preview and media viewer with sound every time. The rogue mpv window is still there, silent and frozen on whatever frame it was on when it was "closed", but it can be minimized just fine. I can then use Hydrus normally with mpv working, opening/closing the media viewer and changing the previewed file with no issue. Error toasts pop up saying libmpv core has been shutdown if I try to change the volume via the slider, and two appear the first time I do this process per boot.

If I close the mpv window while it is black (when mpv is properly embedding into hydrus), the actually embedded mpv will continue to play on top of anything else that gets embedded (both audio and video). The audio also continues even when nothing is selected. Pretty easy to avoid doing this though.

I think mpv in Hydrus can work on Qubes in its current implementation; it just needs some tweaking. This may also fix that weird window manager thing you mentioned another anon having. I don't know why this magic works, but maybe these tracebacks will help. If there's anything I can do to help (trying whatever in Qubes) I'll gladly do it.
Initially "closing" the rogue mpv window after a video is selected and it pops up (while the rogue window is displaying the preview): v519, linux, source ShutdownError libmpv core has been shutdown File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1257, in paintEvent self._Redraw( painter ) File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1013, in _Redraw self._last_drawn_info = self._GetAnimationBarStatus() File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 972, in _GetAnimationBarStatus return self._media_window.GetAnimationBarStatus() File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUIMPV.py", line 323, in GetAnimationBarStatus current_timestamp_s = self._player.time_pos File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 1777, in __getattr__ return self._get_property(_py_to_mpv(name), lazy_decoder) File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 1751, in _get_property self.check_core_alive() File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 901, in check_core_alive raise ShutdownError('libmpv core has been shutdown') Opening the same file in the media viewer after "closing" the mpv window (first time only): v519, linux, source ShutdownError libmpv core has been shutdown File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvas.py", line 1343, in ClearMedia Canvas.ClearMedia( self ) File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvas.py", line 737, in ClearMedia self.SetMedia( None )
[Expand Post] File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvas.py", line 1577, in SetMedia Canvas.SetMedia( self, media ) File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvas.py", line 1221, in SetMedia self._media_container.ClearMedia() File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1781, in ClearMedia self._DestroyOrHideThisMediaWindow( self._media_window ) File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1468, in _DestroyOrHideThisMediaWindow media_window.ClearMedia() File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUIMPV.py", line 270, in ClearMedia self.SetMedia( None ) File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUIMPV.py", line 560, in SetMedia self._player.pause = True File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 1782, in __setattr__ self._set_property(_py_to_mpv(name), value) File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 1768, in _set_property self.check_core_alive() File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 901, in check_core_alive raise ShutdownError('libmpv core has been shutdown') Trying to use the volume slider after the ritual: v519, linux, source ShutdownError libmpv core has been shutdown Traceback (most recent call last): File "/home/user/Hydrus/git/hydrus/core/HydrusPubSub.py", line 138, in Process callable( *args, **kwargs ) File "/home/user/Hydrus/git/hydrus/client/gui/canvas/ClientGUIMPV.py", line 616, in UpdateAudioVolume self._player.volume = self._GetCorrectCurrentVolume() File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 1782, in __setattr__ self._set_property(_py_to_mpv(name), value) File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 1768, in _set_property self.check_core_alive() File "/home/user/Hydrus/git/venv/lib/python3.9/site-packages/mpv.py", line 901, in check_core_alive raise ShutdownError('libmpv core has been shutdown') mpv.ShutdownError: libmpv core has been shutdown
>>19426 Great idea!

>>19431 Ah yeah, these situations can be tricky. Check out the 'do not allow any extra path components?' checkbox in the options of the url class, which can help in some situations where you have over-matching. Another trick, if you don't have one made, is to set an explicit 'File URL' URL class for your swfs here. Some of the hydrus internal logic benefits from an explicit file url declaration like that and can figure out what is what better. A pain in the ass trick is to make the path component looking for 'flash-name-here' actually a regex that says 'anything except "data"'. I think: ^(?!data).+$ And remember you can paste both of your test URLs up in 'manage url classes' any time to see what it wants to do with the current url classes.

>>19434 Damn, can you give me an example URL where this happens? There aren't nice solutions here, but I can have a think. I guess forcing a parser to say 'hey these are all unnamespaced, so if you see colons, fudge it so it works'. I realise from your later discussion that there isn't a nice way to fudge it from the user side, either. I'll have a play around--backslash to escape is probably a good answer, but I'll see.

>>19437 Sure!

>>19440 You should be able to do tags and URLs atm. The pipeline can take from multiple inputs and send to one output, so maybe you are looking at something like pic related? You might be able to merge the two tag things into one, like my second pic related. (The 'conversion' here might be a string converter that prepends 'character:' to what it is parsing). To switch to URLs, click the 'a file's tags' button in the 'destination' box of the router editor dialog. It'll flip over to URLs. I expect to make that a dropdown when I add timestamps, ratings, and whatever else. Does this help, or are you trying something like this and it just doesn't work?

>>19441 If you don't mind a bullshit hacky way, try clicking the cog icon on the gallery downloader page and hit 'bundle multiple pasted queries into one importer'. That'll merge everything into one ugly mega-downloader, but it'll only work on things one at a time. Have a play and don't overload it. Otherwise, on that same cog icon, hit 'start new importers search paused', and then babysit your downloads. I do this myself sometimes.

>>19442 Thanks--I think this is a bug in Windows Explorer, maybe related to opening on a directory with many thousands of files. I think it just misses its 'now time to select' window when the window takes more than a second to load its contents. Either way, I just send:
explorer /select path
And sometimes it works the first time; the rest of the time it usually works on the second try.
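As an aside, if anyone wants to script that themselves, Explorer's select switch is normally written with a trailing comma before the path. A minimal Python sketch (the path is a placeholder):

import subprocess

def show_in_explorer(path):
    # Open a Windows Explorer window with the given file pre-selected.
    # Note the switch is '/select,' -- comma included.
    subprocess.Popen(['explorer', '/select,', path])

show_in_explorer(r'C:\Users\Anon\Pictures\example.jpg')  # placeholder path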
>>19447 Thank you, this is really useful. It sounds like I should add some EXPERIMENTAL/debug checkboxes around this stuff so we can test out different initialisation and migration logic and figure out what works best for you. I feel fairly confident I can replicate some of what you are describing here automatically.

As background, due to mpv-in-Qt being a crashy mess at the best of times, I never destroy an mpv window (previously, it would crash the program immediately). Instead, when I am done with mpv, the embedded mpv window handle gets set to 'no media', hidden, and reparented to the main window, and goes into a pool of available mpv windows. Any time a media viewer needs to show mpv, it recycles from the pool. When you flick from preview viewer to media viewer, there's usually a frame or so when both windows are using mpv, so you generally end up with two mpv players in a normal client, and flicking between preview and media view tends to flick between the two players. This sounds very related to your experience here.

I'll see what I can do, so once it is ready, please have a play, bear with the crashes, and let me know what works. Thanks again.
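For anyone curious, the pooling idea described there boils down to something like this; a bare-bones sketch of the concept only, with made-up class and method names, not the actual hydrus code:

class MPVWindowPool:
    """Recycle mpv widgets instead of destroying them, since destroying tends to crash."""

    def __init__(self):
        self._available = []

    def checkout(self, make_new_window):
        # Reuse an idle player if one is parked, otherwise create a fresh one.
        if self._available:
            return self._available.pop()
        return make_new_window()

    def release(self, window):
        # A 'done' player is cleared, hidden, and parked for later reuse.
        window.ClearMedia()
        window.hide()
        self._available.append(window)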
>>19448 >JSON sidecar stuff Yeah, I just wasn't looking in the right place. Thanks! I think I'll hold off on it until I can also assign source time from json. I have plenty of files that need tagging in the meantime... Was the ability to save and reuse json importers added recently or am I misremembering?
I got annoyed by the volume jumping up and down when I swap from preview to full view. I went into the options for audio and unchecked the setting that says the preview window gets its own volume slider. This has the effect of making it so the slider on the preview window no longer affects the audio. This is very odd to me. Wouldn't it make more sense for that option to make it so that the preview window volume and the full view volume sliders are just one master audio slider? On version 518.
>>19446 >What would you like to happen? >Maybe for videos, the max time should be the max of that value and the actual video duration? >And maybe I could pause the viewtime if you are looking at video that can be paused. That is exactly what I was going to suggest. The cap increases if the media you're looking at is longer than the cap, and the time pauses if you pause the media.
>>19449 >>19441 Thanks! That's what I was looking for!
I use the AUR for Hydrus. Where are Hydrus's install files, database, etc. located in the OS?
For some reason I can't download anymore and I am stuck in "Waiting for work slot". What do? Pretty please
>>19454 It's in the PKGBUILD. You do read the PKGBUILDs of every AUR package you install, don't you, Arch user?
I just switched to btrfs. Is it ok to use Hydrus with CoW?
Whoops, didn't mean to reply. It's a red letter day, I've just seen an .avif file in the wild (in a 4chan thread discussing jxl). Forgive me if you've answered this already hydev, but have you put any thought into supporting AVIF in Hydrus? I think I saw you mention JPEG XL once but I don't know about AVIF.
(19.40 KB 619x426 Capture.PNG)

I'm making a parser that grabs some text to put in a note. However, the text has some weird characters like these: https://www.compart.com/en/unicode/U+2014 https://www.compart.com/en/unicode/U+2019 https://www.compart.com/en/unicode/U+201C https://www.compart.com/en/unicode/U+201D which when parsed show up like pic related. Is there some way to fix this? Can I tell the parser to use a different encoding or something? Also, dunno if it matters, but I'm grabbing the text from a JSON.
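For what it's worth, those characters (the em dash and curly quotes) are ordinary UTF-8, and JSON is UTF-8 by definition; mangled multi-character output usually means the bytes got decoded with the wrong codec somewhere along the way. A quick Python demonstration of that failure mode (not hydrus code, just an illustration):

text = '\u2014 \u2019 \u201c'   # em dash, right single quote, left double quote

# Correct round trip: decode the UTF-8 bytes as UTF-8.
assert text.encode('utf-8').decode('utf-8') == text

# Typical mojibake: the same UTF-8 bytes decoded as cp1252 by mistake.
print(text.encode('utf-8').decode('cp1252'))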
I had a good week. I cleaned some code and fixed a bunch of small issues in tag parsing, tag display, file import options, file delete reasons, and file viewing statistics. The release should be as normal tomorrow.
(32.41 KB 158x200 zip.png)

Does anyone put archives of images in their Hydrus install? What do you use to view them if so? I recently came across an image viewer called pqiv which also supports viewing images and gifs inside of archives, pretty cool. I'm wondering if something like it could help with archives in Hydrus, setting it as an external program for them would at least let me easily view them after importing. I've got some doujins just kind of sitting in my downloads folder and it would be nice to get them into my client, though they would lack thumbnails.
>>19461 >What do you use to view them if so? CBR. >though they would lack thumbnails. I've asked Hydev to implement this. It's on his to do list.
https://www.youtube.com/watch?v=F7gXLqMe5Bw

Sorry, these links were for v520 by accident earlier!

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v521/Hydrus.Network.521.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v521/Hydrus.Network.521.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v521/Hydrus.Network.521.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v521/Hydrus.Network.521.-.Linux.-.Executable.tar.gz

I had a good week mostly cleaning things and fixing some unusual issues. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

highlights

Building on last week's custom 'sibling' connection, options->tag presentation now lets you edit the 'OR' connector too, and you can now set a specific colour for them both using the existing namespace colours.

If you don't like file viewing statistics tracking your viewing in the archive/delete filter, you can now turn it off. Also, when you are looking at media with duration, the 'max' time it can record for that video or audio is now the larger of your 'max' setting and five times that duration. So, if you are capped to ten minutes max normally, but you loop or skip around on a twenty minute vid for thirty mins, it'll save the full thirty mins. I'd like to add some pause tech in future too, to account for minimising and pausing media with duration. Thanks to the user who provided feedback on this stuff.

The file delete dialog should be better, in simple and advanced mode, at not overwriting existing delete reasons, which happens most when you manually delete from the trash. If you use this stuff a lot, let me know how it goes.

The rest is a bit advanced and niche. Hydrus parsers can now support unnamespaced tags with colons in them, essentially an extension of the existing ':p' tags, and users can enter these tags too by starting a tag with a colon. If you make parsers, check out the changelog for how tag content parsers can now support 'any' vs 'unnamespaced' namespace. I also fixed some weird file domain handling in importers.

next week

I really didn't have time for everything I wanted this week, so I'd like to figure out custom/remembered 'reasons' for the petition workflows, more file viewing stuff, http header editing in the API, and maybe notes support in the sidecar system.
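The new cap rule for media with duration fits in one line; a rough sketch of the logic as described above, with variable names that are mine rather than hydrus's:

def effective_view_cap(max_view_seconds, media_duration_seconds=None):
    # For media with duration, allow up to five times the duration,
    # but never less than the normal cap.
    if media_duration_seconds is None:
        return max_view_seconds
    return max(max_view_seconds, 5 * media_duration_seconds)

# A 10 minute cap and a 20 minute video: up to 100 minutes of viewing can be recorded.
print(effective_view_cap(600, 1200))  # 6000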
Hey hydev, would it be possible to have an option to send a tag/ selection of tags directly to a duplicate processing page? e.g. select "creator:sir-mix-a-lot" and "series:baby's got back" -> right click -> search -> open in duplicate processor?
I know it's probably asking for a lot, but I wish there was an option to "ignore" an item from the blacklist if it's explicitly requested in a search / subscription. For example, my general blacklist includes "pokemon" because 99% of the time I dislike the content, but for a specific artist I want to search for "artist pokemon" because I appreciate their art even on pokemon content. I know I can already do this by turning my blacklist off or switching to a new one, and I know it's kind of an edge case.
>>19463 The files linked are versioned 520.
I have a terrible, terrible bug report and I'm sorry about it. It's terrible because I can't provide a good way to reproduce it; it just happens, sometimes. I have nothing to show, not even logs (because they don't mention anything about this).

When this bug occurs, 1/2 video won't have mouse controls working (no click to pause, no middle click to quit that view, no custom shortcuts...), but escape still works. Scroll to get to another image/video works, too. Everything that's on the space around the video works. I'm not saying 50%; I'm saying 1/2 (as in "even/odd"). If I see this issue on a video and press escape to quit, then re-open that video, mouse controls will work. Then esc and re-opening that same video, mouse controls will not be registered again. Note that in this case, the mini-player under the tags counts as a video; if it's one of the 1/2 videos, I can't double click to open fullscreen, for example. It seems like the "counter" is 1/2 *media*, in general; changing to a picture then back to a "buggy" video won't restore shortcuts on the video, because it's back to the same 1/2. But pictures will never have a shortcut problem.

If I see this issue, it will occur again on different pages, or even if I open a completely new tab, but the "counter" is per-tab, meaning that I can have (or not have) this issue several times in a row on different tabs, but they'll still see the odd/even thing within a tab. If I restart Hydrus (so same saved session and everything), the problem goes away completely. Until it randomly comes back, of course. I'd say this happens in maybe 1 in 15 sessions, because I open Hydrus once every day and feel like this happens twice a month, but I'm not sure; I can't say that there's any regularity.

I'm currently running 521 (but this has been happening for a very long time, can't tell you when exactly, sorry), with Python 3.9.16 on Linux from source (but I also saw it happen on the Linux binary). ffmpeg is 5.0.2, and my mpv API version is currently 1.109. I'm sorry I don't have more to show about this. Again, it's not really a big deal because I can just restart my client, so don't feel bad if my "randomly odd videos won't work" bug report doesn't ring any bells. Thanks for your hard work, as always!
(4.53 KB 163x136 24-01:21:26.png)

I'm getting "TagSizeException" errors on an image that has just... a single colon as one of the tags. I think it's due to the new "::" behavior. It's literally just a `:'. Traceback attached v521, linux, source DBException TagSizeException: "::" tag seems not valid--when cleaned, it ends up with zero size! Traceback (most recent call last): File "/opt/hydrus/hydrus/core/HydrusThreading.py", line 401, in run callable( *args, **kwargs ) File "/opt/hydrus/hydrus/client/gui/ClientGUITagSuggestions.py", line 468, in do_it ( num_done, num_to_do, num_skipped, predicates ) = HG.client_controller.Read( File "/opt/hydrus/hydrus/core/HydrusController.py", line 684, in Read return self._Read( action, *args, **kwargs ) File "/opt/hydrus/hydrus/core/HydrusController.py", line 200, in _Read result = self.db.Read( action, *args, **kwargs ) File "/opt/hydrus/hydrus/core/HydrusDB.py", line 919, in Read return job.GetResult() File "/opt/hydrus/hydrus/core/HydrusData.py", line 2069, in GetResult raise e hydrus.core.HydrusExceptions.DBException: TagSizeException: "::" tag seems not valid--when cleaned, it ends up with zero size! Database Traceback (most recent call last): File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 346, in GetTagId HydrusTags.CheckTagNotEmpty( clean_tag ) File "/opt/hydrus/hydrus/core/HydrusTags.py", line 186, in CheckTagNotEmpty raise HydrusExceptions.TagSizeException( 'Received a zero-length tag!' ) hydrus.core.HydrusExceptions.TagSizeException: Received a zero-length tag! During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/hydrus/hydrus/core/HydrusDB.py", line 602, in _ProcessJob result = self._Read( action, *args, **kwargs ) File "/opt/hydrus/hydrus/client/db/ClientDB.py", line 6353, in _Read elif action == 'related_tags': result = self._GetRelatedTags( *args, **kwargs ) File "/opt/hydrus/hydrus/client/db/ClientDB.py", line 3793, in _GetRelatedTags search_tag_ids_to_search_tags = self.modules_tags_local_cache.GetTagIdsToTags( tags = search_tags ) File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 377, in GetTagIdsToTags tag_ids_to_tags = { self.GetTagId( tag ) : tag for tag in tags } File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 377, in <dictcomp> tag_ids_to_tags = { self.GetTagId( tag ) : tag for tag in tags } File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 350, in GetTagId raise HydrusExceptions.TagSizeException( '"{}" tag seems not valid--when cleaned, it ends up with zero size!'.format( tag ) ) hydrus.core.HydrusExceptions.TagSizeException: "::" tag seems not valid--when cleaned, it ends up with zero size! Database Traceback (most recent call last): File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 346, in GetTagId HydrusTags.CheckTagNotEmpty( clean_tag ) File "/opt/hydrus/hydrus/core/HydrusTags.py", line 186, in CheckTagNotEmpty
[Expand Post] raise HydrusExceptions.TagSizeException( 'Received a zero-length tag!' ) hydrus.core.HydrusExceptions.TagSizeException: Received a zero-length tag! During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/hydrus/hydrus/core/HydrusDB.py", line 602, in _ProcessJob result = self._Read( action, *args, **kwargs ) File "/opt/hydrus/hydrus/client/db/ClientDB.py", line 6353, in _Read elif action == 'related_tags': result = self._GetRelatedTags( *args, **kwargs ) File "/opt/hydrus/hydrus/client/db/ClientDB.py", line 3793, in _GetRelatedTags search_tag_ids_to_search_tags = self.modules_tags_local_cache.GetTagIdsToTags( tags = search_tags ) File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 377, in GetTagIdsToTags tag_ids_to_tags = { self.GetTagId( tag ) : tag for tag in tags } File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 377, in <dictcomp> tag_ids_to_tags = { self.GetTagId( tag ) : tag for tag in tags } File "/opt/hydrus/hydrus/client/db/ClientDBDefinitionsCache.py", line 350, in GetTagId raise HydrusExceptions.TagSizeException( '"{}" tag seems not valid--when cleaned, it ends up with zero size!'.format( tag ) ) hydrus.core.HydrusExceptions.TagSizeException: "::" tag seems not valid--when cleaned, it ends up with zero size! In addition I've noticed some strangeness with emoticon style tags, see my pic. I'm not sure if the &gt;:* tags are parse errors or tag presentation errors, or tag storage errors but I thought it may be worth bringing to attention.
>>19465 Just whitelist the specific artist. At least that's how I think the whitelist works. Anything with that tag will be downloaded regardless of the blacklist.
>>19466 Lol! You're right!
>>19456 I don't see any of my files in the place it's pointing to.
I know there's been some talk of a "difference" viewer in the context of duplicate images. This may be of interest, imagemagick can make these on the fly. >magick compare -metric ae file1.jpg file2.jpg diff.jpg (or /dev/null for just the metric) Pic 1 is from comparing pic 2 and 3, it also prints a number of how different they are.
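A small Python wrapper around that command, in case anyone wants to run it over a batch of pairs; it assumes ImageMagick 7's 'magick' binary is on PATH, and the file paths are placeholders:

import subprocess

def pixel_diff(file_a, file_b, diff_out='diff.png'):
    # 'compare -metric ae' writes the absolute-error count (number of differing pixels) to stderr.
    result = subprocess.run(['magick', 'compare', '-metric', 'ae', file_a, file_b, diff_out],
                            capture_output=True, text=True)
    return result.stderr.strip()

print(pixel_diff('file1.jpg', 'file2.jpg'))  # '0' means pixel-for-pixel identical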
Hey hydev, have you heard of the new parallel capabilities expected to land in Python 3.12? Nothing's concrete yet but I'm excited for the possibilities. https://github.com/faster-cpython/ideas/wiki/Python-3.12-Goals#multi-threaded-parallelism
>>19467 >video working properly half the time I wonder if it has anything to do with this: >>19447 I've had something similar happen to me as well with the mouse controls, also in Linux from source.
Would it be possible to add a select->files WITHOUT "tag" option to the selection tags dialog? At the moment, when I tag ratings, I have to select all the files with a certain tag from the selection tags list, tag those, and then use select->"not selected" to get at the files without the tag I chose first. If there's an easier way to do this instead, that'd be cool too.
>>19448 Is it possible to add URLs based on filename/path only? The "add tags/urls with import" window looks like it can only do tags based on filename. Say I download something from kemono party with gallery-dl, and it has the service/user ID/post ID in the path. Could I extract those and put them together into a URL when importing the folder manually (no JSON)?
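Not an answer on the hydrus side, but as an illustration of the transformation you'd want once such a pipeline exists; the folder layout and URL pattern below are assumptions based on typical gallery-dl kemono output, so adjust them to whatever your paths actually look like:

import re

def url_from_path(path):
    # Assumed layout: .../kemono.../<service>/<user id>/<post id>_<title>/<file>
    m = re.search(r'/kemono[^/]*/(?P<service>[^/]+)/(?P<user>\d+)/(?P<post>\d+)', path)
    if m is None:
        return None
    return 'https://kemono.party/{service}/user/{user}/post/{post}'.format(**m.groupdict())

print(url_from_path('/downloads/kemono/patreon/12345/67890_some_post/01.png'))
# https://kemono.party/patreon/user/12345/post/67890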
>>19450
>Was the ability to save and reuse json importers added recently or am I misremembering?
Yeah, when you see the list of importers, there should be export/import/duplicate buttons. You have to copy/paste to a notepad or something for now, but I'd like to add a star button too, for a 'favourites' system so you can properly load/save common stuff.

>>19451 Thanks, I'll fix this! I think what's happening is the UI isn't updating itself when the options change. Seems to work ok after a client restart for now, but I'll figure out what I have to here. Let me know if you run into any more trouble.

>>19454 I don't know much about the AUR package, but when hydrus runs, hit up file->open and you can get to the install and database dir. Everything is under those two.

>>19455 tl;dr: if you have other downloaders working, wait. if you have no other downloaders working, restart the client.
Because of some bad decisions when I first made the downloader, I have to bottleneck the number of 'actually running' downloaders at any one time. I think the limit is about five or ten file downloaders and five or ten gallery searchers. Those are the ones you see listed as 'working'. Everything else that has work to do will be 'pending' until the 'working' have cleared. If you don't have any 'working', the system may have got fucked up, so try restarting the client. Sorry for the trouble, the next big rewrite of the downloader will not have this problem, and it should support hundreds of downloaders working and giving good status updates at once.

>>19457 I don't have any expertise in this, but I think this is the article you want: https://wiki.tnonline.net/w/Blog/SQLite_Performance_on_Btrfs
I use WAL, so the answer seems like yes it is ok!? If you try it, let me know how it goes.
>>19458 Same answer as for other formats: I'm broadly agnostic in the debates over which is better, so I am willing to support anything that isn't a meme. My main problem is I don't have time to write an image renderer, so there needs to be easy, reliable, multiplatform support for the thing. Practically, that means 'if OpenCV or PIL add it, I can add it in a day'. I have heard ffmpeg can do some weird image formats, so I'll probably write a pipeline that can get that to do pictures too, so I'll soon be able to add that to the list. I'm also thinking of doing the same thing for ImageMagick, which will give us all sorts of cool tech for PSD et al. I believe a jpegxl plugin is being tested for PIL right now, and is available on some github repo somewhere for preview. That's too early for me to integrate it properly I think, but I assume it will be rolled in to the main package in the coming year or so, unless the corpos really succeed at killing the format. AVIF is the same deal--if you spot support being added somewhere big and real, let me know and I'll check it out.

>>19459 Can you send me an example link that produces this, so I can play around with it? I'm not totally sure, but I think JSON is supposed to always be utf-8, by definition, and ever since python 3 we haven't really had these unicode conversion problems. Sounds like I am messing up somewhere in note handling code. If I put that text into an example parser, pic related, it seems to go ok at that stage. When you are looking in the parser edit windows, do things render ok there but fail once it actually parses to a note for real, or does it fail at every stage for you? If it is the latter, it might be that the actual site is providing a borked content-type header or something with the json file and that is messing things up on my end, so the actual URL would help, if it isn't private.

>>19461 For doujin cbrs that have nice paged content in them, I recommend ComicRack! I'd love for hydrus to be good at this stuff, and I have a lot of plans, but it absolutely is not there yet.

>>19464 Sure, I'll see what I can do!
>>19465 I'm afraid that when the blacklist applies, that's a long way from when the initial search happens, and the file import job just doesn't know what words initially created it. I may link these things up one day, but for now I'm afraid it would be super awkward. What I can do, which I hope will help, is figure out some sort of 'favourites' system for the file, tag, and note import options, so you can save an explicit 'pokemon-ok artist' tag import options and load it up as needed in three clicks instead of having to dig into the tag filter edit window and all that shit. I don't like how the various import options are 'completely customise everything or load up the default', I want a flexible middle ground.

>>19466 Thank you! I messed it up everywhere, turns out. I think it happened because I had been thinking about updating my script here last Wed, and I was looking at it, thinking how to make it work better, and must have then just forgotten to run it with the new number. Funny, because I have an explicit check every week to make sure I copied an updated URL list into my post template with the right numbers and platforms, and I completely glazed over the number.

>>19467 Thank you, this is very interesting! As >>19474 says, this is closely related to >>19447 . What's happening is one of your mpv vids is not initialising with mouse/keyboard events or something in Qt, and then I am loading that vid window to you half the time. Since you have mpv 1.109: I was talking to an Arch dude today who had libmpv2. MPV have been rolling out version 2, which is a big change, to lots of places recently, so I'd say your first and best bet is to see if you can update. It might still be called libmpv1, or libmpv, or libmpv2 in your package manager. If you can get it, try it out, and then you may need to reinstall your hydrus venv and ensure you get a newer version of python-mpv, which my venv setup script should offer a choice about.

Failing that for whatever reason, it is a shame this is a 1/15 chance since it makes repeated testing tricky, but I would be interested to know if you can cause this more often in different mpv-creation situations. When it happens again, I would like to know if you recognise it initially happening (i.e. the first instance you notice of it breaking) in the preview viewer or the media viewer. Maybe it always breaks when the second mpv window is created for the media viewer, or maybe it is the initial one made for the preview viewer that breaks. As I said in my above post, the client usually has two mpvs, but you can force it to have more by leaving a media viewer open and then going back to the main gui. Open two media viewers and then click on a thumbnail, and now you have three mpvs open. Start the program and middle-click on a video without selecting it, and you'll create your first mpv window in the media viewer, not the preview viewer. Might be you can learn some more info playing around with that. Also, when it breaks again, maybe you can fiddle with things and put the broken mpv at the 'back of the pool' by opening three mpv viewers at once and then closing the broken media viewer first. I'm still planning to add some mpv debug options soon for the guy above, so you might like to keep a watch for them too. Let me know how you get on!
>>19468 Shit, thank you! I'll fix this problem with the '::'. The &gt; isn't necessarily my end (it isn't a storage/display issue), but I think I can provide some parsing tools to help fix it, if I don't already. &gt; is an html encoding thing, but it doesn't come up much because afaik my html parsing library, bs4, sorts it all out for me--do you happen to know where these tags came from, can you give me some URLs so I can test on my end? It might ultimately be a job for tag siblings though. I know the PTR jannies are looking at undoing a 'face:' namespace that was needed to render '>:|' tags, since before you needed the '>' namespace, and now you can render it as ':>:|', so there might be a couple more weird bumps revealed here.

>>19472 Yo, this is awesome! I have wanted this sort of comparison tech for a while, and I've been planning ImageMagick integration, so this could be great. I'll save this for later.

>>19473 This is the first I've seen. I am very glad they have figured out the GIL--it has always been a pain. It will be fun in a few years as various hacky multithreaded scripts across the science and Ops world get rolled into py 3.12 and suddenly eat 100% CPU instead of 12.5%, ha ha ha. I've been adding support for py 3.11 just recently, although I'm not sure I'm done. I shouldn't think I'll be releasing the hydrus builds in 3.12 for another, what, three years probably, but if this is truly plug and play I'll be really interested to hear how people get on. The main bottlenecks in hydrus are still my shit code, but when you need to generate phashes on parallel import or render previous/next images with OpenCV on a thread, this could go bonkers.

>>19475 Sure, interesting idea!

>>19476 That's a good idea, and I think I'd like to add something like this eventually. My current push is on the sidecars system, which is in the first tab of that window. I'd like to add notes to it this week. As that system matures, and I get more nice clear pipelines for setting notes, ratings, modify times, whatever, I want to completely overhaul the 'filenames to tags' panels so they use all the String Converter parsing tech I've added to the downloader system over the years, replacing all the hardcoded regex crap I have currently, and then, as you say, I could figure out 'parse this number, add the URL prefix to it, then send that string to known URLs'. It'll take some time unfortunately, but don't let me forget. I really do want to overhaul filename tagging to something more flexible--while also preserving the easy checkboxes so you can say 'add second directory as tag' etc...
>>19480 Oh, should have mentioned for the imagemagick command, you need to use NUL in place of /dev/null on Windows.
(51.14 KB 1067x800 Capture1.PNG)

(49.96 KB 1067x803 Capture2.PNG)

>>19478 >If I put that text into an example parser, pic related, it seems to go ok at that stage. When you are looking in the parser edit windows, do things render ok there but fail once it actually parses to a note for real, or does it fail at every stage for you? I think I've discovered the problem. The JSON formula is within a COMPOUND formula. It looks fine in the JSON formula (pic related 1) but at the COMPOUND formula level (pic related 2), it's messed up. Some kind of issue with going between JSON and COMPOUND? >Can you send me an example link that produces this, so I can play around with it? Full disclosure: I'm the deviant art guy again. I've recently learned that stories on deviant art that are locked behind a log in screen actually have their text available in the html json. So I was trying out parsing it. Of course, there's obviously no image to be downloaded along with the text, so this note parsing is purely hypothetical - there's no file to attach it to.
Is it possible to delete unused tags? When I'm tagging stuff, typos I've made show up in the suggestions even when they're not attached to any files (for example, 1girls shows up when I type 1girl).
Hydrus shredded my soundposts, which came as a complete surprise to me since the documentation only said that
>The client discards [...] filenames
Jokes aside, has someone come up with a solution for this?
Not sure if this is a common request, but are there plans for an option to extract and import the contents of downloaded archive files? This would help with sites like kemono party which often have a few preview images and an archive in a post. You can do it manually now, but it's a pain and you also have to manually enter urls for the extracted files.
Is there a way to make one subscription to have higher priority than another? I'm trying to make certain subscriptions always start before others.
Can Mr. Bones give statistics on each file domain? Right now he just shows All My Files (Archive/Inbox) and Deleted.
Is there a way to go from e.g. folder_1:string to character:string for selected files? I don't know if there's a way to completely rename namespaces but that wouldn't work either because folder_1 doesn't necessarily mean character in all cases
I had a good week working on some slightly advanced, more technical things. I added http header management to the Client API, added file notes to import and export sidecars, and fixed some bugs. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=2AjrLci_AL8

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v522/Hydrus.Network.522.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v522/Hydrus.Network.522.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v522/Hydrus.Network.522.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v522/Hydrus.Network.522.-.Linux.-.Executable.tar.gz

I had a good week mostly working on technical things for advanced users. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

notes in sidecars

You can now import and export file notes with sidecars! There's a technical compromise I had to make here, which is that a note in a sidecar comes in the form 'name: text', both for import and export. If you want to import a whole bunch of notes, you'll need to wangle them into this format, or, if you can, use string processors to get them into that format. There's an escape sequence, ':\ ', for if your name has a colon in it, too. Have a play with it and you'll see how it works. The changelog and a new section in the sidecars help talk about it more, including issues about multi-paragraph notes. I suspect I'll be revisiting this, so let me know how it works for you!

neat user submission

This is advanced. A user contributed two cool new things for the parsing system:
First, there's a new content parser type, 'http headers', which lets you parse something to be submitted in all subsequent URLs created by the current job. It should let us figure out some unusual tokens and pseudo-login issues.
Second, there's a new String Conversion type that lets you calculate the hex hash of any text for the normal hydrus suite of hashes--md5, sha1, sha256, sha512.

more http header stuff

This is advanced. On my end, I added full custom http header management to the Client API this week. Basically anything global/domain related that you see under network->data->review http headers can be done via the API now. This has been a long time coming, and I am glad it is finally out the door. It should allow for setting up some complex site access via the API.

next week

Next week is a 'medium size job' week. I'm not sure what I want to do. I've been thinking of a complete pass to let us edit timestamps, but there's a million other things on my list, so I'll think about it. I'm still a little busy with some IRL stuff, so it might end up being a two-week release.
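For anyone wondering what that hash conversion does in plain terms, it is the standard hex digest; in Python it would look like this (standard library only, not the hydrus String Converter itself):

import hashlib

text = 'example token'
for algo in ('md5', 'sha1', 'sha256', 'sha512'):
    print(algo, hashlib.new(algo, text.encode('utf-8')).hexdigest())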
>>19490
>The changelog and a new section in the sidecars help talks about it more, including issues about multi-paragraph notes. I suspect I'll be revisiting this, so let me know how it works for you!
I really appreciate the effort and I hope you will come out with multi-line notes soon. As a prototype it looks promising. I don't know shit about Python, but in C++ the way to break lines is "<< endl" from the Standard Library. Granted, this is for the output stream and what you need is for the input stream. So, it is just an idea.
ability to have a hydrus db that is only symlinks and that leaves my media in its place when? czkawka doesn't really take that long to run, and having a scan performed once a file is found out of place--first via file name and then again via hash--shouldn't be that hard if hydrus stored in its db the hash of all the symlink targets for the purpose of repairing broken links.
>do it yourself
i can't, i'm an idiot that barely knows python, let alone how to take on a project as huge as this
How do I automatically add a tag to everything downloaded from a certain site? I want to download posts from e6AI, but I want them to be tagged "ai_generated" because I don't want to mix up AI generated stuff with everything else.
>>19493 Did you make a downloader? Mind sharing it? Assuming you did make a downloader for that site, you should be able to do something like "network > downloaders > manage default import options > e6ai file page", change to custom tag import options, then add "ai generated" or what have you under additional tags.
any updates on fixing tumblr parsing? almost every file i try to grab from tumblr (whether subdomain or subdomain) 404s because tumblr does some sneaky shit in which it redirects you to a webpage with the image on it instead of the image itself. thanks in advance
>>19495 meant to say subdomain or base tumblr domain, sorry
In v522, it seems that sending a file to the trash from the media page doesn't set a deletion reason, even when you click to set one. It just lists it as "Deleted from media page". I tried undeleting and redeleting a file that was deleted in the archive/delete filter, and it kept the "deleted from archive/delete filter" reason, even though I did try to set one when I sent it to the trash again. Something's keeping the reason from actually being set.
>>19497 I just tested in the duplicate filter. It also doesn't properly set the reason there when I delete using the dialog box. It just uses the default reason no matter what, as if I'm not even using the advanced deletion dialog.
>>19482 Great, thanks, I'll figure it out!

>>19483 I recently made it so more suggestions turn up in autocomplete, and this has exposed that my tag search caches keep some cruft (with count 0) they aren't supposed to. I'm going to look at this problem, since those tags are supposed to already be 'deleted'.

>>19484 Sorry, not sure I understand.

>>19485 Yeah, I'd love to have better archive tech. I want browseable CBR/CBZ support for comics, and for that I'm going to have to write some 'look inside this archive' tech. Then I'll be able to add right-click options like 'extract this archive and import all its contents', along perhaps with some dupe tech to, like in your situation, copy urls to all the importees etc.... This will be a big job though, so I can't say when it will happen.

>>19486 Try options->downloading, where you can set it so subscriptions always go in name order (vs random order). Then try naming your subs '01 - cool babes', '02 - sonic fanart' etc... Let me know if it doesn't work nicely.
>>19485 I would love for this to be automated so much. Especially if it can automatically do things like set the filename tag of the archive as the title tag of the contents and things like that. Dealing with importing the contents of archive files in hydrus is one of the biggest pain points for me currently.
>>19487 Yeah, I want to hang a normal search context off Mr Bones and the File History chart. Then you'll be able to do their stats on any search you can think of. Not sure when it will happen, but I keep wanting to do it myself, so it won't get forgotten!

>>19488 Not yet, but I have a couple plans to add it in one way or another. I do want full 'namespace sibling' support for complete renames, but your situation keeps coming up too, where it would just be nice to have a quick rename job on a subset. I think I might integrate a regex tag converter into tags->tag migration or something similar. Do you have any ideas about how you would like this to work? What's the workflow in terms of clicks, keypresses, and UI?

>>19491 Ah, the problem with multi-line notes is that .txt sidecars in hydrus use the newline character as the 'separator', so for tags you might have a document that looks like this:
skirt
bikini
blue eyes
Which works for single-line content, but if your content validly contains newline characters, then where's the break? Either you can pick a crazy separator character/string like '||||', or just use JSON, where the newlines are safely encoded and it isn't an issue. It is like using commas in CSV content. I guess there's a way to do it, making them backslash-escaped or something, but it just provides an extra layer of trickery, and I threw this together, hacking multi-line content into a system designed for smaller simple strings like tags and URLs. Same deal as the 'name: text' solution. If you end up using this, let me know what does and doesn't work, and how you'd like it to work in future!

>>19492 Unfortunately, I think the answer is most likely 'never'. I vacillate on the idea, but in the end, the use case is so small vs the increase in complexity that I just will never get around to it. There's also a philosophical component to it--if you have too many files to manage via filenames, then why keep filenames? If you only need it for a few critical files, then have two copies (export folders can also help here). Here's my longer related thoughts:
https://hydrusnetwork.github.io/hydrus/faq.html#filenames
https://hydrusnetwork.github.io/hydrus/faq.html#external_files
But I'm open to suggestion. There's a bunch of situations, like paged comics, where hydrus's storage sucks for any external program to use. Can you talk about your workflow, where it helps to keep filenames/original locations? How many files are you tracking?
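To make the separator point concrete, here is the difference in miniature; the key layout is purely illustrative and not hydrus's actual sidecar schema:

import json

notes = {
    'translation': 'First paragraph.\n\nSecond paragraph, still the same note.',
    'source commentary': 'A single-line note.',
}

# JSON escapes the embedded newlines as \n, so each note survives a round trip intact...
print(json.dumps(notes, indent=4))

# ...whereas a newline-separated .txt sidecar cannot tell a line break inside a note
# apart from the break between two notes.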
>>19495 >>19496 I'm afraid I haven't worked on it. If I remember right, tumblr's open API is limited, so some things were just a pain in the ass to fix. Can you give me some example URLs so I can test my end?

>>19497 >>19498 Thank you, someone else said similar; something is busted. I will check it out for v523. I'm sorry for the trouble--I think it has been happening since v521, when I 'fixed' file deletion reasons overwriting too much. That whole dialog is shit behind the scenes, I need to overhaul it.

>>19500 Thanks, great idea.
>>19495 use gallery-dl
>>19501
>Can you talk about your workflow, where it helps to keep filenames/original locations?
Different anon here. I'm toying with the idea of expanding my storage capacity with a second HD in order to keep a second set of files (mirroring the ones stored in Hydrus). The reason is that folders have the ideal functionality of "discovery": with just a quick view I can see all the topics (subfolders) stored in a given folder, which reminds me of old items forgotten inside. With Hydrus nothing of that is possible, and a lot of items will remain forever buried and impossible to find because of inadequate tagging. Take into account that tagging is very time expensive, and most of the time items get into the database without a thorough tagging process, with only 1 or 2 tags denoting a general category. This flawed and insufficient method is supplemented by entering the file name as a tag, which helps a lot in zeroing in on what I'm looking for. A particular characteristic of my workflow is that Hydrus is used 100% off-line and all files are processed manually, one by one, with a ton of PDFs and HTMLs in the pipeline (HTMLs are stored by means of screenshots while the originals are kept outside Hydrus).
Made a parser for deviant art's sta.sh domain. Technical notes:
-"post page can produce multiple files" is checked because sta.sh urls can lead to single files or to folders. Examples: (NSFW)
https://sta.sh/012cool3spj1
https://sta.sh/21mzfoqarjo6
-All the sta.sh urls that I could find always have an original size deviant art url available. I'd be curious to see one that only has a resize available.
>>19505 I've just realized that every sta.sh folder I could find has a 2 at the start of the id, and every file has a 0 at the start of the id. I wonder if that's always true, and I wonder if other numbers have other meanings...
I have two Hydrus databases, one of which works like it always has, but one of them now gets write locked on boot. The tooltip says "maintain_hashed_serialisables" -- do you know why that might be?
Anyone working on a tagger for Hydrus that uses this? https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags I know there is hydrus-dd, but this model is so much better at detecting parts of a picture, and especially at identifying loli. It would be so great to be able to hook into this with hydrus, and have it scan all the pics in the database, tagging them. If someone does have the skill, and decides to make something for this, please also give the option to keep the original tags in the database, only adding to them. Thanks!!
>>19499 About >>19484 (me)
Soundposts are a form of embedding sound with an image, webm, gif etc.; it's a workaround for boards that don't support audio. It requires the user to install a userscript like https://greasyfork.org/en/scripts/402682-4chan-sounds-player, and the audio file link is part of the filename. Since Hydrus removes filenames, the soundposts are just images now. I was wondering whether there's a way to preserve it.
>>19509 You should be able to tick a box when importing to add the filename to a "filename:" namespace. You could probably also do some regex to add just the link as a separate namespace, but I don't think you can go from filename -> URL at the moment. Reimporting a file doesn't overwrite anything, just adds on to it, so you can reimport the file with the filename box ticked and it should just add the filename.
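For reference, soundpost filenames usually embed the audio link as a percent-encoded URL inside a [sound=...] block; assuming that convention, pulling the link back out looks something like this (a sketch, not an existing hydrus feature):

import re
from urllib.parse import unquote

def extract_sound_url(filename):
    # Assumed soundpost convention: 'title[sound=files.catbox.moe%2Fabc123.mp3].webm'
    m = re.search(r'\[sound=([^\]]+)\]', filename, re.IGNORECASE)
    if m is None:
        return None
    url = unquote(m.group(1))
    # Many soundposts omit the scheme entirely.
    return url if url.startswith('http') else 'https://' + url

print(extract_sound_url('dance[sound=files.catbox.moe%2Fabc123.mp3].webm'))
# https://files.catbox.moe/abc123.mp3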
(58.06 KB 1280x720 idea.jpg)

>>19504
>The reason is that folders has the ideal functionality of "discovery" and just with a quick view I can see all topics (subfolders) stored in a given folder and remind me of old items forgotten inside.
I'm thinking about how that might be translated into hydrus without annoying devanon with symlinks, and it might be as follows: show some namespaces in the thumbnails main panel with a folder icon. These namespaces would be the top parents, and clicking on these folder icons would show the children namespaces (or folder icons representing them); like a hierarchical folder paradigm, but pointing at namespaces instead of real folders. In this way there is an actual way to find ALL files by following the hierarchical breadcrumbs. Another thing I believe would be super based if implemented is to drag and drop the imported files' thumbnails into those namespaces (aka folders) for a fast and practical tagging methodology. Then a typical Hydrus search could be, for example, to switch to folder view to show main topics in a semi-hierarchical sorting and from there dig for subtopics. Given that children can have more than one parent and vice versa, the possibilities for different search paths are many.
>>19510 Understood, I'll try that. Thank you!
>>19510 >>19512 note that hydrus fully lowercases everything so if your soundpost link is case-sensitive it won't work.
>>19365 this is still an issue. Any update? my subscriptions still show archived files, even though I set quiet contexts to only show inboxed files.
I've tried reading previous posts to find an answer, but either I'm not searching right or not looking in the right place. I want to set up a peer to peer connection with a friend where they can access and edit my Hydrus database from their own Hydrus install on their computer. I checked the server page on the main website, but all it provided was just an app or a browser solution. I just need my friend to be able to get on Hydrus, add/delete/edit images, and have any of those changes be synced to my end and vice versa when I make changes.
>>19510 All right, the filenames have made it into the tag namespace, and I figured out how to have the content of the namespace used as filename via the export dialogue. Is there a way to make it a permanent setting? Like, if I drag a file out and it has a tag of the filename namespace, use that, else use hash? I would prefer to not have to use the export dialogue every time.
I've been head-down on a rewrite of the timestamp management system (for editing import, archived, modified etc.. times), and while the work has gone well, I am not done yet. I only have some simple UI for editing a handful of timestamps, but I want full editing of all of them, and ideally sidecar and API support too. Therefore, there will be no release tomorrow--I'll work instead. v523 should be on the 12th of April. Thanks everyone!
>>19511 Basically this parent-child way of searching is already implemented; what is missing is the Qt sorcery to give the illusion of "folders" and a way to mark some namespaces as the topmost parents in order to use them as hierarchical entry points in any given search.
>>19518 >hierarchical entry points I'm thinking of ordered comic pages.
>>19513 I had forgotten about that. Is the lowercasing just of what is displayed, or are the tags strictly lowercase-only in the database?
Is there any way to perform 2 gallery searches, then only download the files that appear in both searches?
i wanted to set up a hydrus instance on a remote server and access it via the client api, but i ran into an issue: when i start up the "server" executable, only the admin service starts. i need the client api service, but the client executable won't run on my headless server. any way to get the client api service to start with the server?
>>19522 update: least retarded solution i found was to vnc into the app with >ssh -L 5900:localhost:5900 user@remote >QT_QPA_PLATFORM="vnc" ./client might be nice to add to the docs somewhere :)
>>19501 >if you have too many files to manage via filenames, then why keep filenames? Because filenames often contain useful information that is automatically shared with others about the file when you post it, such as a joke, artist, source material, character names, et cetera. Also, since many ordered sets of files are ordered by their filename, retaining the filenames as "filename:" tags by default and then sorting by the filename: namespace makes keeping sets ordered very simple. >>19504 >With Hydrus nothing of that is possible and a lot of items will remain forever buried and impossible to find because of inadequate tagging. Inadequate tagging of your personal files is your own fault. Embrace tagging autism. >folders has the ideal functionality of "discovery" and just with a quick view I can see all topics (subfolders) stored in a given folder This can easily be implemented with a "related tags search" function that pulls up tags commonly associated with whatever tag or tags you've already entered. Related tags is already an implemented function, but right now it's only used for entering new tags as far as I know. This idea should be fielded to Hydev if he hasn't already implemented it since I last updated.
>>19504 >most of the time most items get into the database without a thorough tagging process, and with only 1 or 2 tags denoting a general category, this flawed and insufficient method is
That is equivalent to you not properly sorting by subfolder. If you're only putting a few general tags on something, you've pretty much already duplicated the utility of folders. It's up to you to go beyond that with personal effort unless you want to use the PTR.
If cbz/similar support gets added, will there be the ability to parse metadata (title, author, etc) from those files that have it? Apparently there are several ways to do it, some use specially formatted text files within the zip and others embed the metadata as a zip file comment.
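For what it's worth, here is a rough sketch of reading both metadata styles in Python; the ComicInfo.xml field names follow the common ComicRack-style convention and are an assumption here, as is the XML file sitting at the root of the archive:
import xml.etree.ElementTree as ET
import zipfile

def read_cbz_metadata(path):
    metadata = {}
    with zipfile.ZipFile(path) as zf:
        # some tools store metadata as the archive-level zip comment
        if zf.comment:
            metadata['zip_comment'] = zf.comment.decode('utf-8', errors='replace')
        # others ship a specially formatted XML file inside the archive
        names = {name.lower(): name for name in zf.namelist()}
        if 'comicinfo.xml' in names:
            root = ET.fromstring(zf.read(names['comicinfo.xml']))
            for field in ('Title', 'Writer', 'Series', 'Number'):
                element = root.find(field)
                if element is not None and element.text:
                    metadata[field.lower()] = element.text
    return metadata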
Is there an easy way to run an external command on a file in the GUI? I'd like to be able to use some keybinding or menu option to run a command on the file(s) selected. My specific use case is in Qubes; each VM has its own storage, and there is a command (qvm-copy) that allows the user to copy specified files into another VM. I have an offline VM running Hydrus, and occasionally would like to copy images from it to other VMs. I don't want to ask for a Qubes-specific menu/keybind option, but since this is literally just running one command on a file/list of files I feel like it shouldn't be too much to ask for. Maybe big red text saying that you'll break shit if the command changes the file would be a good idea too. Kind of like "open in external program" except it uses a specified program no matter the file type (and doesn't interfere with the existing 'open in external program'). I tried setting the "open in web browser" custom command to `qvm-copy "%path"` but it does nothing. Setting default external program does work with qvm-copy though.
Dumb question about json sidecar parsing. I'm trying to parse something that has multiple fields for different kinds of tags. So there's something like tags_characters and tags_general. I can figure out how to get one set but not multiple. If I try one and then the other, I think it tries to look for the second inside the first instead of getting both from the same level and it shows 0 results. If I remember right for a downloader you would make it a compound and then do multiple json parsers, but I don't see a way to do that with sidecars. Anyone know how to do this?
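For reference, the shape being described is roughly this, with both tag lists as siblings at the top level rather than one nested inside the other (a made-up example, not a real sidecar):
import json

example = '''
{
    "tags_general": ["1girl", "smile"],
    "tags_characters": ["samus aran"]
}
'''
data = json.loads(example)
# each list needs its own lookup starting from the root of the document
print(data['tags_general'] + data['tags_characters'])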
I'm making a downloader for a poorly designed site. They have the file's modified date, but only as a string like "September 5, 2015". Hydrus doesn't like that, giving "source time: could not convert to integer". I tried some clunky regex to change that to 2015-09-05, but it still didn't work. I also tried appending 00:00:00 to no avail. Does it expect epoch seconds? It would be nice if there was a datetime string converter under "timestamp type", where you could put for example '%B %d, %Y' and have it be converted to whatever is used under the hood automatically. This site in particular has some dates that would break a simple setup like that though, they don't pad days of the month with zeroes and I've seen one where the space was missing between the comma and the year. Maybe regex in the time format would work, something like '%B %d,\s?%y'.
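Conceptually the conversion needed is just 'date string in, unix timestamp out', since hydrus stores times as unix timestamps; here is a minimal sketch, with the tolerant comma/space handling an assumption for illustration only:
import re
import time
from datetime import datetime

def to_epoch(date_string):
    # 'September 5,2015' and 'September 5, 2015' both become 'September 5, 2015'
    cleaned = re.sub(r',\s*', ', ', date_string.strip())
    parsed = datetime.strptime(cleaned, '%B %d, %Y')   # %d also accepts unpadded days like '5'
    return int(time.mktime(parsed.timetuple()))

print(to_epoch('September 5, 2015'))
As the reply below notes, hydrus exposes this kind of conversion as a string converter step rather than under 'timestamp type'.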
Can someone help me with hydrus companion setup? >Navigate to your client API under services > review services >Click add > manually on the client API Where is the "add" option in the client? I simply enter the API key in companion and it gives an error >Something went wrong while querying Hydrus API permissions (HTTP status code: 403), check your settings and that Hydrus is running!
>>19524 >Inadequate tagging of your personal files is your own fault. Embrace tagging autism. Point taken if the day had 72 hours or more, but with only 24 it is totally impossible to tag all files. So, a different approach is due.
>>19524 >This can easily be implement with a "related tags search" function that pulls up tags commonly associated with whatever tag or tags you've already entered. And still a hierarchical order is missing. Look at it this way: when using folders we follow bread crumbs (a path) until we get to the wanted file; also, a very important implicit function of folders is to display subtopics (subfolders) and, through them, to REMIND the user of the available choices from which to continue the search. In Hydrus, there is a left column with a lot of tags and namespaces but no practical way to highlight and order them at the top of that column--I mean the namespaces marked as the topmost parents, which should be the natural "entry points" for searching.
>>19529 >It would be nice if there was a datetime string converter under "timestamp type", where you could put for example '%B %d, %Y' and have it be converted to whatever is used under the hood automatically. anon, i............. it's not under the "timestamp type" dropdown, it's an option in a string parser. >>19516 file > options > gui drag and drop export filename pattern
>>19533 Thanks, I hadn't thought to look under string converters for some reason.
>>19507 The hashed serialisables system handles your session pages, and that maintenance task basically goes through all your saved sessions to see which saved pages are no longer referred to by any of the sessions and deletes them. Normally it is pretty quick, but is there any chance you recently deleted a _lot_ of sessions, or some very large sessions might have rolled out of the ten-copies-deep session backup? It takes a little while to delete big data, and a download page with like 200,000 URLs in it could be 400MB. Sorry to say, but the best thing here is probably to just give it time. If it is doing CPU/HDD work in your Task Manager, let it go. If it isn't doing any work at all, or this is clearly in some kind of loop, then it sounds like there may be a bug here and I will be interested in knowing more details. It would be more appropriate to handle that one-on-one, so if it doesn't fix itself in a reasonable amount of time, please hit me up on email or discord. >>19508 I don't know the details, but I know some guys are working on this tech. I'm hoping it all works out neatly and there are public releases, but we'll see. The more the merrier, I think, and if anyone is working on this stuff and needs a little change to the API or something to help them, please let me know. >>19509 Ah right! Thanks. Yeah, this is a little tricky, particularly with the case sensitivity. Most of our imageboard parsers can parse filenames, but you have to wangle the default tag import options under network->downloaders->manage default import options for 'default for watchable urls'. But you'll run into the case sensitivity thing. >>19520 They are all coerced to lowercase in storage and display, there's no way to recover upper case characters. I've never really liked using tags for filenames, just like the 'title:' tag. Now we have the 'file notes' system, which supports any kind of unicode text you can think of, that may be an answer for you, and there's even note support in the parsing system, so you could probably build a parser that looked for soundpost-looking filenames and sent them to a note. But I don't expect you want to dive into the parsing system, and even once it is in the client, this stuff is technical and not so easy to use--I am still making notes nicer to work with (I added sidecar exporting just recently). If you really want to save your soundpost URLs, I think file notes are your best shot, but for now that stuff is best managed manually. Let me know how you get on. It may be best to just store/dupe these files outside of hydrus.
>>19511 >>19518 Yeah, I broadly agree with this. I'd like some nice UI workflow to make folders or booru pools or something like that possible. Folders are useful for certain tasks, and hydrus has been hyperfocused on single file units for too long. When I get around to the 'alternate file' file relationships expansion, I'm hoping to push in this direction. I'd love to say there will be nice mouse-based workflows for editing things, and I really would like better UI for managing file relationships (like dupes) in a more human way, but most of the work at this stage will unfortunately just be a mad scrabble to get the data side working. >>19514 Sorry, I looked at this and just could not figure out what was going on. All the default loud/quiet stuff seems hooked up correctly to me. Sorry if this is a stupid question, but is there any chance that these bad subs have a non-default file import options set? If they have specific import options set, as in their edit panel, that'll overwrite whatever the defaults are, loud or quiet. >>19515 If you are talking about the hydrus file repository when you say server, this may or may not be an appropriate document for you (it was written by some users who got asked a lot about this stuff): https://hydrusnetwork.github.io/hydrus/youDontWantTheServer.html There may be an API solution for you, but most of those only let you view what's in your client--the basic idea being looking at your stuff on your home PC when you are out with your phone--rather than letting others have full add/delete control of what files you have. I am planning on making clients talk to each other and letting one kind of 'possess' another client, but there's no way yet, I don't think. >>19521 I assume you can't combine both searches into one? Most boorus treat a multi-phrase search as an AND search, just like hydrus does. e.g. if you want all the samus aran drawn by splashbrush, you can enter 'samus_aran splashbrush' into a normal booru search and you'll get the intersection of those two searches. If you have more search terms than the site can take, you'd have to write an external script, I think. Run both searches with 'start new importers files paused' (check the cog icon button), then open up the 'file log' of each and ctrl+a->right-click->copy sources. Then have a python script or something that takes two lists of URLs and gives you the intersection back, which you can then paste into a new 'urls downloader' page. >>19522 >>19523 Thank you, I will add this to the help somewhere. I'm not sure if you were actually running the client or the server here, or at what stage, so I'll just spam this anyway, like to the guy above, in case it is appropriate: https://hydrusnetwork.github.io/hydrus/youDontWantTheServer.html I'm sorry for how stupid this is currently, and for how there is no headless client. There's a Docker package, btw, if that helps--I know a couple of guys who do this: https://github.com/hydrusnetwork/hydrus/pkgs/container/hydrus
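The external script hydev describes for the two-search intersection can be tiny; here is a throwaway sketch, assuming you saved each page's copied sources to a text file, one URL per line:
import sys

def load_urls(path):
    with open(path, 'r', encoding='utf-8') as f:
        return {line.strip() for line in f if line.strip()}

# usage: python intersect.py search_a.txt search_b.txt > both.txt
urls_a = load_urls(sys.argv[1])
urls_b = load_urls(sys.argv[2])

for url in sorted(urls_a & urls_b):
    print(url)
The output is the intersection of the two lists, which you can then paste into a new 'urls downloader' page as described.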
>>19524 'related tags' for your current search page is a really interesting idea. I'll have a think about it. For filenames, I thought about adding a 'filenames' local tag service to all clients and just parsing filenames to it automatically, but ultimately I decided against it. I'm still vacillating at times, but, in general, if a user finds filenames important, they can set this up now. And most users do not want to keep this info. Maybe it would make more sense when I have more selective/filterable autocomplete tag results, since I'm pretty sure this spam would shit up most searches right now. >>19526 Sounds like a really nice idea, but I should think the first version of cbz will just be me trying to figure out the basic archive-inspection and multipage-viewer tech. Feels like my sidecars system could plug into this pretty well, though--especially if there is a standard to any of this. >>19527 Not yet, but I have big plans for this, so it will come eventually. The client will get an 'executable manager' in the future where you'll be able to define these for all sorts of stuff, including ffmpeg and waifu2x conversions. The same system could absolutely just do 'open the file in this' and be hooked into the shortcut system. It'll be a lot of work, so I can't promise when it will happen. If you are ok hacking an ugly solution, set up your path in options->external programs as the 'open externally' action, and then just say 'open externally' on the files you want to do this on. >>19528 Does something like pic related work? For parsing from multiple locations, use multiple source sidecars. The second one here uses a String Converter to prepend 'character:', for instance. I don't think you need to do anything super clever. It could just be that my code is bugged, of course. Let me know if you are trying exactly this and it still doesn't work. If you can send me an example JSON and maybe the 'exported' version of the sidecar package too (check the export button on the UI), that'd be great.
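On the qvm-copy question, the 'open externally' hack can be made to fit with a tiny wrapper script; this is a sketch only, assuming hydrus hands the file path to the program as its argument (which the earlier test with the default external program suggests it does):
#!/usr/bin/env python3
# save as e.g. qvm_copy_wrapper.py (hypothetical name), mark it executable,
# and point the 'open externally' path in options->external programs at it
import subprocess
import sys

if len(sys.argv) < 2:
    sys.exit('expected at least one file path')

# qvm-copy prompts dom0 for the destination VM, so no VM name is needed here
subprocess.run(['qvm-copy'] + sys.argv[1:], check=True)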
>>19530 I think he means pic related. Add a new entry there and it'll generate a new API access key you can copy into the thing you want to use the API with.
Can file relationships like alternates be manually set?
>>19539 select files > right click > manage > file relationships > set relationship if it's something you're going to be doing often, you probably want to set a shortcut. it's in the "media actions, either thumbnails or the viewer" section.
What's the fastest way to tag manually? I've imported a bunch of files, opened the archive/delete filter, opened the tag dialogue but I still have to unselect the tag window and select the viewer window to keep/delete the file, be it via mouse or other shortcut. Is there a built-in way of keeping/deleting a file while the tag dialogue remains selected? Not having to alt+tab or click would really speed it up.
I changed to a new PC and am trying to move my hydrus database/files to the new computer from backup. I want the files/database on a secondary drive. I followed the help page to migrate the database location to my secondary drive (https://hydrusnetwork.github.io/hydrus/database_migration.html#intro). But, now I can't import my backup because it says my "database is stored in multiple locations." But there's only 1 location in "migrate database" window - the new location on my secondary drive that I specified. How can I import my backup?
>>18976 I am a huge moron who, frankly, is looking to host his own booru-like system for myself and some friends. Can hydrus help me do that easily?
>>19544 Hydrus can, but it's not really ideal for that. Read this: https://hydrusnetwork.github.io/hydrus/youDontWantTheServer.html Despite the name of the webpage, you might actually want the server.
>>19545 any recommendations for what I should be using instead? I thought of using suzubooru.
I had a great couple of weeks overhauling how the program manages file timestamps. Most of the work was boring behind the scenes stuff, but the upshot is a new dialog now lets you edit every file timestamp--import time, deletion time, archive time, modified time, and last viewed time, and I've added 'system:archived time' for searching. There's also an important bug fix for file deletion reasons and some quality of life improvements. The release should be as normal tomorrow.
Bit of a stupid question: how do I update hydrus without breaking anything?
>>19548 Follow this guide: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#updating The general gist is you are re-extracting or re-installing on top of the existing install. Since hydrus is hacky and technical, I also suggest people always always run a backup before doing an update or anything else big. Then, if something does go wrong, you have a very recent copy you can always roll back to. How to backup is in that same help page--basic idea is just copying your whole install dir to another drive somewhere.
https://www.youtube.com/watch?v=LGs_vGt0MY8 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v523/Hydrus.Network.523.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v523/Hydrus.Network.523.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v523/Hydrus.Network.523.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v523/Hydrus.Network.523.-.Linux.-.Executable.tar.gz I had a great couple of weeks working on file timestamps. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html timestamps Unfortunately, although I did a ton of work here, not much of it is interesting or even visible! Essentially, two weeks ago, the different timestamps your files have--stuff like import time and archived time--were stored all over the place, and now they are stored together. When they are consulted, it now all happens down the same pathway, and I have written a proper update pipeline using this new system. The user upshot is that you can now right-click->manage->times on a file and edit all its timestamps. This means archived, imported, deleted, previously imported (for undelete), file modified, domain modified, and last viewed times. This new dialog is basic and only for single files for now, but we can now see these things any time and edit little problems. If you end up using this dialog a lot, there's a media shortcut action, 'manage file times' for it. I've also added 'system:archived' for the 'system:time' stub. It works just like the other time predicates. I had hoped to integrate this new timestamp manager and update pipeline into the sidecar and Client API systems, but there was so much cleanup to do that I just ran out of time. I'm very glad I did this work, and there's now a much nicer base to work with and extend time tracking in future--like if I want to figure out a 'fill in some good guesses for retroactive archived times', it wouldn't be so much of a hack now--but there is more to do. other highlights All multi-column lists across the program now show ▲ or ▼ on the column header they are currently sorted by. This is something I meant to add for ages; now it is done. All menus across the program now send their commands' longer description text to the main window's status bar. They also show it on tooltips now. These descriptions have been in the program the whole time, but many were difficult or impossible to see. Let's see how annoying the tooltips are--I'll add an option or turn them off if they keep getting in the way. I fixed a stupid bug from v521 that was causing the advanced deletion dialog to always set the default reason. Sorry for the trouble--it was a testing issue! next week A mix of small work, just general catchup. I'd like to also get timestamps working in sidecars and/or the Client API, but we'll see.
(153.41 KB 1920x1080 please.jpg)

>>19550 >timestamps Feature Request!!! Since you are already on it, this might be the best time to patch in a new feature: "User's Timestamp". The reasoning is to mark files with a custom date or date range in order to narrow searches. For example: a) 2020, or 2020-12, or 2020-12-06 b) circa 2006-2011 = from 2006 to 2011 c) the 70s = from 1970 to 1979
>>19550 Not sure if you thought about this before, but adding a secondary sorting option inside tabs would be pretty nice. I know there's a global setting in the options, but I want some tabs to have their own. Maybe add a checkbox to the options that would enable this, so that it wouldn't clutter the UI for people who don't care.
>>19536 >Sorry if this is a stupid question, but is there any chance that these bad subs have a non-default file import options set? I just checked, and they don't. I also confirmed that the loud imports work correctly when I switch them to showing inboxed files only. For some reason, quiet ones just aren't following the settings. I don't know what the problem could be. It's frustrating.
>>19502 sorry for the late reply, but here are some tumblr links for you to figure out eventually: non-subdomain post link: https://www.tumblr.com/that-twink-over-there/711440817034788864 subdomain post link: https://hydrus.tumblr.com/post/144928192954/i-swapped-out-my-server-hardware-this-week-this subdomain 'image' link: https://hydrus.tumblr.com/image/144928192954 "image" url (should redirect you to an html page when clicked; if not then refresh): https://64.media.tumblr.com/c58f13258a64ce7ce4a94b3d5ec5a88b/tumblr_o7r9v6WHS81qznht1o1_500.jpg note that these are only 'image' type tumblr posts. ideally you would also want to have hydrus scrape embedded images in 'text' posts as well
(132.90 KB 1248x1052 zoe quinn.png)

(616.03 KB 460x156 devolution.gif)

(101.19 KB 800x777 old vs new furries.jpg)

>>19537 >'related tags' for your current search page is a really interesting idea. I'll have a think about it.
Thanks. Shouldn't be hard to implement given it's already there for adding tags, so I have high hopes for this feature.
>Filenames
The current option to import filenames as "filename:" tags is already sufficient for those that want to keep them, in my opinion. I still mess around with filenames regularly for easy set ordering, filename jokes, and easy dissemination of image sources when posting online. As for search bloat, it hasn't affected me as of yet. Very few files have the same filename, so most filename tags are just (1)s at the bottom of any given search with all the relevant results sticking to the top. I have about 12.8k files all with filename tags and even entering "filename:*" still brings them all up instantly.
>>19536
>https://hydrusnetwork.github.io/hydrus/youDontWantTheServer.html
>Hydrus Web
>Hydroid
>Lets you view and even manage files from your phone
As in files that are still on my computer? How hard are these to set up? I've wanted to do exactly this, and asked about remote desktop access elsewhere, finding that it's a big security risk if I do it wrong and many remote desktop applications charge you money. If I could just browse, not even edit, my Hydrus files from my phone, that would be amazing.
>>19542
>Not having to alt+tab or click would really speed it up.
F3 instantly closes or opens the tag dialogue. If closing, it will save any tags you've added/removed.
>What's the fastest way to tag manually?
I import 50-300 files at a time grouped by folders I already have organized, and have a workflow of general tags I add to most of the files in chunks, then I add tags to many of the files at once based on the folder they came from, then I go through individual files and flesh out the tags based on more unique traits of the files, archiving each file as I do. My workflow has become a lot more uniform since I hit the porn folders. I process them as follows.
>Lewdness: safe, lewd, explicit (I tag what makes each file explicit, exposure:nipples, cock, anus, pussy, etc., with "explicit" as a parent tag)
>Eros:non-erotic, semi-erotic, erotic (Just because there's nudity or lewd subject matter doesn't mean I find it sexually appealing)
>Character count: 3girls, 2boys, 1futa, many boys, six, seven, nine, etc. (counts that include sex have sex:male, female, etc., as parent tags)
>Character:* & origin:* (who the characters are if necessary and what medium they're from since many franchises are multimedia)
>hair:* (length and styles)
>Colors:Black & white, bluescale, hint of color (this usually goes by very fast. Hint of color is for things that are mostly B&W with some splash of color, and tagged based on what has the splash of color)
>hair:* (colors. For this and eye color I filter out anything with colors:* tags)
>eyes: (colors and unusual pupils)
>Clothes color:*
>Clothes:alternate outfit (for this I filter down to everything that has a character:* tag) & clothes:nude (both of these go by very fast so I group them together)
>Folder specific tags
>Individual tagging
I made the mistake of not realizing how much I wanted clothes color:* tags until I already had 12k files archived. Took me days to go back over them all, but thanks to my existing tags, I was able to quickly filter out tons of files for which clothing color would be irrelevant to me. It's a whole lot of autism.
I archive about 70-80 files a day and should be done in about 4 or 5 more months, which is about a year since I started, but I am manually tagging a personal collection of 8-9 years' worth of personally collected and curated files.
>>19547
>I've added 'system:archived time' for searching.
Very nice.
>>19551
You sure you don't just want to make a "time period:*" namespace? It's what I do, and I think it might be a better option, as sometimes a file may be relevant to multiple times, but not a time period in general. Like this first one I have tagged as 2012, 2014, & 2015. It doesn't cover a particular time period, but instead has posts from several particular dates. Furthermore, for time ranges, like 2010s, you can make parent tags for child dates like 2010, 2017, etc. I still need to do this, but typing in "time period:201*" works just as well for now. Other examples are these comparison images with dates from different decades.
>>19541 Yeah, I have been talking with several people about this recently. Twitter continues to close up. I think it is appropriate to say that hydrus just can't handle this problem very effectively any more, so I need to rethink. My current plan is first to rename the default hydrus twitter downloader to 'sfw only'. Then I'm going to refamiliarise myself with gallery-dl and see how they are handling it. Whether they handle it well or not (their github issues suggest they are having their own difficulties), I know they'll be in a better position than most to keep support going in the future, so I'll write up a paragraph or two for the hydrus help on how to make a gallery-dl hydrus import workflow for difficult sites. In the future, I would like to plug into gallery-dl and yt-dlp, and anything else similar, using an 'external exe manager + pipeline', so hydrus doesn't have to think about these difficult-but-already-solved problems so much. Still in early planning phases, but this stuff is on my mind. For now, I'm using twitter as a place to find cool art, and then I chase up those artists I like on boorus. >>19542 You've maybe seen that I pass page up/page down events up from manage tags to the normal media browser; I will see if I can do similar for the archive/delete commands. Like if manage tags catches an F7, it would be sensible to pass the 'archive' event up to the filter above it, right? I'll see what I can do. One side thing though, I like to assign my most-used tags to some simple shortcut keys like 'd'. Makes it super simple to set 'favourites' to a bunch of thumbnails all at once etc... >>19543 Once you move to a complicated database, I don't trust my ability to write a decent backup system that doesn't fuck up, so I recommend you move to using a proper program like FreeFileSync. Please move your original backup folder to a good new location, somewhat in the same shape as your now migrated install, and then set up a backup like I talk about here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#the_powerful_and_best_way_-_using_an_external_program I presume the 'multiple locations' here are because your database files proper (client*.db files) and your media files (client_files) folder are now in slightly different locations? EDIT: I looked at the code--it specifically gets pissed if your files aren't all stored under '(database_directory)/client_files'.
>>19546 I have absolutely no experience with this, but if you aren't super technical yourself, I recommend you check out one of the free sites that hosts a booru for you, the biggest I'm pretty sure being: https://booru.org/ https://booru.org/create I presume they are all public though, so if you need privacy, I guess you are hosting yourself. I knew a guy a million years ago who said moebooru was great, and I talked to a guy last year who said danbooru was hellishly complicated. But I'm talking from ignorance. >>19551 This is an interesting idea. I've always wanted some rich date searching, particularly stuff like 'show me art from the 17th century'. If you mostly just want to say circa, then I think you can use the 'domain modified dates' already. You don't have to put a web domain in there, I think you can put any string, so if you entered 'created', or 'circa', or whatever as the string, my timestamp manager will calculate a new aggregate modification time, presumably selecting your entered timestamp as the earliest, and present that in-UI right now. system:modified date will work with it and everything. HOWEVER, two problems: 1) setting a clean year/month date, like 2002-01-01 00:00:00, is really fiddly and a pain in the ass, so I should add some buttons to my DateTime edit widget to let you zero out the granularity real fast and have prettier dates, and 2) I use unix timestamps to back it all, so you can't go earlier than 1970 in this system! >>19552 Yeah 100%. I keep meaning to sort this out and then it slips. I agree that there should be a hide/show checkbox or some nicer dynamic UI to handle this--the sort/collect area is already super cluttered, I hate it. >>19554 Thanks, I'll have a look. Can't promise anything, but who knows.
How did you go frame by frame through a video in Hydrus again? Ctrl+Arrow keys makes it jump around.
>>19557 >1) setting a clean year/month date, like 2002-01-01 00:00:00 is really fiddly and pain in the ass, so I should add some buttons to my DateTime edit widget to let you zero out the granularity real fast and have prettier dates, and 2) I use unix timestamps to back it all, so you can't go earlier than 1970 in this system! For these reasons and the purposes anon described, I think simply using time period tags I described here >>19555 may be better and simpler.
>>19555 >Hydrus Web >How hard are these to set up? If you've ever set up a Quake server and/or messed around with a service like no-ip.org (which lets you convert iamcool.no-ip.org to your home IP address), you are good. Basically you need to know enough about routers and ports to open up a port forward, or do it automatically with UPnP, or if you are super advanced a reverse proxy using a VPN server-hosting service, and then you basically turn the Client API on, point it at the opened port, and then dial in your host:port and Client API key in the Hydrus Web interface, and you have elf babes on your cell phone. WARNING WARNING: The Client API is unencrypted unless you do a load of extra technical bullshit, so just know this is http traffic, not https. Someone snooping on your traffic, i.e. your cell phone provider, would be able to see your images. Actually, now I think about it, I read about some consumer VPN service offering 'meshnet' or something recently, which lets your mobile device join your home network from anywhere in the world--essentially they did all the port forwarding and encrypted tunnelling stuff for you--and that could be an ideal way to run this setup, since you wouldn't care about the unencrypted http part at all, and the host:port you'd put in would be your home IPs, I think? Idk, but something to consider, if that's at all in your wheelhouse.
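If you want to sanity-check the forwarded port before pointing Hydrus Web at it, a quick test against the Client API looks roughly like this (the host, port, and key below are placeholders you swap for your own):
import json
import urllib.request

HOST = 'http://your.home.ip.or.hostname:45869'         # placeholder host:port
ACCESS_KEY = 'paste your Client API access key here'   # placeholder key

def get(path, with_key=False):
    request = urllib.request.Request(HOST + path)
    if with_key:
        request.add_header('Hydrus-Client-API-Access-Key', ACCESS_KEY)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

print(get('/api_version'))                        # is the service reachable at all?
print(get('/verify_access_key', with_key=True))   # does the key have permissions?
And per the warning above, this is plain http unless you do the extra TLS/tunnelling work.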
>>19560 None of this is in my wheelhouse, so I guess it's finally time to become tech literate.
>>19558 Under file->shortcuts->media viewers - all, there's two actions, 'move animations one frame back/forward'. Looks like the default shortcuts are ctrl+b, ctrl+n. I don't actually know how well these will work with mpv IRL, I haven't used them in years, but try it out, I think it'll work! >>19559 Yeah maybe so. I always like the idea of being able to quantify these sorts of things and search and sort them with proper dynamic ranges. When I first started writing hydrus, I always thought I'd write special search tech for bust size, kek, so you'd be able to say 'system:boobs > D cup'. Never happened of course, but 'system:tag as number' can do some neat things sometimes.
>>19562 > Looks like the default shortcuts are ctrl+b, ctrl+n Thank you. I'm retarded for not checking the shortcuts.
I don't know what exact version this started in, but when I updated to 522, files downloaded from pixiv with hydrus companion now include the translated tags. I already have siblings set up for most of the japanese tags, so this new behavior just makes a mess and creates more work. How do I disable this and only have it download the japanese tags like before?
>>19561 >>19560 >I read about some consumer VPN service offering 'meshnet' or something recently I know anon cannot really vouch for anything, but Tailscale is very good for this, and is free for up to 20 devices/account. It also has simple to use apps for every major OS, including Android and iOS. That should make things a lot easier!
(1.74 KB traceback.txt)

Searching with "Collect by" throws the error "'int' object is not subscriptable" and doesn't finish loading. Started happening just after updating, new bug or did I skip too many versions?
>Look at Hydrus Web Git >Totally lost >Look at Hydroid <Building for Android or WebAssembly <The easiest way to get these working is to download the official Qt installer and install the precompiled Qt versions for these platforms >Go look for how to do that <Getting Started with Qt for Android <In order to develop with Qt for Android, you will also need the following prerequisites: <Java Development Kit (JDK) for Java development <Android SDK Command Line Tools for managing dependencies required for developing with Qt for Android, including: <Android SDK Platform <Android SDK Platform Tools <Android SDK Build Tools <Android NDK Is this all saying that if I want to use these Hydrus remote management tools, I have to build an app for them on my phone? I thought figuring out secure port forwarding would be the hard part, but that seems relatively simple if I do it through my VPN, I think.
Do you think you could add a feature where tags can have notes, like files can? There's info about tags that I want to write down, but because there's no way to do it in hydrus like I can with info for files, I just have to keep separate notes somewhere else. Being able to give notes to tags would be very useful to me, so I can stop having these note files where I have to mention which tags each note is about, and just actually attach the notes to the tags directly.
Question. There is a new component called "speedcopy" that is missing. What Linux package do I have to install? I'm using Debian 11 with the "tar.gz" package.
(35.42 KB 508x491 3w57x3.jpg)

>>19567 >phonefag No further comment.
skimming the thread it seems like people started having issues with sankaku around 3 months ago. it was working fine for me at least a couple weeks ago and now it seems to be broken. any thoughts on why this didn't stop me until recently? and is it completely given up on?
>>19570 What's wrong with secure mobile access to my files?
(100.24 KB 1392x785 choice-meme-close-1392x785.jpg)

>>19572 Beyond the unforgivable fact that you are carrying your own electronic leash, there is the problem of the platform you want to use to access Hydrus. Then the issue comes down to: ..... >phone >secure Pick one.
>>19571 You have to use the beta, and you have to find the token in the web page code, which I forget exactly how to do. Open up the web page code (Inspect) and search for it. It's in the GETs, I think. You have to put that token in Manage HTTP Headers in Hydrus. And it's only good for 48 hours, if I remember right. URLs fetched this way are only good for one hour, so you have to download in very small bursts. The token has 'Bearer ' before it, so you can search for that. After that, it's a long string of letters and numbers. It's reset every 48 hours. Someone wrote up how to do this on the internet, so you can search for that.
Well, it looks like sankaku decided to make another annoying change; this time they changed their file page URL. They went from this: https://chan.sankakucomplex.com/post/show/33115890 to this: https://chan.sankakucomplex.com/post/show/61b97e5306752d5c701dc30d3ff7a17d Fortunately it's an easy fix.
After the last update, getting this error every time I start my database, and it stops loading one of the tabs. After further examination, it happens when I set a tab to "collect by " a tag, and it only happens when I do a file search in my second local file domain in the database. v523, win32, frozen TypeError 'int' object is not subscriptable Traceback (most recent call last): File "hydrus\client\gui\ClientGUIAsync.py", line 88, in _doWork HG.client_controller.CallBlockingToQt( self._win, qt_deliver_result, result ) File "hydrus\client\ClientController.py", line 481, in CallBlockingToQt raise e File "hydrus\client\ClientController.py", line 420, in qt_code result = func( *args, **kwargs ) File "hydrus\client\gui\ClientGUIAsync.py", line 54, in qt_deliver_result self._publish_callable( result ) File "hydrus\client\gui\pages\ClientGUIPages.py", line 1085, in publish_callable self._SwapMediaPanel( media_panel ) File "hydrus\client\gui\pages\ClientGUIPages.py", line 550, in _SwapMediaPanel new_panel.Collect( self._page_key, media_collect ) File "hydrus\client\gui\pages\ClientGUIResults.py", line 1996, in Collect ClientMedia.ListeningMediaList.Collect( self, media_collect ) File "hydrus\client\media\ClientMedia.py", line 780, in Collect self._collected_media = { self._GenerateMediaCollection( [ media.GetMediaResult() for media in medias ] ) for ( key, medias ) in keys_to_medias.items() }# if len( medias ) > 1 } File "hydrus\client\media\ClientMedia.py", line 780, in <setcomp> self._collected_media = { self._GenerateMediaCollection( [ media.GetMediaResult() for media in medias ] ) for ( key, medias ) in keys_to_medias.items() }# if len( medias ) > 1 } File "hydrus\client\gui\pages\ClientGUIResults.py", line 2735, in _GenerateMediaCollection return ThumbnailMediaCollection( self._location_context, media_results ) File "hydrus\client\gui\pages\ClientGUIResults.py", line 5158, in __init__ ClientMedia.MediaCollection.__init__( self, location_context, media_results ) File "hydrus\client\media\ClientMedia.py", line 1601, in __init__ MediaList.__init__( self, location_context, media_results ) File "hydrus\client\media\ClientMedia.py", line 470, in __init__ self._RecalcHashes() File "hydrus\client\media\ClientMedia.py", line 1750, in _RecalcHashes deleted_to_timestamps[ service_key ] = max( deleted_timestamps, key = lambda ts: ts[0] ) File "hydrus\client\media\ClientMedia.py", line 1750, in <lambda> deleted_to_timestamps[ service_key ] = max( deleted_timestamps, key = lambda ts: ts[0] ) TypeError: 'int' object is not subscriptable
>>19564 try this: network > downloader components > manage parsers > pixiv file page api parser on the content parsers tab, there should be something with a name like "translated tags". delete it. >>19575 god i hate sankaku
>>19577 This fixed it. Cheers.
>>19577 Yeah, I think I'm pretty much done with Sankaku. I think they mostly just suck off of Pixiv, so what's the point?
>>19579 Better tagging?
Is it possible to create a simple new page through the Client API, like selecting files and open -> in a new page? I see /manage_pages/add_files, but I would like to have the capability to create a new page with a list of file ids, or maybe open a page with a search.
So is there any alternative to nitter when it comes to downloading nsfw images from twitter? Because finding alternate sources for every artist subscription is an absolute nightmare, especially since a lot of them don't even upload anywhere else. Hydrus Companion doesn't even work properly with twitter anymore.
>>19582 A fork (https://github.com/PrivacyDevel/nitter) implemented a login workaround to download and view NSFW tweets. They are hosting an instance with this fork at https://nitter.privacydev.net/, so you have to use that or host your own instance with this workaround until this is implemented in nitter or another workaround is found. The big problem is you hit the rate limit very quickly, especially on a public instance, so if you have the ability to you should host your own, but even on mine with this fork I get rate limited quickly. You could also look into just using gallery-dl to download from twitter using their login system and then importing into Hydrus. (https://github.com/mikf/gallery-dl#username--password)
I had a decent week working on a bunch of different stuff. There's some bug fixes, quality of life improvements, and more work on timestamps--now you can import/export them with sidecars, merge some in the duplicates system, and store timestamps before 1970. The release should be as normal tomorrow. >>19576 Sorry for the trouble, fixed tomorrow! It is when you have deleted files (or rather, files that have deleted timestamps) in your collections. >>19482 I looked at this problem today but I am afraid I was unable to reproduce the issue. I used those bad unicode quote and long-dash characters, but they all COMPOUNDed ok for me. I am really not sure what was happening, especially since your sub-formula parses the text ok. Basically the COMPOUND formula, it turns out, is simple--it just looks for '\n' and does a python text replace, so there's no re-encoding or anything. I think, if you are still getting this, the next step is to gather more test data. If you can provide a pastebin of the page you are parsing, or the URL, and a copy of the content parser or whatever is producing this, I can test my end with the exact same data objects you are using, and we can see if this is somehow environmental or not.
https://www.youtube.com/watch?v=UdiwBiw5dyo windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v524/Hydrus.Network.524.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v524/Hydrus.Network.524.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v524/Hydrus.Network.524.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v524/Hydrus.Network.524.-.Linux.-.Executable.tar.gz I had a good week working on bug fixes, quality of life, and more timestamps. Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html timestamps I've added timestamps to the sidecars system. You can import/export any timestamp the file has, so it is now possible to migrate various times (particularly archive and imported) from one client to another. I still need to add some more things to sidecars, e.g. inbox status and ratings, but we are almost at the point where we can do complete client-to-client transfers of all the major metadata. I have plans to make this ultimate goal simple with templated one-click jobs and no need to repeat the file import--just a raw JSON file metadata import/export. Also, in the duplicate system, file modified dates can now be merged, and by default they will--for 'this file is better', then an earlier timestamp on the worse file will be applied to the better file, and for 'they are the same', both files will get the earlier of the two. If you don't want this to happen, check your duplicate merge options and set 'make no changes'. Also in duplicates, if you have URLs set to merge, then any associated domain timestamps with those URLs will be earliest-merged too. Lastly, the program can broadly support pre-1970 timestamps now. 1970-01-01 is a special date in computing, and supporting times before it can be tricky, but I've fudged some general support. It is a bit of a meme, but maybe it'll be neat to apply '1503' to old paintings. The code I was working with says it is good back to 1 AD, but it seems like it can actually go further, ha ha. Anyway, have fun, and let me know if it fails. I know that the '1,504 years 2 months ago' strings are a little inaccurate--it turns out I haven't been counting leap-year days in those calculations, and it shows on the longer durations. other highlights Export Folders now have popups while they work. You can turn them off for regular runs, but now Import and Export Folders force-show their working popups when you manually start them from the file menu. These popups have stop buttons, so you can cancel Export Folders mid-job now too. Also, I removed the stupid and confusing 'paused' vs 'run regularly?' duality. Export Folders now just have 'run regularly?'. I added some special predicate labels, 'system:ratio is square/portrait/landscape', for =/taller/wider than 1:1. They are in the quick-select list for system:dimensions too. next week I pushed it too hard this week and exhausted myself. Thankfully, next week is due to be code cleanup, so I'll try and take it easy and just dive into some simple refactoring work and do some small jobs here and there.
>>19585 >I added some special predicate labels, 'system:ratio is square/portrait/landscape', for =/taller/wider than 1:1. They are in the quick-select list for system:dimensions too. Schweet. >All that work put into pre-1970 timestamps >Could bug out >Doesn't account for leap years I must repeat, it is probably a lot easier and simpler to use "date:" or "time period:" namespaces for times that aren't metadata about the file itself, but simply relevant to a file.
Is there a way to filter images with alpha?
Is it possible to spread the database out over multiple drives (i.e. disk spanning)? My drive holding hydrus and its database is getting full, and I need to push part of the database to another drive. I see you can make multiple "my files", but can you push them out to other drives and still have hydrus see them all?
>>19588 Yes, you can! I'm running Hydrus from source in one directory, the database in another with the thumbnails, and the files themselves on another (slower) drive. You can't spread the database itself though ("db/*.db"; these need to stay in the same dir AFAIK); only the files Hydrus indexes. Check out https://hydrusnetwork.github.io/hydrus/database_migration.html#pulling_media_apart, but keep in mind that using this will disable the backup functionality from Hydrus; you'll need to use an external program (I use rsync, for example).
Feature request: sort by sha256sum. My reasoning is that it seems random to humans but doesn't change when refreshed. I've noticed that when I use the archive/delete filter on randomized files I realize that I don't actually want a file more than if I sort by time/creator/etc. My guess is that I think something like "oh, this artist's stuff is pretty good" and then don't actually think about whether or not I want to keep each file. I could just use random sort, but I have a habit of refreshing to remove deleted files. I know you can do this without refreshing, but I do it without thinking. If I refreshed on random, everything gets scrambled again and I can't really make progress.
>>19589 Thanks. Yeah, I'm wanting to spread the database files themselves. I've got like 1.6 million files now, and I need to span some of the files onto another drive or two. It kind of hints in the documentation that you can, so I'm backing up my hydrus install right now before I try it.
>>19590 This already exists. Sort by → file → hash.
>>19586 >I must repeat, it is probably a lot easier and simpler to use "date:" or "time period:" namespaces for times that aren't metadata about the file itself, but simply relevant to a file. That is what I have been doing, but if OP can develop a neat internal date search it would be noice. Bonus points if such a search allows seeking among time ranges.
>>19592 Damn, must've missed it. Thanks mate
>>19584 >I looked at this problem today but I am afraid I was unable to reproduce the issue. Sorry for the confusion--I've finally figured it out: it's because I do a "decode from unicode escape characters" in the pre-parsing conversion. It's in the first subsidiary page parser ("script json getter") which actually gets the json from the html. If I look at the stuff from the "post pre-parsing conversion" tab, it's already got the weird characters like â and € and œ. I'm not sure how to solve this because I obviously need to unescape the json to actually get anything from it. If you still want to look at it, I've attached the parser. It uses the creator's avatar as a placeholder file to attach the text to. It's not really useful though because the text is inside a bunch of markup. I should just create a standalone python script to do all this instead. Sorry for wasting your time looking at the compound stuff. My screenshots are definitely misleading, because I lied to make them. In the parser editor, for some reason the first subsidiary page parser ("script json getter") fails to populate the "post separation" tab. So when you open the second subsidiary page parser ("this deviation getter"), it doesn't have any data from the previous window. You can try it yourself with the normal deviant art parser. I don't know why it's like this, because it clearly works as a whole when you test or use the parser. Anyway, this makes testing annoying because I have to manually replicate the changes that the separation formula does and then paste those into the raw data tab for the second subsidiary page parser or other nested parsers. Unfortunately I wasn't accurate in my replications, so I didn't find the real issue. If you want to test the attached parser, I've also attached the raw data that should be pasted into the second subsidiary page parser to facilitate testing. (Though by the time the second subsidiary page parser gets the data, it's already messed up.) This time it should be accurate. Now to continue this wall of text even further. You know how when you're editing a parser and then click cancel, it says "you've made changes, are you sure to want to cancel?" It would be nice if it said that for the "manage parsers" dialog as a whole. You can add or change a bunch of parsers and then accidentally click cancel on the main "manage parsers" dialog and it's all gone in an instant. Speaking of that dialog, it sometimes appears in cases where you've made no changes. I've found that trying to close the hentai foundry file page parser will always trigger that dialog even when you did nothing.
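For what it's worth, the â / € / œ soup is the classic signature of UTF-8 text being read one byte per character, which is what a blanket unicode-escape style decode will do to any real multi-byte characters in the page (the exact internals of hydrus's conversion step aside). A tiny illustration:
raw = '“'.encode('utf-8')      # b'\xe2\x80\x9c' -- one curly quote is three bytes in UTF-8
print(raw.decode('cp1252'))    # 'â€œ' -- each byte becomes its own character, the garbage in the screenshots
print(raw.decode('utf-8'))     # '“' -- the correct decode
One workaround might be to apply the unicode-escape step only to the pieces that actually contain \uXXXX escapes rather than to the whole document.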
>>19591 Well, I'm spanning the db now. Going to see if I can put 10% on another drive. I'll let you guys know how it goes.
>>19575 can you please detail that fix for someone with no tek knowledge?
>>19565 Thanks, I investigated this and it looks pretty cool! There also seems to be a 'run absolutely everything yourself' solution in 'Nebula', here https://github.com/slackhq/nebula I'm a crazy shut-in and don't actually need this tech as much as I like it, but I'm going to keep it in mind. It feels like it should be super useful to be able to neatly and securely VNC/whatever to my machine from a tablet anywhere in the world. >>19566 Sorry, fixed in v524! I messed something up with a typo and my collections tests missed it. >>19567 Unfortunately almost all the experimental stuff here is a massive technical pain in the ass and you need to have some experience with web/app tech to get going. The app stuff is beyond me, if it makes you feel any better, but I think you have strayed into the more complicated side of things. The easiest solution right now is the main Hydrus Web interface, right here: https://hydrus.app/ Just get your port forwarded and put the host:port and Client API key in there and you'll be off. >>19568 Yeah, I'd love this sometime. All the big boorus have a tag definitions wiki, and it would be neat (and, unfortunately, a big project) to add that capability to hydrus. Doesn't have to be super clever--as you say, the first version can just be raw text notes on tags. It'd be particularly neat to have import/export too, so we can parse existing definition wikis and suck them into hydrus. I had dreams once of a fully interconnected tag network where every tag had a language/slang-level tag so you could set automatic translation/presentation (even to 'vagina' vs 'pussy'), but this stuff gets complicated fast. I should stop dreaming about ideal solutions and just get a 1.0 out the door. >>19569 Don't worry about it. It was an optional experiment for some source users who wanted to test it out (it apparently speeds up some NAS copies), but I never got glowing reviews back, and I did get one 'it broke, fuck' response, so I think I'll probably retire it sometime.
>>19571 >>19574 >>19575 As background here, the hydrus sankaku downloader just doesn't work for a good number of people. I assume they have some IP blocks on certain country/VPN ranges or something. Thanks for letting me know about the md5 URL change. I don't know if they did it to make it more difficult to download from the site, but I've been given a fixed URL Class and I'll roll it into next week's release. It may play havoc with some subscriptions though, since it'll think they are all new URLs -_-. I'm sort of leaning against them, though, in the same way I'm leaning against twitter. Some sites are just proving a massive pain to deal with, so if and when you can, please move to more open places. When you can't migrate, I just did a test and Gallery-dl seems to do sankaku fine. You don't get tags and stuff, but it works! >>19581 Not right now, but I'd love to add it in future. Top priority for the Client API right now is getting file metadata more fleshed out with ratings and timestamps, and getting rating/timestamp editing working, and then I can do some more page management stuff. Since I added the file and tag service selection stuff, it should be much easier now to let you open a new search page on the particular file search context you want. Check this for an example, but instead of tags_1 and tags_2, there will just be one of them: https://hydrusnetwork.github.io/hydrus/developer_api.html#manage_file_relationships_get_potentials_count >>19582 >>19583 I tried out Gallery-dl on twitter this week, and it worked great, even for nsfw. You can give it your user/pass, a cookies.txt, and apparently it can rip cookies straight out of firefox/safari too. You don't get any tags, but it'll work for one-off jobs. >>19587 Not yet, but I'd like to add a database cache for this info when I push on search in the duplicates system.
>>19595 Thanks, this is interesting. There must be a way to wangle this, I'll check it out. >You know how when you're editing a parser and then click cancel, it says "you've made changes, are you sure to want to cancel?" It would be nice if it said that for the "manage parsers" dialog as a whole. You can add or change a bunch parsers and then accidentally click cancel on the main "manage parsers" dialog and it's all gone in an instant. Sure! >Speaking of that dialog, it sometimes appears in cases where you've made no changes. I've found that trying to close the hentai foundry file page parser will always trigger that dialog even when you did nothing. Yeah something is busted. I had this before--it is because every time the dialog opens an older parser, it rejiggers something into a newer format, and so I have to compensate--but it has happened again. I'll fix it!
>>19599 Yeah, I tried out Gallery-dl and it works fine. It doesn't really solve the issue of subscriptions, though. I guess I'll just pause my twitter subscriptions and hope a solution is found eventually.
(15.20 KB 622x267 go to.png)

(60.64 KB 1190x608 before.png)

(69.64 KB 1214x598 after0.png)

>>19597 It's easy: go to network > downloader components > manage url classes (see pic 1). Find "sankaku chan file page" and edit (see pic 2) the "any number of numeric characters", "example url", and "normalized url" fields to look like pic 3.
>>19598 >https://hydrus.app/ Thanks. This was fairly simple.
Qubesanon here, having some issues with audio in mpv. I may have fucked something on my end, but for some reason audio isn't working in the media viewer. Preview works fine, though. Setting preview to share the same volume as the media viewer didn't do anything, and the volume slider widgets seem to be broken by the magic ritual I use to make mpv embed properly. Is there a way to reset volume/mute/etc outside of the little widget thing that pops up when you mouse over it? >>19599 >You don't get any tags You can configure it to, but the configuration syntax is a bit obtuse (at least to me).
Hydev, is it possible to make hydrus respond to the buttons on my wireless headphones? I tried adding a shortcut for pressing a button on my wireless headphones, but hydrus doesn't recognize it. I don't know exactly how it works, but I think you would use the Qt key enum from here: https://doc.qt.io/qt-6/qt.html#Key-enum and the media keys like these:
Qt::Key_MediaPrevious
Qt::Key_MediaNext
Qt::Key_MediaTogglePlayPause
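For reference, the check on the Qt side is simple enough; here is a minimal sketch (not hydrus code) of how those key enums can be read from Python, assuming PySide6 is available. Whether the OS actually routes headphone button presses to a focused Qt window is a separate question and may need OS-level media key hooks.
```
# Minimal sketch, not hydrus code: catching Qt media keys in a keyPressEvent handler.
# Assumes PySide6; whether the OS delivers headphone buttons to the window is another matter.
import sys

from PySide6.QtCore import Qt
from PySide6.QtWidgets import QApplication, QLabel


class MediaKeyLabel(QLabel):

    def __init__(self, text):
        super().__init__(text)
        self.setFocusPolicy(Qt.StrongFocus)  # QLabel does not take keyboard focus by default

    def keyPressEvent(self, event):
        key = event.key()
        if key == Qt.Key_MediaTogglePlayPause:
            self.setText('play/pause pressed')
        elif key == Qt.Key_MediaNext:
            self.setText('next pressed')
        elif key == Qt.Key_MediaPrevious:
            self.setText('previous pressed')
        else:
            super().keyPressEvent(event)


if __name__ == '__main__':
    app = QApplication(sys.argv)
    label = MediaKeyLabel('focus this window and press a media key')
    label.show()
    sys.exit(app.exec())
```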
>>19596 Spanned the database, and everything looks to be working ok! Database is spread over 2 drives now.
>>19583 You can make your nitter parser for https://nitter.privacydev.net/ parse the tweet but get the files themselves from another nitter instance. Puts less of a strain on privacydev's servers, I guess? For example, let it parse https://nitter.privacydev.net/Twitter/status/1601692766257709056 but once you've parsed it, ask for https://nitter.lacontrevoie.fr/pic/orig/media%2FFjpaSdWWAAUODJy.jpg instead of https://nitter.privacydev.net/pic/orig/media%2FFjpaSdWWAAUODJy.jpg This works for NSFW tweets as well.
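If anyone wants to do that host swap with a quick script rather than inside a parser, it is just a hostname replacement; a minimal Python sketch, assuming the /pic/... path layout is the same across nitter instances (which may not always hold):
```
# Minimal sketch: rewrite a nitter file URL to point at a different instance.
# Assumes the /pic/... path layout is identical across instances.
from urllib.parse import urlsplit, urlunsplit

def swap_nitter_host(url, new_host='nitter.lacontrevoie.fr'):
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, new_host, parts.path, parts.query, parts.fragment))

print(swap_nitter_host('https://nitter.privacydev.net/pic/orig/media%2FFjpaSdWWAAUODJy.jpg'))
# -> https://nitter.lacontrevoie.fr/pic/orig/media%2FFjpaSdWWAAUODJy.jpg
```
Inside a hydrus parser, the same effect comes from a string conversion on the parsed URL.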
(32.71 KB 1230x221 bug1.png)

(9.73 KB 1228x79 bug2.png)

Is it just me or did the sankaku channel parser/url classes break somehow? I haven't changed anything, but all of a sudden it's not recognizing post urls; it thinks they are gallery urls, even though they're definitely in the post format.
>>19602 >>19608 Never mind I am stupid and can't read. My bad.
>>19602 Thanks. I would not have figured that out on my own.
>>19604 Me again, figured it out. I had accidentally muted audio in the media viewer somehow before activating mpv magic. I can't change volume or mute/unmute after doing it, so the solution was to unmute it before the ritual. While trying to figure it out, I found that it seems there are no keybindings for volume aside from global mute/unmute.
I had an ok week. I fixed several bugs (including a really bad one related to deleting inc/dec rating services), added a couple unusual features like jpeg subsampling detection and File URL redirects, and updated several core libraries (e.g. Qt) for all users. The release should be as normal tomorrow.
>>19599 Gallery-dl itself may not get tags, but hydownloader (which uses gallery-dl under the hood) does. I've found it works very well for what I use it for.
>>19613 I think it can also download ugoira.
I'm running a gallery downloader on files on a website to grab the tags from that website, but I noticed an issue. Sometimes a file will appear in multiple posts, but since the gallery downloader already saw that file url the first time, it only fetches the tags for the file from the first post it saw, and then just does nothing for the other times it sees that file url again. How do I get the downloader to always try to fetch the tags for a file, even if that file url was already downloaded by the gallery in another post?

The "force page fetch even if url recognized" option doesn't fix the issue, because this only forces it to fetch the page and get the tags for the first time that the file url appears and shows up in the file log. I want it to fetch the page (and thus the tags) for every post that appears in this gallery search, and associate those tags with that file url, so it can add the tags from all posts to the file, not just the first one it comes across.

This seems to only be an issue with gallery downloaders, because subscriptions seem to correctly fetch the tags of a file url that comes from different post urls. I'm guessing this is because hydrus won't touch a file url that's already showing up in the gallery file log, so it doesn't add the additional tags that different posts have for that file url.
https://www.youtube.com/watch?v=MgNCjYXCxOc

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v525a/Hydrus.Network.525a.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v525a/Hydrus.Network.525a.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v525a/Hydrus.Network.525a.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v525a/Hydrus.Network.525a.-.Linux.-.Executable.tar.gz

There was a problem with the initial v525, so I rolled back the OpenCV update in this new v525a. If you installed v525 and cannot boot, please delete 'install_dir\cv2\cv2.cp39-win_amd64.pyd' or just do a clean install! If you are an advanced Linux user and use the built release above, please check the note at the top of the old v525 release here: https://github.com/hydrusnetwork/hydrus/releases/tag/v525 . Thank you!

I had an ok week. There's a mix of small work and some library updates for everyone.

Full changelog: https://hydrusnetwork.github.io/hydrus/changelog.html

new library versions

Some advanced users have been testing new library versions for Qt (user interface), OpenCV (image processing), and, on Windows, mpv (audio/video display). There haven't been any reported problems so far, so I am rolling these new versions into the regular build and automatic setup scripts for source users.

If you use one of the builds above, you don't have to do anything special to update--with luck, you'll simply get some subtly improved UI, images, and video. If you are a source user, you might like to run the 'setup_venv' script again this week after pulling, and it'll update you automatically. If you are a Windows source user, you might also like to get the new mpv dll, which requires a simple rename to work, as here: https://hydrusnetwork.github.io/hydrus/running_from_source.html#built_programs

I don't expect this to happen, but if you are on an older OS, there is a chance you will not be able to boot this release. If this is you, make sure you make a backup before updating, and then if you have problems, roll back and let me know, and we'll figure out a solution. If you can't boot after an update, but a fresh extract on your desktop works fine, then you probably have a dll conflict, so run a clean install: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs

other highlights

I fixed some more time/calendar stuff. Some longer time durations now compensate for leap years and span month-deltas more precisely.

In the media viewer, clicking the 'exif and other embedded metadata' button up top on jpegs now shows their subsampling (e.g. 4:4:4) and if they are progressive.

I fixed an awful bug related to deleting the new inc/dec rating services. If you deleted one of these and found you couldn't load many files, sorry for the trouble--the underlying problem is fixed, and updating will correct any bad records!

The Sankaku downloader gets some new download objects this week to handle their recent shift to md5-based Post URLs. Unfortunately, this is going to cause any sank subscriptions to go slightly bananas on their next check, hitting their file limits since they think they are seeing new URLs. There's no nice solution to this, so you will pretty much just want to wait them out.

next week

More small stuff like this I think. Maybe I'll squeeze some API work in.
Edited last time by hydrus_dev on 04/26/2023 (Wed) 22:57:10.
Tried googling/looking in the FAQs and can't seem to solve this. Why do I only get a small portion of posts when I download a tag from Gelbooru? Sankaku gives me hundreds with the gallery tag search, but Gelbooru is only giving me a small portion.
I was randomly using hydrus on linux just yesterday and noticed mpv going absolutely bork. When you'd select a video, either in thumbnails or the media viewer, the usual player just stays blank with only the viewtime slider moving, and mpv pops out somewhere random in its own window (image 1). After closing/switching to a picture, mpv stays active as a black window (image 2; there are two mpv windows overlapping). You can't close these windows independently either; they exist until you close hydrus. Furthermore, hydrus dropped a log message that the libmpv core broke (txt file).

I haven't used hydrus on linux much but intend to do it more and more, so might as well get to fixing it. If anyone has an idea of what's happening, I'm happy to try stuff out.
>>19618 Sounds almost exactly like what I've been experiencing on Qubes, but since Qubes has a special window manager I'm able to try and close the black/floating window easily. Try the "ritual" I outlined here, if you can: >>19447 You might have to find an alternative way to 'close' the floating mpv window in order to do it, though. Maybe a keybinding in your WM/DE settings. Audio still works for me after doing this but note that whatever volume/mute settings were active before doing the ritual persist afterwards and can't be changed until you restart.
>>19619 Close, but not quite your issue. For me, it always spawns two rogue mpv windows regardless of what I do. None of them embed correctly; they always pop out. "Closing" them via a WM kill just freezes them on the current frame, like you mentioned, but they don't actually go away (you have to minimize to reclaim the space). There are also 3 different log messages when playing around with this issue. I'll figure out what happens when, and describe the issues along with the logs.
So, I just updated to 525a, and now I'm getting this error on boot.
```
Traceback (most recent call last):
  File "C:\x\Hydrus Network\hydrus\hydrus_client.py", line 19, in <module>
    from hydrus.core import HydrusBoot
  File "C:\x\Hydrus Network\hydrus\core\HydrusBoot.py", line 3, in <module>
    from hydrus.core import HydrusConstants as HC
  File "C:\x\Hydrus Network\hydrus\core\HydrusConstants.py", line 2, in <module>
    import sqlite3
  File "C:\Python39\lib\sqlite3\__init__.py", line 57, in <module>
    from sqlite3.dbapi2 import *
  File "C:\Python39\lib\sqlite3\dbapi2.py", line 27, in <module>
    from _sqlite3 import *
ImportError: DLL load failed while importing _sqlite3: The specified module could not be found.
```
I went ahead and wiped my old venv for this and let setup_venv.bat make my new one. I've got the sqlite3.dll in the root folder, and am running from the venv.
>>19621 So, this was a Python issue. I just updated to 3.9.13 before updating Hydrus, and apparently the Python installer just shat its fucking pants. Deleted the .pyd files and ran a repair install, and it's fine now.
>>19617 Check your whitelist and blacklist. Check search log. Also check file section in imports for size limits, etc. Also, check to make sure download limit is set to "no limit". You might also check if you have a blacklist set in the Gelbooru web page itself. Not sure if that would affect Hydrus or not.
>>19623 Also, it may be that Sankaku has pages per pic, and Gelbooru doesn't.
(137.65 KB 1282x800 hydrus database.png)

>>19556 Is moving the whole thing to a new singular location considered a "complicated database"? I had this setup on my old PC and was able to import and update backups with no issue. Here's how my database setup currently looks.

Based on your comment:
>I presume the 'multiple locations' here are because your database files proper (client*.db files) and your media files (client_files) folder are now in slightly different locations
I guess I messed something up here when I moved it? Is it because the "database components" and "media files" aren't in the same location? Really, all I want to do is be able to import my database backup from my old PC so I can access all my files again.
v525a is causing a crash on Qubes when I try to do the mpv ritual thing (specifically, the opening media viewer part). Site doesn't like my txt log, here's a paste: https://bin.snopyta.org/?7e1832e6946efc24#aiJeZVZDqo5Kx4eWepTZRHg9QXC4LKK9CR83WH1EwQc
>>19604 >>19611 Thanks, adding some more shortcuts for volume control is a good idea.

Yeah, I just looked at some of the gallery-dl gubbins here: https://github.com/mikf/gallery-dl/blob/master/docs/configuration.rst , although I may be looking in the wrong place. That seems to be for export filenames, but I'd imagine there must be a way to suck up these various 'tags' handles and export them to a JSON sidecar. Ideally, we'll look at this a bit more and figure out some decent confs/workflows that export what we want to a single JSON file, and then I can set up a sidecar template to suck it up again. We'll want this tech even when I integrate gallery-dl into hydrus itself, so I can populate the import object with the same.

>>19605 Sure, I'll see what I can do! Ideally, these'll just plug right in. My keyboard has these buttons so I can probably test, but if I can't, let me know how they work for you.

>>19613 Ah, great, I'll check how it does it.

>>19615 This is strange, thank you for the report. By a normal 'gallery downloader', we are talking about a normal kind of booru downloader, right? It searches a gallery page, produces post urls, and chases those (parsing tags) to get a direct file url, which it downloads and imports? The 'force page fetch' option is specifically designed to work in this case. You might like to play with help->debug->report modes->network report mode. It will spam you with popups, but I believe it will say when it is downloading a particular page and what parser it intends to use. If you can't figure out what is going on, can you send me the downloader and some example URLs/searches, so I can try to replicate it?
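On the gallery-dl sidecar point above, a rough sketch of one possible workflow: gallery-dl's --write-metadata option writes a .json file next to each download, and those sidecars can then be read back. The tag field name and shape vary by site, so treat this as illustrative only, not a finished conf.
```
# Rough sketch, not a finished workflow: download with gallery-dl's --write-metadata
# so each file gets a .json sidecar, then read the sidecars back. The tag field name
# ('tags', 'tag_string', etc.) and whether it is a list or a string depend on the site.
import json
import pathlib
import subprocess

def grab_with_sidecars(url, directory='gdl_test'):
    subprocess.check_call(['gallery-dl', '--write-metadata', '-d', directory, url])
    for sidecar in sorted(pathlib.Path(directory).rglob('*.json')):
        metadata = json.loads(sidecar.read_text(encoding='utf-8'))
        tags = metadata.get('tags') or metadata.get('tag_string') or ''
        if isinstance(tags, str):
            tags = tags.split()
        print(sidecar.name, tags)
```
From there, a hydrus sidecar import template pointed at those .json files could pull the tags in alongside the files.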
>>19627 >We'll want this tech even when I integrate gallery-dl into hydrus itself Oh shit, I had no idea that was even in consideration. Sounds like great news. I once tinkered with getting gallery-dl produced JSON to import tags and URLs but kind of forgot about it and never finished doing it. I'm assuming user-made downloaders would still be usable alongside gallery-dl, right?
>>19617 If you are downloading spicy content, you need to set a header, I think--check here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Gelbooru

>>19618 >>19619 >>19620 Aha, thank you for this report. Are you running the built release, as I put out in my release posts, or do you run from source? If you are on the built release on Linux, the best way to relieve crazy interactions like this is to move to running from source. That way, all the .so files are native to your platform, from your package manager, and ideally there are fewer hoops to jump through. I can't promise anything though. The guide is here; it is pretty easy now: https://hydrusnetwork.github.io/hydrus/running_from_source.html

I have seen this sort of error before. As far as I can tell, it is basically when your window manager freaks out at the way hydrus's Qt tries to embed the mpv window (obviously), so you might have luck changing window manager, too. Unfortunately, the way this works is so hacky and duct-tape that I can't provide a lot of support for unusual situations. Let me know how you get on!

>>19621 >>19622 Interesting, thanks! I'm sorry for the trouble, but that will be good to know for the future. I hate updating my python; I usually save it for when I get a new computer to dev on, and then I'm merging the various headaches together.

>>19625 Yeah, the trick here is that 'beneath db?' column. If everything in that dialog is beneath db, then my simple backup routine, which works on one folder, is happy.

Your db is here: /home/.../.local/share/hydrus/db
Your files are here: /run/media/.../Storage/Hydrus/Database

If your files were under /home/.../.local/share/hydrus/db/files or something, then your database would be considered simple. Please move to an external backup solution--it'll work a lot better than my thing anyway.

And if you want to import an existing database backup, then just have a look at its folder structure. It should basically have your client.*.db files, and a client_files folder with all your files and thumbnails in it. All this 'migration' dialog does is move those things around, so if you want to restore a backup, then just place the components of the existing backup in locations that look good and try to boot the db. The worst case scenario is the files are in the 'wrong place' and it'll throw up a repair dialog when you boot. If I didn't link you to this earlier, there is lots of background reading here: https://hydrusnetwork.github.io/hydrus/database_migration.html

>>19626 Damn, yeah, you are having something like the guy above, with the libmpv MainLoop killing itself, but seemingly for other reasons. I am guessing you are running from source? If not, I recommend you also try to move to source. If you are running from source already, try looking at the different versions of mpv available in your package manager. You might like to try, if you know how, updating your venv's version of python-mpv. I have us stuck on 1.0.1, since it works for almost everyone, but there is a 1.0.3 now.

>>19628 Yeah, my idea is basically to have:
- an 'exe manager' that stores profiles of external executables and what they can do with various conf/parameters
- upgraded url classes that can say 'for this url, send it to xxx exe'
Then we'll be able to pipe difficult URLs to gallery-dl or yt-dlp or whatever, and ideally have a framework of sidecars or whatever the hell to get associated metadata with the file.
As you say, it won't replace the existing downloader, but it'll give us tools and flexibility for the future. (If you were clever, you could even set up an external API that hydrus could talk to in this case!) Oh yeah, and I expect to add ffmpeg and waifu2x templates to this to start probing at optional automatic mass file format conversion, which is a different topic but also fun.
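To sketch the shape of that 'exe manager' idea a little more concretely--everything below is hypothetical, not actual hydrus internals--it is essentially a routing table from url class patterns to external command templates:
```
# Very rough sketch of the 'exe manager' idea: map url patterns to external tool
# profiles and shell out. All names and patterns here are hypothetical examples.
import re
import subprocess

# profile name -> command template ({url} gets substituted)
EXE_PROFILES = {
    'gallery-dl': ['gallery-dl', '{url}'],
    'yt-dlp': ['yt-dlp', '{url}'],
}

# url class pattern -> profile to hand the url to
URL_CLASS_ROUTES = [
    (re.compile(r'https?://(www\.)?twitter\.com/'), 'gallery-dl'),
    (re.compile(r'https?://(www\.)?youtube\.com/watch'), 'yt-dlp'),
]

def dispatch(url):
    for pattern, profile in URL_CLASS_ROUTES:
        if pattern.match(url):
            command = [part.format(url=url) for part in EXE_PROFILES[profile]]
            return subprocess.run(command, check=True)
    raise ValueError('no external tool routed for this url: ' + url)
```
A real version would obviously need per-profile conf, sidecar handling, and error reporting, but the dispatch itself is small.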
>>19629 Thanks, I got everything set up again from a backup. That link helped a lot. I will look into other backup options though, as you suggested.
>>19630 Macrium Reflect is a really good backup program.
>>19629
>Aha, thank you for this report. Are you running the built release, as I put out in my release posts, or do you run from source?
I run "from source" at the commit of releases and compile the python code to use native system libraries while building the package.
>As far as I can tell, it is basically when your window manager freaks out at the way hydrus's Qt tries to embed the mpv window
I dunno, could it be caused by running Wayland, possibly? Would be annoying, since I don't intend to switch back to X11... Also, I forgot to get back and post exhaustive logs, apologies. Will do that soon(tm)
>>19632 I was looking at the requirements.txts today and remembered we are straddling two different versions of 'python-mpv', which plugs your libmpv into Qt--0.5.2 and 1.0.1. When you use my interactive 'setup_venv' script, it asks if you want the old or new one and installs one or the other. This doesn't apply to you, I don't think, but it turns out the requirements.txt I use for the Linux build is still on the old version, for compatibility reasons I think.

Anyway, thinking back on your problem here, since you have a 'running from source' setup, please see if you can edit your venv or build script or whatever you are doing here, and play with whichever of python-mpv 0.5.2, 1.0.1, or 1.0.3 you haven't been using (that last one I was testing today; it seems fine and has some bug fixes and mpv MainLoop crash handling). I know that in order to get the mpv 2.0 API working, you need the 1.x.x release, so if you are on 0.5.2 and have a line to update your libmpv to a really new version, you might need it in any case.

In related news, this stuff is pissing me off, so I'm going to see about getting the new Qt 6 Media Player embed working as my medium-sized job next week. It should be a better fallback than my native jank.
Does hydrus give a way to know what subscription an image was downloaded from? I am finding images I don't like in my inbox, and I am trying to "unsubscribe" from certain artists by removing the artist's name from my subscription list (this is how I download all of my images). Unfortunately, I sometimes find that the artist goes by a new alias, so there is no corresponding subscription. I have to go to danbooru and check whether my subscription list has the artist's old names, and if danbooru's alias list is incomplete then I am SOL. Is there any way to address this currently, or to request that this feature be added? Thanks anons and hydrus dev
>>19633 Good call, I'll patch in version 1.0.1 and/or 1.0.3 and see what happens. Will be back with more info.
>>19634 You might try a Whitelist. I had this problem with Pixiv, where, for some reason, I was getting pics that had none of the tags I had searched for. Putting my search tags into the Whitelist solved the problem.
>>19636 There's also the more complex version of the blacklist/whitelist beside the Whitelist tab. Not sure if this would help you with subscriptions or not, but maybe you can figure something out.
>>19633 >>19635 Actually, never mind, I'm back. Obviously I use the system packages, which are on python3-mpv-1.0.1. On another note, could it be an issue that I build with Qt5? I didn't get PyQt6 running on my system yet; dunno if Qt5 is still supported officially?
>>19635 >>19638 Aha, I see. Sounds like you are running off your system python. I used to do this myself, but I learned about venvs and now I strongly encourage everyone to move to them for anything more advanced than simple scripting. Forgive me if you know about this all already, but venvs (virtual environments) are basically a python-in-a-box that you can install anything to without messing up your system python with new versions or anything. If I haven't pointed you at my newer 'running from source' help, please check it out here: https://hydrusnetwork.github.io/hydrus/running_from_source.html A few months ago, I wrote some really easy 'setup_venv' scripts that are in the base install of the source extract or github repo clone that set everything up for you. Please give them a go, they'll ask some questions and get all the right versions for you and set up Qt6 and all that (assuming your OS isn't super old and can't run it!) all in a few minutes. If you would rather do things more manually, check the second half of the help page, where I walk you through doing it yourself and talk about venvs more. Qt5 could be the cause of some of the jank here, and Qt6 has a whole load of improvements, so if you can run it, I recommend it. Let me know how you get on! Qt5 is supported, but I can't promise anything. Win 7 users have to use it. >>19634 >Does hydrus give a way to know what subscription an image was downloaded from? Not yet, but this is increasingly on my mind. I need to do a whole bunch of subscription-infrastructure improvement, but I do really want greater integration of subscription-knowledge into the general workflow. I want you to be able to right-click on a file and see that you downloaded it from your danbooru artists sub.
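For anyone wondering what a venv actually amounts to, the core of it is tiny; a minimal sketch using only the standard library (the paths and requirements file here are illustrative--the real setup_venv scripts do a fair bit more, like choosing library versions for you):
```
# Minimal sketch of the venv idea using only the standard library; paths and the
# requirements file name are illustrative, not what hydrus's setup_venv scripts do.
import subprocess
import venv

venv.create('venv', with_pip=True)  # an isolated python + pip in ./venv

# install into the venv's interpreter, leaving the system python untouched (POSIX layout)
subprocess.check_call(['venv/bin/python', '-m', 'pip', 'install', '-r', 'requirements.txt'])
```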
New thread here >>>/t/12094 I'll post v526 to it tomorrow, and this thread should be migrated to /hydrus/ soon. Thanks everyone!

