/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.



(25.39 KB 480x360 80uElI8gtD8.jpg)

Version 322 hydrus_dev 09/12/2018 (Wed) 22:42:38 Id: a3742c No. 9943
https://www.youtube.com/watch?v=80uElI8gtD8

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.OS.X.-.App.dmg
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v322.tar.gz

I had a good week. I took it a little easy after pushing to get over the last big overhaul hump last week. There are some fixes, new help, and interesting misc work.

downloader help and fixes

I am thankful and relieved that last week's big shift seems to have basically gone well. There were a couple of bugs–artstation and tumblr were only searching the first page of their galleries, and the new GUG object had some unclean ui in places–but these are fixed today. Working with that code without having to deal with the old legacy system has been great.

I've also significantly extended and polished the help here:

https://hydrusnetwork.github.io/hydrus/help/downloader_intro.html

If you are interested in writing your own downloader for hydrus, please check it out. It is not short or trivial–it is a powerful system!–but I've tried to make it fairly comprehensive. Let me know if anything is too confusing or too awkwardly worded. The only bit left to do is 'how to easy-share a downloader', because I haven't written that system yet!

And I've added right-click->try again/skip to the gallery logs for Gallery and URL Downloaders. This is semi-experimental, so if you are an advanced downloader and I've been talking to you about 'resuming' a failed download, please give it a go and let me know how it works IRL. Subscriptions and Watchers work on different gallery sync logic, so their gallery logs remain 'read-only' for now.

misc

You can now put '\' (or '/' for Linux) in file export phrases! It is now easy to export well-tagged files to complicated folder structures, including those based on namespace/tag!

File drag and drops started from thumbnails in hydrus now begin with significantly less lag, particularly for very large drops.

The 'would you like to do maintenance on shutdown?' dialog now lists a brief summary of the expected jobs. I could improve the descriptions further, so let me know what you think.

Tags across the program now sort (lexicographically) according to the new and improved human number sorting method.

Thanks to the same user who added the 'cookies.txt' import the other day, advanced users can now mass-set collected thumbnail groups as alternates in the duplicate system. Please be careful with this–the duplicate system is still not good at grouping together larger numbers of files. There's also a duplicate_media_set_alternate_collections media shortcut action to go with it.

I've added new debug tools at help->debug->report modes->file report mode and ->data actions->review threads. The review current network jobs panel under networking should also present some statuses better. If I've been talking with you about these tools, please try them re: your problems and we'll see if we can figure anything new out.
full list

- wrote gugs help
- gave url classes and parsers help a pass
- wrote e621 html gallery page example help
- wrote gelbooru html file page example help
- wrote artstation json file page example help
- wrote url class links help
- gallery logs for the gallery downloader and url downloader now support 'try again' and 'skip' right-click menu actions for gallery log entries (the try again can redo just the one page or also allow the search to continue), so if a gallery query fails for some reason, you can now try again/continue where it broke. subs/watcher/simple downloader work on more complicated gallery search logic, so their gallery logs will remain read-only for now
- all gallery log buttons now support a right-click menu to mass-export urls to png or clipboard. non read-only logs also support import
- fixed an issue with gallery searches that rely on both api url conversions and url class next gallery page urls (I think just artstation and tumblr by default) not generating the next page url correctly
- improved some misc gallery url processing logic
- fixed some issues with gallery url generators with invalid example urls causing problems opening the edit gug and gallery selector panels
- fixed an issue where you could only delete a gug if it was in an ngug, ha ha
- thanks to a different submission by prkc on the discord, collections now have a _right-click->set collections as groups of alternates_ duplicate action (note the duplicate menu only appears in advanced mode). the related shortcut action duplicate_media_set_alternate_collections is also added
- export phrases now support '\' ('/' in linux) in the path export phrase in order to create folders. you should also be able to do \[series]\ to create optional namespace folders. slashes in tags will still be replaced with _
- to stop the client sometimes doing laggy vacuum checks every maintenance cycle, vacuums that cannot occur due to limited disk space now still count as 'done' for the purposes of rescheduling
- added 'file report mode' to the help debug menu. this will spam popups as file and thumbnail actions are requested
- tightened up some network job status setting to help us debug the 'there are a ton of jobs in the network engine, but the three active on this domain seem stalled' issue
- wrote a simple 'review threads' panel under help->debug->data actions->review threads. I knocked it together in about ten minutes, and it's likely unstable as hell, but it's pretty neat!
- some instances where many file paths are copied quickly (exporting paths to clipboard and drag and drop) no longer do a safety check for file existence, so should be much faster. this particularly reduces startup lag for large file drag and drops!
- the 'would you like to do maintenance work in this shutdown?' dialog now lists a summary of what it thinks it'll be working on. I _could_ make this more detailed, so let me know how it works for you
- tags with numbers should now sort according to the new improved human sorting method–it no longer matters where the numbers are in the tag–as long as the text-and-number breaks line up with another tag's, they'll be compared part by part correctly
- fixed some human sorting code for unusual number characters like ⑨. they will be treated as text, not a number, for now
- misc fixes

next week

This easy-share system for the downloader is probably top priority, plus some last misc downloader work–cleaning up old code, adding tests, workflow and ui improvements, that sort of thing. With the downloader winding down, I am thinking about what to do next.
I would like to get the python 3 update in before the end of the year, so at present my plan is to spend a few weeks finishing the downloader, spend x weeks on a simple login manager, take a month off for the python 3 attempt, and then work on The Next Big Thing™.

I am pretty sick of the download engine and don't want to get bogged down in another quest to write an amazing login engine that can cook you breakfast, so I would like to know which sites you would like to log into with the new download engine, so I can draw an appropriate line on how complicated I should go. I know FurAffinity is highly requested, and I'll be converting the existing Pixiv and Hentai Foundry login stuff to something less hacky, but what else would you like? If the login is strange, how is it strange? If you know, what kind of cookies/headers/login process is involved? Would you rather I just make it easier to import cookies.txt from your browser?

I don't use sad panda, so can someone who does summarise precisely what cookies/domains/login order/whatever is going on there, and what you suggest would be the best way to automate it? All input on this whole subject would be appreciated.
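For the cookies.txt idea above, here is a minimal sketch of what an import could look like in Python; the stdlib's MozillaCookieJar already parses the Netscape cookies.txt format that browser export extensions produce, and the URL below is a placeholder:

import http.cookiejar
import requests

# cookies.txt is the Netscape format, which the stdlib parses natively
jar = http.cookiejar.MozillaCookieJar('cookies.txt')
jar.load(ignore_discard=True, ignore_expires=True)

session = requests.Session()
session.cookies.update(jar)  # carry the browser's login over to this session

# subsequent requests now present the imported cookies (placeholder url)
print(session.get('https://example-booru.net/account.php').status_code)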
Twitter account scraper would be a great addition, in my opinion.
>>9943 A Nico Seiga login would be nice, for basically the same reasons as pixiv. There are a couple of other booru-esque sites I had to join because they put "questionable" content behind a login/age check, but they kind of aren't worth the time.
(52.34 KB 1920x1038 hydrus-stuck.png)

Seems like the downloader is now stuck here for some reason. Nothing is "blocked" in the bandwidth UI [nor does anything seem to be at its limit], and toggling override/auto-override on all running tasks also doesn't help. Nor are the network traffic / subscriptions "paused". Is this a bug or am I just missing some setting?
About sad panda: as far as I know, your igneous cookie needs to have a string of random characters as its content instead of 'mystery'. A guide from /h/ explains in detail: https://pastebin.com/nh9RyrUf

>>9948 I was having some problems with Pixiv as well; I don't think the problem is on your end.
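For anyone scripting this rather than using a browser, setting that cookie by hand with Python's requests looks like the sketch below; the cookie name comes from the guide above, while the value and domain are placeholders to fill in from your own browser:

import requests

session = requests.Session()
# the guide says igneous must hold a real value, not the literal 'mystery';
# the value and domain here are placeholders
session.cookies.set('igneous', 'your-real-cookie-value', domain='.exhentai.org')
response = session.get('https://exhentai.org/')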
>>9949
>I was having some problems with Pixiv as well, I don't think the problem is on your end.
Good to know. I wonder if it has anything to do with the gallery downloader for pixiv, or an interaction between the gallery downloader and subscriptions. Subscriptions alone seem more reliable right now.
>>9949
>3. After at very minimum seven (7) days have gone by from the time you signed up for the account, you may attempt to go to exhentai.
>4. You may now have access to exhentai.
Why do people use this piece of shit site? Channers avoid every other resource that forces them to sign up or give up anonymity; why is this the exception? Isn't it practically on par with Nyaa in terms of being a cartel anyway?
>>9955 While I agree that it's not optimal, the other option would be removing loli/shota and otherwise controversial stuff completely. As for why most people use it: there's no decent alternative. Most scanners and scanlators use panda since it doesn't resize the original images, and other websites just mirror from panda. You can't even fucking upload to Nhentai, for example. Also, H@H takes some bandwidth load off the servers, and HentaiVerse helps with paying for them. But even on sad panda there is still some stuff that gets removed, which is fucking annoying.
>>9955 Because /a/, /d/, /h/, /e/, /u/, /y/, /jp/ all contain a significant vocal minority of deranged actual pedophiles who use loli/shota as a fix because it's hard to get real cp now. Use tsumino and nhentai, and just skip e-hentai/exhentai/any booru which willfully obscures content or, for instance, allows tagging only by series. Basically all of these sites exist to provide loli/shota smokescreens at the expense of usability for anything else, set up well in advance because everyone knew it would become illegal in the west once statesmen who understand the internet beyond AOL started getting into office. Anything decent will be added to the sites I mentioned/to better boorus eventually anyway.
>>9955
> Why do people use this piece of shit site?
Likely because of the content.
> Channers avoid every other resource that forces them to sign up or give up anonymity
As far as I can tell, that's not really the case.
>>9943
> This easy-share system for the downloader
I wonder what this will do… will there be an index of available downloaders inside Hydrus?
>>9957 Just about all of this seems wrong. You might as well claim people reading rape porn are deranged actual rapists who just find it harder to get actual rape videos now. A good part of those voicing support for an internet ban for this reason are people opposed to anything outside their own morals, which BTW likely won't include furry porn, or male-on-male porn, or even BDSM, or anything else less conventional either. It's fantasy bestiality, fantasy sodomy, fantasy rape, fantasy child rape. And porn with 60+ year olds may also randomly not be okay. And they band together with the people who would like to control the internet for other reasons; they definitely should not succeed. Hydrus would probably be seen as some "enabling" tool, because look at pixiv, e621 and the other sites it supports! Good thing most people currently aren't of that opinion.
Most custom boorus with logins seem to be working without too much trouble, but a nijie.info login & parser would be appreciated. Many artists fled there from Pixiv due to censorship concerns. Also would like to see iwara.tv and ecchi.iwara.tv support for mmd videos. The download javascript gives a time-sensitive link, making it more difficult to automate.

>>9955 Unless things have changed recently, there was never an "x points required" or waiting period to access exhentai. Going to exhentai before logging in gives you a cookie that prevents you from accessing it even after logging in to e-hentai, and people cannot figure out how to clear their browser's cookies even with instructions. Hence waiting it out always works. Source: made several accounts and always got in instantly.
>>9968 Iwara would be pretty good, but some videos are private, so a login would be needed as well.

>>9957
>significant vocal minority of deranged actual pedophiles who use loli/shota as a fix because it's hard to get real cp now.
Since when is 2D considered real people?
>>9968 >>9967 >>9970 I-I h-h-haven't updated s-since like v-v-version 311. I a-am scared… O.O
>>9943
>You can now put '\' (or '/' for Linux) in file export phrases! It is now easy to export well-tagged files to complicated folder structures, including those based on namespace/tag!
H-holy fuck, I asked about this a few months back. I was never in a place to do so, since I haven't even donated or even used the program to tag much, but holy shit, that's awesome. For my theoretical purpose it'd allow easier sharing with those who do not want to use the program. Amazing. There is literally only one thing I am worried about: I am kinda wary of the heavy-weight nature of this program. I mean, it has tons and tons of features, which is cool, but I am worried it may be too heavy to mount on some platforms, if you know what I mean… are my worries unfounded? Also: I once copied my "db" folder for a backup and then just put it back in a hydrus directory when I was done wiping the PC, and it had an error (that fixed itself) - what did I do wrong?
>>9972
> I am worried it may be too heavy to mount on some platforms, if you know what I mean
I don't know what you mean, but while Hydrus doesn't have ideal responsiveness in the GUI and has other small flaws, it's still reasonable on a current PC when managing a few million files combined with many millions of tags in the PTR and tag databases. Hydrus_dev obviously tried to avoid behaviour that would lead to bad performance at a home collection scale [given Python and Sqlite]. It is probably not fast enough to try to do a local mirror of all notable sites where you can find drawn images [hundreds of millions of images at least, plus duplicates, I think] or such a thing.
> it had an error (that fixed itself) - what did I do wrong?
Very hard to tell without knowing the error.
>>9973
>I don't know what you mean
Well, I am worried it'll be too slow for everyday quick use with few images, sub 5000 actually. I mean, even boot takes like 15 seconds. That is what I am worried about. Also I am worried that it won't run on most computers - but that is probably totally unfounded, as you point out correctly.
>Very hard to tell without knowing the error.
Wait… lemme see if I can dig it up… it's probably gone though.
>>9974 Yeah, sorry, no cap of the error anymore. Thanks for your answer still!
I've noticed that most of the *.booru.org sites (besides furry) are running the 0.1.1 Gelbooru beta, but all the parsers are for 0.2.0+. Was a 0.1.1 parser ever made, or did all this start afterwards?
>>9956 >>9957 I've seen no lack of loli content on e-hentai and nhentai. Then again, I'm not specifically searching for loli.

>>9965
>Likely because of the content.
It doesn't even have a vanilla tag, so it's useless to me.
>As far as I can tell, that's not really the case.
Tell that to /rs/ and /t/. Everybody used tpb, nyaa, bakabt, etc. partially because you didn't need to sign up. Before that it was rapidshare, megaupload, sendspace, etc. And before that there were all the p2p/usenet groups. Stuff like Demonoid was actually the exception; now we have (or had) stuff like waffles.fm, what.cd, animebytes, rutracker, etc. Even now people do plenty of sharing through mega, or via shitty blog links to questionable filehosts in foreign countries. It's more like crackdowns are the reason behind trying to make some of this stuff private. But anybody can navigate through rutracker's sign up, just like anybody can sign up to get past the panda. You can even swap that "x" for a "-" and probably get the exact same gallery on e-hentai. Obviously, forcing ratios for the old torrent stuff keeps shit alive too, so that's another goal, but these minute barriers do next to nothing beyond probably thwarting bot auto-takedown notices. None of this stops gelbooru and sankaku from hosting loli, btw. They just don't do doujins. And they're even easier to access. You're also ignoring the regular threads about switching off chrome to Brave or whatever the goto "private" browser is these days. And ublock vs. adnauseam. Then there are huge complaints about Win10 and its spying too. Privacy/Anonymity are practically ingrained into a channer by default. Never forget, the default name was always "Anonymous."
>>9957
>guaranteedreplies.txt
>>9967 >>9970
>replies.txt
>>9978 Yeah, it's not that there's a lack of it; it's that those are either hosted in a country which allows it or are willing to take down the content when the feds come. Whereas sadpanda is too busy pining for the fjords or whatever to ever be useful to anyone who uses it as a website and not a central life obsession.
Reposting >>9969 because apparently I'm a brainlet who can't figure out which thread is the latest…

>>9943 Hey dev, The Derpibooru integration works pretty well, but seems to be subject to the site's default filter, which hides anything not kid-friendly. You should request no filtering by passing `&filter_id=56027` in with each call to the search endpoints (assuming showing everything is a reasonable default). The filter_id parameter is mentioned briefly on the API docs page [0], and the ID is from the Everything filter on the filters page [1].

[0]: https://derpibooru.org/pages/api
[1]: https://derpibooru.org/filters
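To make that concrete, here is a hedged sketch of a search call with the Everything filter applied; the search.json endpoint and the 'search' response key are assumptions read off the linked API docs, not verified:

import requests

params = {
    'q': 'rainbow dash,straight',
    'page': 1,
    'filter_id': 56027,  # the site's 'Everything' filter, per the filters page
}
# the search.json endpoint and 'search' key are assumptions from the API docs
response = requests.get('https://derpibooru.org/search.json', params=params)
for image in response.json().get('search', []):
    print(image.get('id'), image.get('image'))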
>>9974
> sub 5000 actually
That's almost nothing.
> I mean, even boot takes like 15 seconds.
Check options -> speed and memory. Maybe most of that time is spent pre-caching the PTR or something? Granted, Hydrus isn't going to be a fast-opening program like sxiv or feh either way.
> but that is probably totally unfounded as you point out correctly
Yes, it's probably not a big issue except on ancient machines, and you maybe don't need to start an image collection on machines slower, or with less RAM, than a midrange smartphone today…?
>>9978
>It doesn't even have a vanilla tag so it's useless to me.
Vanilla sex? Different between cultures - don't even bother with that tag.
> p2p/usenet groups
You know, people usually needed to sign up for usenet/IRC somewhere.
> You can even swap that "x" for a "-" and probably get the exact same gallery on e-hentai.
You think they don't know?
> forcing ratios for the old torrent stuff
IIRC you don't even get anything on exhentai unless you publish torrents. Ratios aren't enforced.
> None of this stops gelbooru and sankaku from hosting loli, btw.
Different places for the hosts perhaps? Pixiv also likely won't have a problem unless Japan's legislation changes. Also, for hosts within the USA and elsewhere where there is more or less legal uncertainty, the respective hosts probably just have different willingness to take risks.
Tags in the manage tags window are being grouped by namespace even if you don't have it set to group by namespace, so you have to scroll to see character/creator tags.
I would be over the moon if the "next big thing" were a more advanced query engine that could handle OR searches. Not sure how high this is on other people's lists, but for me it would greatly improve the way I can interact with my collection. Whatever comes next though, I'm sure it will be great.
>>9946 Yeah, and despite twitter's crazy dynamic loading (and OAuth bullshit for their official api), this should be doable, a la:

https://twitter.com/i/profiles/show/Wunderweltworld/timeline/tweets?include_available_features=1&include_entities=1&max_position=1023689134106402817&reset_error_state=false

I'll have to play with it a bit more, though, and figure out how to propagate the new max_position to the next call–but twitter support would be neat enough that I'd be willing to write a hacky hook for it. Although someone was also telling me that twitter actually applies complete garbage compression to jpgs. There's no hope for the nip artists who solely post to twitter, but maybe we won't end up wanting to scrape it all that much.
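A rough sketch of that propagation loop, assuming the endpoint returns JSON with a min_position cursor and an items_html blob; both field names are guesses from poking the endpoint in a browser, not a documented API:

import requests

TIMELINE = ('https://twitter.com/i/profiles/show/{user}/timeline/tweets'
            '?include_available_features=1&include_entities=1'
            '&reset_error_state=false')

def scrape_timeline(user, max_pages=10):
    url = TIMELINE.format(user=user)
    position = None
    for _ in range(max_pages):
        page = url if position is None else url + '&max_position=' + str(position)
        data = requests.get(page).json()
        yield data.get('items_html', '')  # raw tweet html, to be parsed for media urls
        # assumption: this response's min_position is the next call's max_position
        position = data.get('min_position')
        if not position or not data.get('has_more_items'):
            break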
>>9991 from the various utilities I've tried over the years, the easiest one is jdownloader which just adds :orig to all twitter images and grabs what pops up. e.g. https://pbs.twimg.com/media/FGSFDS.jpg:orig There's an awful lot of quality content on Twitter that's nowhere else, even if it is compressed to shit, so there's not much else to do sadly.
>>9948 Thank you–this is a bug I tried to pin down better this week with the updated status texts here. Is your client very, very busy, like with many gallery importers or other 'active' pages? It looks like you have a ton of queries going on–could it be bigger than 200? I believe this problem is due to the worker threadpool getting overwhelmed with so many network jobs that even when the front few are ready to go, there isn't a thread available to work them, so the whole system deadlocks. If this is still true, or if it happens again, please check the new thread debug at help->debug->data actions->review threads. If there are a fuckton of CallToThread objects, I think we've figured out our problem. If the list isn't too huge, please take a screenshot. For now, a client restart should/may joggle your queue and resolve the deadlock.
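The failure mode described (every worker thread occupied by a stalled job, so jobs that are ready never get a thread) can be reproduced in miniature. This is a toy sketch, not hydrus code:

import threading
import time
from concurrent.futures import ThreadPoolExecutor

stall = threading.Event()

def stalled_job():
    stall.wait()  # stands in for a network job waiting on bandwidth

def ready_job():
    print('finally got a thread')

pool = ThreadPoolExecutor(max_workers=4)
for _ in range(4):
    pool.submit(stalled_job)  # every worker is now tied up
pool.submit(ready_job)  # ready to go, but starved: no thread is free
time.sleep(2)
stall.set()  # release the stalled jobs; only now does ready_job run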
>>9992 Thank you for this info. I guess they have a secret cross-domain IP-based or iframe-trickery-dickery-doo-based 7-day timer their end that permits the igneous establishment. I think a 'do this login, then wait 7 days before attempting this login' is probably too idiosyncratic and crazy for me to support in a first and simple login manager, so I'll leave it to copying cookies across from the browser.
>>9966 Yeah, it'll be network->downloaders->review and import downloaders. It'll list the different downloader objects as a simplified list and let you drop encoded png files on it to add more. It'll do the work of the dialogs under downloader definitions in a much simpler workflow. Having any sort of remote lookup and "User X has released version 1.2 of gelbooru parser–download?" kind of auto-updating will not be in this initial version. I expect many new easy-share pngs will end up here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/NEW%20Download%20System
shit, >>9994 was meant for >>9949

>>9992 Thanks. Yeah, I put :orig on the image url as well. Drag and drop import for image tweet URLs is in hydrus right now, but I was expecting that figuring out an easy open 'RSS feed' of a user's tweet history would be a pain. But I loaded up the developer inspection shit in firefox, did some scrolling, and discovered a pretty simple bit of json they use for fetching it. If I can tie that into my new gallery pipeline, I should be able to add a twitter username lookup. We'll have to see about including/discarding retweets and all that when I get into the details.
>>9968 My initial feeling is that I should only support single page logins to begin with. Just simple 'enter user/pass in this FORM, and POST it' stuff that then checks cookies out the other end. Most of the boorus are going to be like that, right? Maybe trying to chase even simple javascript whoop-de-doo is going to be a rabbit hole. The HF login script is more complicated (it applies the search filters after anon click-through login to allow futas etc…), but it could stay hardcoded for now. If the HF login is just a simple user/pass form, I could even replace the complicated anon filter stuff with that entirely and say 'if you want to download from HF, get a throwaway login and set up filters in your browser to your liking', although that may be needlessly inconvenient.
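In code terms, that single-page pattern is about as simple as the sketch below; the form field names, endpoint path, and expected session cookie are placeholders that vary per booru:

import requests

def simple_booru_login(base_url, username, password):
    session = requests.Session()
    form = {'user': username, 'pass': password}  # placeholder field names
    session.post(base_url + '/user/authenticate', data=form)  # placeholder path
    # the 'did it work?' step: check the cookies out the other end
    if 'session_id' not in session.cookies:  # placeholder cookie name
        raise RuntimeError('login failed: expected session cookie not set')
    return session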
>>9972 >>9973 >>9974 >>9975 >>9982 Thanks. Yeah, clients are so far ok up to a few million files. The users I know with 3M+ are having laggy ui issues, but that is usually because they also have persistent 500 import queues/800k download items in their front session. I have 1.1M in my IRL client on my (breddy good) laptop and it is fine. There are several things you can do to speed up a client that starts to get a lot of tags/files (like migrating to an SSD/HDD combo), which you'll naturally stumble across as you use the program more. And if you find things run slow, let me know and I'll see what I can do!

I don't recommend hydrus for anyone with a real old computer. It is a greedy python program, so it'll want to sit on a couple hundred MB just to get loaded, and if you toss a load of 60fps 1080p webms and 4000x4000 pngs at it, it won't stop and think before it starts eating ram and burning CPU time. 4GB ram bare minimum, 8GB at least if you want to sync with the PTR. SSD is highly preferable for the db files.

If you chase down the 'client - yyyy-mm.log' files in install_dir/db, you might be able to find the error you got–let me know if you find it, but if your client is now working ok, the error was probably fixed.
>>9979 >>9976 Yeah, if you are comfortable with manage parsers and the rest of that ui, try this one: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/blob/master/NEW%20Download%20System/Parsers/gelbooru%200.1.11%20file%20page%20parser.txt Afaik, the gelb 2.5.x gallery parser works for 0.1.11 gallery pages. I'll probably fold the file parser into a future update. Otherwise wait for the easy downloader import.
>>9981 Thank you for this information. I don't suppose you know an equivalent for the html URLs, do you? Atm, the derpi downloader just walks through pages in this format: https://derpibooru.org/search/index?page=1&q=rainbow+dash,straight And putting filter_id on the end does not appear to change the output: https://derpibooru.org/search/index?page=1&q=rainbow+dash,straight&filter_id=56027 I (or some other users) can write a new parser for it, but if there's a diff param I am missing to load an html search page with a 'temp' filter, then I can just roll out a different GUG and it'll be a much simpler job.
>>9987 Thank you for this report. It is happening elsewhere as well. I'll have it fixed for next week.
>>9999 holy shit this worked like a charm, thank you so much carry on you fantastic autist
On shutdown, the client never asks to do maintenance.

2018/09/15 06:38:42: Traceback (most recent call last):
  File "include\ClientController.py", line 425, in Exit
    work_to_do = self.GetIdleShutdownWorkDue( time_to_stop )
  File "include\ClientController.py", line 530, in GetIdleShutdownWorkDue
    work_to_do.append( service.GetName + ' repository processing' )
TypeError: unsupported operand type(s) for +: 'instancemethod' and 'str'
2018/09/15 06:38:44: shutting down controller…
2018/09/15 06:38:44: hydrus client shut down
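The traceback pinpoints the bug: GetName is concatenated as a bound method instead of being called. Judging from the dev's reply below, the fix is presumably just the missing parentheses:

# broken: appends the bound method object itself to the string
work_to_do.append( service.GetName + ' repository processing' )

# fixed: call the method to get the name string first
work_to_do.append( service.GetName() + ' repository processing' )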
>>9946 Seconding this.
(189.03 KB 1557x1201 client_2018-09-16_06-04-49.png)

So I figured I would post this here too, as it may help someone. About 21 days ago we had a tornado that took power out for around a day; I didn't start the client up for about 1.5 days, and didn't notice the issue for about another few hours. The client bogged the fuck down because of my massive thread watcher list. If this happens, pausing every thread watcher solves the problem, and slowly reintegrating them gets it to work too. Below is the last email I sent to the dev, with a little cut out:

~~~~~

Ok, program is fixed and more or less set up the exact same way as before. Now, what I think of this.

First off, it was the network connections. Being offline for a day and a half made everything want to check at the exact same time, so 800+ things jammed the program to the point shit broke on the network side. Two, there are 2 levels of breaking on the network side: 1 is a domain decides to fucking quit, which I observed when I was importing smaller amounts; and then there's the one where you went further and everything breaks and it can't even load images anymore. I'll make no assumptions here, as I have no idea how any of the programming side works. However, a flush for network activity, a hard stop for network activity like the stop import button that persists through restarts, would likely help, possibly even recover sessions that have borked this way.

NOW, due to how many threads I had being watched and how many of them 404'ed in the ~20 days the program wasn't working, I either have to manually select each one of the 404 threads and pull images from them, which I'm not looking forward to doing, or, if there is some way to export a list, that would be helpful. Something like pic related, but for the watcher list instead of just for individual files. Honestly, for my purposes, so long as it gives me the thread links it would speed up the process immensely. If this is an option that could happen, let me know, because I have screenshots of each of the watcher lists, so I can wait to get to them.

I also recommend a pause all button for the thread watchers, along with some kind of fail safe that pauses them all if a threshold is met. I don't know the limits of this, but it's somewhere around 500 accesses at once.

On the network side, perhaps a queue to get onto the queue? I mean, the network thing cycles through getting everything to go every few seconds; how about, instead of putting every single network request in there to fight it out, it just put in the ones that could go, and held the rest back till it would be possible again, similar to a simple downloader where instead of everyone trying at once, just one at a time is dealt with.
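The 'queue to get onto the queue' idea amounts to a gatekeeper in front of the worker pool: jobs wait in a cheap holding list and only get handed a thread once they could actually run. A toy sketch, where is_ready() stands in for a hypothetical bandwidth/domain check:

import time
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def dispatch_forever(waiting: deque, pool: ThreadPoolExecutor):
    # only jobs that can actually proceed ever reach the pool, so stalled
    # jobs no longer tie up every worker thread
    while True:
        for _ in range(len(waiting)):
            job = waiting.popleft()
            if job.is_ready():  # hypothetical bandwidth/domain check
                pool.submit(job.work)
            else:
                waiting.append(job)  # not ready: wait without holding a thread
        time.sleep(0.25)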
>>10005 Christ, I'm sorry for this. It is a stupid error on my end. It is fixed for v323.
>>9988 I'd love OR, but it might be too slow depending on how the database is set up.
>>10002 I'm pretty sure there's some shenanigans with session cookies going on here. If you open your first link, go choose a filter, then navigate to the second link, your chosen filter seems to override the parameter. There may be other ways to trigger it too, don't have much time to test. Open the first link in an incognito session, close that session, then open the second link in a new incognito session. Should show different results. FWIW, I imported my account's favorites, and the suggested change worked for fetching everything.
Having the pixiv importer automatically add artist ID tags would be awesome. At the moment I have separate subscriptions registered for different artists and add "pixiv artist:######" as an additional tag. However, it would save much work to be able to just throw all artists into the same subscription and have it tag the ID automatically.
>>10009 could you change the maintenance popup to only come every week or every nth added item in the database? It is hugely annoying to constantly say no.
>>10013 There's still the issue of matching ids to artist names too
(17.08 KB 325x956 scroll1.png)

(59.81 KB 1435x1042 scroll2.png)

So I kept updating even as I stopped using Hydrus for a bit, and when I came back I noticed scroll bars where I don't want them. There aren't any elements I can resize to make them go away. I can't make the preview box any smaller, but I never needed to scroll in any of these spots. The GUI settings are too advanced for me, but I'm not even seeing anything that would help.
>>10015 Could tag parents be used for that? Since an artist might have multiple names, they could all have the pixiv artist ID as parent. I guess if multiple artists used the same name, they could have multiple pixiv artist IDs set as parents. I guess the same could also work for other sites that use unique user IDs, e.g. Nijie, Nico Seiga, etc.
>>9943 Got some questions, Dev.

1. Does Hydrus strip exif info on importing files to the db?
2. Does it keep the original created/modified/accessed dates for the files themselves once imported, or does it generate new ones at import time?
3. You've mentioned before about there (eventually) being a way to keep the original file dates as tags when importing. What's the best way to preserve this info so I can import my files now and apply that info later? Converting chan filenames (I keep the original filenames as tags) to their unix time might be do-able, and then figuring out a way to take the info from the filenames and apply it as dates. But that would leave out non-chan stuff. Should I simply attempt to add them as tags now and hope they'll work with that eventual system/format? Some script to append them to the filenames and a regex to catch them, perhaps? That would mess with my original filename tags, however.

>>10018 I suppose that's a solution. I think it would be better if there were a way for the mass pixiv subscription to add tags on import for each id. That, or if each id opened to a different listing like the old way did, instead of one giant pixiv page, then you could manually figure out who did what rather easily. Maybe the pixiv sub could append the id as a tag automatically as a special case, just to keep track. Then you could parent/sibling the id to the artist names and it would take care of itself.
I'm trying to configure zoom on mouse wheel for one handed viewing mode. Can anybody please help with this? I removed the view next/previous shortcuts that use the mouse wheel by default. Then added the mouse wheel for zoom in/out and applied all changes. It still uses the wheel for browsing though. I've read the part that ctrl+mouse wheel is hard coded for zoom, but that shouldn't prevent me from adding an additional shortcut for the same function. Using the mouse wheel for browsing isn't mentioned as hardcoded, but apparently it is. Can we please get the hardcoded media viewer shortcuts removed anyway? I fail to see the need for these. The browser ones are more understandable, but making them user-editable wouldn't seem to prevent it from working correctly.
>>10012 Thanks. I tried it myself in hydrus and managed to get lewds, so I've rolled a test of it into this week's release. Let me know how it works for you. If it turns out persistent usage locks in a filter via sessions or whatever, I'll just write an api parser. Thanks again for your help.
>>10014 Thanks, this is a good idea. There's a bit of this already (where it only counts as needing work if there are like >100 similar file branches to rebalance or whatever) but that wouldn't kick in if you say no. I'll make a note of this and see if I can figure out a couple of options. Maybe 'only give me a shutdown yes/no once per x days'.
>>10016 Thank you for this report. Yeah, some of the new sizing system is borked due to some min sizes. I have a plan to write some of my own sizers to more dynamically size this stuff while still permitting min heights for the controls that need it. At the moment, I am using some stock sizers, and they don't calculate combinations of static and dynamic heights in a conservative way.
>>10013 >>10018 >>10019 I'd say associating 'pixiv id:123456' with 'creator:ching chong' is probably a job best solved by a future iteration of the tag siblings system. When tags have more metadata and you can right-click on them and get an automatically formatted wiki entry on them with known synonyms and all that a bit better than the current system, and if users could choose how to display all that, I think then you'd want to start putting in the effort. You could do it with parents now, which would state both tags, but since the tags are true synonyms, the direction is probably siblings. I don't know though. The problem of 'artist "x" is called "xdraws" on this other booru' is a persistent issue, not yet neatly solved in hydrus.
>>10019

1 - No. Other than bmps (which are automatically converted to png on import), hydrus should keep all files exactly the same, byte for byte.

2 - Yeah, it should do these days. At some point, this info should be cached and searchable. Accessed will change any time you look at it, obviously.

3 - When I add that cache, I'll probably add it as a new kind of metadata rather than explicit tags, which don't store specific timestamps efficiently, and make it searchable with a new system predicate–like 'system:time imported', but 'system:file creation/modified time'. If you want to search this info, just keep importing as normal–since the file is preserved through import, no information is being lost, and when I make the cache, I'll retroactively fill it with existing file data.
>>10023 Thank you for this report. I am still moving all my old shortcut code to the new system. Mouse shortcuts are not well supported at all yet–the code is just not yet in to handle it. I hope to have more time to put into this as the downloader overhaul comes to a close.
>>10029 Thanks. That will be awesome.

