/hydrus/ - Hydrus Network

Archive for bug reports, feature requests, and other discussion for the hydrus network.


(25.39 KB 480x360 80uElI8gtD8.jpg)

Version 322 hydrus_dev 09/12/2018 (Wed) 22:42:38 Id: a3742c No. 9943
https://www.youtube.com/watch?v=80uElI8gtD8

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.OS.X.-.App.dmg
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v322/Hydrus.Network.322.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v322.tar.gz

I had a good week. I took it a little easy after pushing to get over the last big overhaul hump last week. There are some fixes, new help, and interesting misc work.

downloader help and fixes

I am thankful and relieved that last week's big shift seems to have basically gone well. There were a couple of bugs–artstation and tumblr were only searching the first page of their galleries, and the new GUG object had some unclean ui in places–but these are fixed today. Working with that code without having to deal with the old legacy system has been great.

I've also significantly extended and polished the help here:

https://hydrusnetwork.github.io/hydrus/help/downloader_intro.html

If you are interested in writing your own downloader for hydrus, please check it out. It is not short or trivial–it is a powerful system!–but I've tried to make it fairly comprehensive. Let me know if anything is too confusing or too confusingly worded. The only bit left to do is 'how to easy-share a downloader', because I haven't written that system yet!

And I've added some right-click->try again/skip options to the gallery logs for Gallery and URL Downloaders. This is semi-experimental, so if you are an advanced downloader and I've been talking to you about 'resuming' a failed download, please give it a go and let me know how it works IRL. Subscriptions and Watchers work on different gallery sync logic, so those downloaders' gallery logs remain 'read-only' for now.

misc

You can now put '\' (or '/' on Linux) in file export phrases! It is now easy to export well-tagged files to complicated folder structures, including those based on namespace/tag!

File drag and drops started from thumbnails in hydrus now begin with significantly less lag, particularly for very large drops.

The 'would you like to do maintenance on shutdown?' dialog now lists a brief summary of the expected jobs. I could improve the descriptions further, so let me know what you think.

Tags across the program now sort according to the new and improved human number sorting method.

Thanks to the same user who added the 'cookies.txt' import the other day, advanced users can now mass-set collected thumbnail groups as alternates in the duplicate system. Please be careful with this–the duplicate system is still not good at grouping together larger numbers of files. There's also a duplicate_media_set_alternate_collections media shortcut action to go with it.

I've added new debug tools at help->debug->report modes->file report mode and ->data actions->review threads. The review current network jobs panel under networking should also present some statuses better. If I've been talking with you about these tools, please try them re: your problems and we'll see if we can figure anything new out.

full list

- wrote gugs help
- gave url classes and parsers help a pass
- wrote e621 html gallery page example help
- wrote gelbooru html file page example help
- wrote artstation json file page example help
- wrote url class links help
- gallery logs for the gallery downloader and url downloader now support 'try again' and 'skip' right-click menu options for gallery log entries ('try again' allows redoing just the one page, or also allowing the search to continue). so, if a gallery query fails for some reason, you can now try again/continue where it broke. subs/watcher/simple downloader work on more complicated gallery search logic, so their gallery logs will remain read-only for now
- all gallery log buttons now support a right-click menu to mass-export urls to png or clipboard. non-read-only logs also support import
- fixed an issue with gallery searches that rely on both api url conversions and url class next gallery page urls (I think just artstation and tumblr by default) not generating the next page url correctly
- improved some misc gallery url processing logic
- fixed some issues with gallery url generators with invalid example urls causing problems opening the edit gug and gallery selector panels
- fixed an issue where you could only delete a gug if it was in an ngug, ha ha
- thanks to a different submission by prkc on the discord, collections now have a right-click->set collections as groups of alternates duplicate action (note the duplicate menu only appears in advanced mode). the related shortcut action duplicate_media_set_alternate_collections is also added
- export phrases now support '\' ('/' on linux) in the path export phrase in order to create folders. you should also be able to do \[series]\ to create optional namespace folders. slashes in tags will still be replaced with _
- to stop the client sometimes doing laggy vacuum checks every maintenance cycle, vacuums that cannot occur due to limited disk space now still count as 'done' for the purposes of rescheduling
- added 'file report mode' to the help debug menu. this will spam popups as file and thumbnail actions are requested
- tightened up some network job status setting to help us debug the 'there are a ton of jobs in the network engine, but the three active on this domain seem stalled' issue
- wrote a simple 'review threads' panel under help->debug->data actions->review threads. I knocked it together in about ten minutes, and it's likely unstable as hell, but it's pretty neat!
- some instances where many file paths are copied quickly (exporting paths to clipboard and drag and drop) no longer do a safety check for file existence, so should be much faster to go. this particularly reduces startup lag for large file drag and drops!
- the 'would you like to do maintenance work in this shutdown?' dialog now lists a summary of what it thinks it'll be working on. I could make this more detailed, so let me know how it works for you
- tags with numbers should now sort according to the new improved human sorting method–it shouldn't matter where the numbers are in the tag–as long as the text-and-number breaks line up with those of another tag, the tags will be compared part by part correctly
- fixed some human sorting code for unusual number characters like ⑨. they will be treated as text, not a number, for now
- misc fixes

next week

The easy-share system for the downloader is probably top priority, plus some last misc downloader work–cleaning up old code, adding tests, workflow and ui improvements, that sort of thing.

With the downloader winding down, I am thinking about what to do next. I would like to get the python 3 update in before the end of the year, so at present my plan is to spend a few weeks finishing the downloader, spend x weeks on a simple login manager, take a month off for the python 3 attempt, and then work on The Next Big Thing™.

I am pretty sick of the download engine and don't want to get bogged down in another quest to write an amazing login engine that can cook you breakfast, so I would like to know which sites you would like to log into with the new download engine, so I can draw an appropriate line on how complicated I should go. I know FurAffinity is highly requested, and I'll be converting the existing Pixiv and Hentai Foundry login stuff to something less hacky, but what else would you like? If a login is strange, how is it strange? If you know, what kind of cookies/headers/login process is involved? Would you rather I just make it easier to import cookies.txt from your browser?

I don't use sad panda, so can someone who does summarise precisely what cookies/domains/login order/whatever is going on there, and what you suggest would be the best way to automate it? All input on this whole subject would be appreciated.
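For the curious, the 'text-and-number breaks' comparison in the tag sorting item above is the classic natural-sort trick: split each tag into alternating text and digit runs, and compare the runs in turn, numbers numerically. A minimal sketch in python–not hydrus's actual code:

```python
import re

def human_sort_key(tag):
    # re.split with a capturing group keeps the delimiters, so we get
    # alternating runs: even indices are text, odd indices are digit runs.
    # aligned parts therefore always share a type and compare cleanly.
    parts = re.split(r'(\d+)', tag)
    # characters like ⑨ are not matched by \d+ and are not isdecimal(),
    # so they stay text, matching the changelog behaviour
    return [int(p) if p.isdecimal() else p for p in parts]

tags = ['page 10', 'page 9', 'page 100']
print(sorted(tags, key=human_sort_key))  # ['page 9', 'page 10', 'page 100']
```

Plain lexicographic sort would put 'page 10' before 'page 9'; the key above compares the digit runs as ints, so the numbers can sit anywhere in the tag.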
>>9974
>sub 5000 actually
That's almost nothing.
>I mean, even boot takes like 15 seconds.
Check options -> speed and memory. Maybe most of that time is spent pre-caching the PTR or something? Granted, hydrus isn't going to be a fast-opening program like sxiv or feh either way.
>but that is probably totally unfounded as you point out correctly
Yes, it's probably not a big issue except on ancient machines, and you maybe don't need to start an image collection on machines slower than, or with less RAM than, a midrange smartphone today…?
>>9978
>It doesn't even have a vanilla tag so it's useless to me.
Vanilla sex? It differs between cultures - don't even bother with that tag.
>p2p/usenet groups
You know, people usually needed to sign up for usenet/IRC somewhere.
>You can even swap that "x" for a "-" and probably get the exact same gallery on e-hentai.
You think they don't know?
>forcing ratios for the old torrent stuff
IIRC you don't even get anything on exhentai unless you publish torrents. Ratios aren't enforced.
>None of this stops gelbooru and sankaku from hosting loli, btw.
Different hosting locations, perhaps? Pixiv also likely won't have a problem unless Japan's legislation changes. For hosts within the USA and elsewhere, where there is more or less legal uncertainty, the respective hosts probably just have different willingness to take risks.
Tags in the manage tags window are grouped by namespace even if you don't have grouping by namespace set, so you have to scroll to see character/creator tags.
I would be over the moon if the "next big thing" were a more advanced query engine that could handle OR searches. Not sure how high this is on other people's lists, but for me it would greatly improve the way I can interact with my collection. Whatever comes next though, I'm sure it will be great.
>>9946
Yeah, and despite twitter's crazy dynamic loading (and OAuth bullshit for their official api), this should be doable, a la:

https://twitter.com/i/profiles/show/Wunderweltworld/timeline/tweets?include_available_features=1&include_entities=1&max_position=1023689134106402817&reset_error_state=false

I'll have to play with it a bit more and figure out how to propagate the new max_position to the next call, but twitter support would be neat enough that I'd be willing to write a hacky hook for it. Although someone was also telling me that twitter applies absolutely garbage compression to jpgs. There's no hope for the nip artists who solely post to twitter, but maybe we won't end up wanting to scrape it all that much.
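Propagating max_position as described is just a cursor loop: iirc the old timeline endpoint returned json with items_html, has_more_items, and a min_position cursor that feeds the next request's max_position. A rough sketch with the fetcher injected so it can be tried without hitting twitter–the field names are from memory of the 2018 endpoint and may be wrong or stale:

```python
TIMELINE = ('https://twitter.com/i/profiles/show/{user}/timeline/tweets'
            '?include_available_features=1&include_entities=1'
            '&max_position={cursor}&reset_error_state=false')

def walk_timeline(fetch, user, start_cursor, max_pages=10):
    """Walk the timeline endpoint, feeding each response's min_position
    into the next request's max_position. `fetch(url)` returns the decoded
    json dict; it is injected so the loop can be tested offline."""
    cursor = start_cursor
    pages = []
    for _ in range(max_pages):
        data = fetch(TIMELINE.format(user=user, cursor=cursor))
        pages.append(data.get('items_html', ''))
        if not data.get('has_more_items'):
            break
        # the next page's max_position is this page's min_position
        cursor = data['min_position']
    return pages
```

With a real fetcher you'd wrap urllib.request.urlopen and json.loads; the sketch leaves that out to stay testable offline.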
>>9991
From the various utilities I've tried over the years, the easiest one is jdownloader, which just adds :orig to all twitter images and grabs what pops up, e.g. https://pbs.twimg.com/media/FGSFDS.jpg:orig
There's an awful lot of quality content on Twitter that's nowhere else, even if it is compressed to shit, so there's not much else to do, sadly.
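The :orig trick above can be done mechanically–something like this, where the list of size suffixes is my assumption of twitter's variants:

```python
def orig_url(twimg_url):
    """Ask pbs.twimg.com for the original-size image by appending :orig,
    replacing any existing size suffix like :large or :small."""
    base, _sep, suffix = twimg_url.rpartition(':')
    # rpartition splits on the last ':'; if that ':' belongs to 'https:',
    # there was no size suffix, so keep the whole url as the base
    if suffix in ('orig', 'large', 'medium', 'small', 'thumb'):
        twimg_url = base
    return twimg_url + ':orig'

print(orig_url('https://pbs.twimg.com/media/FGSFDS.jpg'))
# https://pbs.twimg.com/media/FGSFDS.jpg:orig
```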
>>9948
Thank you–this is a bug I tried to pin down better this week with the updated status texts. Is your client very, very busy, like with many gallery importers or other 'active' pages? It looks like you have a ton of queries going on–could it be more than 200? I believe this problem is due to the worker threadpool getting overwhelmed with so many network jobs that even when the front few are ready to go, there isn't a thread available to work them, so the whole system deadlocks. If this is still true, or if it happens again, please check the new thread debug at help->debug->data actions->review threads. If there are a fuckton of CallToThread objects, I think we've figured out our problem. If the list isn't too huge, please take a screenshot. For now, a client restart should/may joggle your queue and resolve the deadlock.
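The deadlock described above–queued jobs swamping the pool so the few that are ready can't get a thread–is typically avoided by bounding how much work may be outstanding at once. A rough sketch of the general pattern with a semaphore around a stdlib pool, not hydrus's actual CallToThread code:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class BoundedExecutor:
    """Blocks submit() once `bound` jobs are outstanding, so waiting jobs
    can't pile up in the queue and starve the ones that are ready to run."""

    def __init__(self, max_workers=4, bound=16):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._slots = threading.BoundedSemaphore(bound)

    def submit(self, fn, *args):
        self._slots.acquire()  # wait here if the queue is already full
        future = self._pool.submit(fn, *args)
        # free the slot as soon as the job finishes, pass or fail
        future.add_done_callback(lambda _f: self._slots.release())
        return future

    def shutdown(self):
        self._pool.shutdown(wait=True)
```

The caller feels backpressure at submit() instead of the engine silently accumulating hundreds of queued jobs.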
>>9992
Thank you for this info. I guess they have a secret cross-domain IP-based or iframe-trickery-dickery-doo-based 7-day timer on their end that permits the igneous establishment. I think a 'do this login, then wait 7 days before attempting this login' flow is probably too idiosyncratic and crazy for me to support in a first and simple login manager, so I'll leave it to copying cookies across from the browser.
>>9966 Yeah, it'll be network->downloaders->review and import downloaders. It'll list the different downloader objects as a simplified list and let you drop encoded png files on it to add more. It'll do the work of the dialogs under downloader definitions in a much simpler workflow. Having any sort of remote lookup and "User X has released version 1.2 of gelbooru parser–download?" kind of auto-updating will not be in this initial version. I expect many new easy-share pngs will end up here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/NEW%20Download%20System
shit, >>9994 meant for >>9949

>>9992
Thanks. Yeah, I put :orig on the image url as well. Drag and drop import for image tweet URLs is in hydrus right now, but I was expecting figuring out an easy open 'RSS feed' of a user's tweet history to be a pain. But I loaded up the developer inspection shit in firefox, did some scrolling, and discovered a pretty simple bit of json they use for fetching it. If I can tie that into my new gallery pipeline, I should be able to add a twitter username lookup. We'll have to see about including/discarding retweets and all that when I get into the details.
>>9968 My initial feeling is that I should only support single page logins to begin with. Just simple 'enter user/pass in this FORM, and POST it' stuff that then checks cookies out the other end. Most of the boorus are going to be like that, right? Maybe trying to chase even simple javascript whoop-de-doo is going to be a rabbit hole. The HF login script is more complicated (it applies the search filters after anon click-through login to allow futas etc…), but it could stay hardcoded for now. If the HF login is just a simple user/pass form, I could even replace the complicated anon filter stuff with that entirely and say 'if you want to download from HF, get a throwaway login and set up filters in your browser to your liking', although that may be needlessly inconvenient.
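The 'enter user/pass in this FORM, POST it, and check cookies out the other end' flow above is small enough to sketch. The request function is injected so the flow can be tested offline, and the field/cookie names are placeholders–every site names them differently:

```python
from urllib.parse import urlencode

def form_login(post, login_url, username, password,
               user_field='user', pass_field='pass', success_cookie='session'):
    """Single-page login: POST the credentials as a url-encoded form body,
    then check whether the expected cookie came back. `post(url, body)`
    performs the request and returns the response cookies as a dict."""
    body = urlencode({user_field: username, pass_field: password})
    cookies = post(login_url, body)
    return success_cookie in cookies
```

A real implementation would also follow redirects and keep the cookie jar for later downloads, but the success test–did the expected session cookie appear–is the core of it.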
>>9972 >>9973 >>9974 >>9975 >>9982
Thanks. Yeah, clients are so far ok up to a few million files. The users I know with 3M+ are having laggy ui issues, but that is usually because they also have persistent 500+ import queues/800k download items in their front session. I have 1.1M in my IRL client on my (breddy good) laptop and it is fine. There are several things you can do to speed up a client that starts to get a lot of tags/files (like migrating to an SSD/HDD combo), which you'll naturally stumble across as you use the program more. And if you find things run slow, let me know and I'll see what I can do!

I don't recommend hydrus for anyone with a really old computer. It is a greedy python program, so it'll want to sit on a couple hundred MB just to get loaded, and if you toss a load of 60fps 1080p webms and 4000x4000 pngs at it, it won't stop and think before it starts eating ram and burning CPU time. 4GB ram bare minimum, 8GB at least if you want to sync with the PTR. An SSD is highly preferable for the db files.

If you chase down the 'client - yyyy-mm.log' files in install_dir/db, you might be able to find the error you got–let me know if you find it, but if your client is now working ok, the error was probably fixed.
>>9979 >>9976 Yeah, if you are comfortable with manage parsers and the rest of that ui, try this one: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/blob/master/NEW%20Download%20System/Parsers/gelbooru%200.1.11%20file%20page%20parser.txt Afaik, the gelb 2.5.x gallery parser works for 0.1.11 gallery pages. I'll probably fold the file parser into a future update. Otherwise wait for the easy downloader import.
>>9981 Thank you for this information. I don't suppose you know an equivalent for the html URLs, do you? Atm, the derpi downloader just walks through pages in this format: https://derpibooru.org/search/index?page=1&q=rainbow+dash,straight And putting filter_id on the end does not appear to change the output: https://derpibooru.org/search/index?page=1&q=rainbow+dash,straight&filter_id=56027 I (or some other users) can write a new parser for it, but if there's a diff param I am missing to load an html search page with a 'temp' filter, then I can just roll out a different GUG and it'll be a much simpler job.
>>9987 Thank you for this report. It is happening elsewhere as well. I'll have it fixed for next week.
>>9999 holy shit this worked like a charm, thank you so much carry on you fantastic autist
On shutdown, the client never asks to do maintenance.

2018/09/15 06:38:42: Traceback (most recent call last):
  File "include\ClientController.py", line 425, in Exit
    work_to_do = self.GetIdleShutdownWorkDue( time_to_stop )
  File "include\ClientController.py", line 530, in GetIdleShutdownWorkDue
    work_to_do.append( service.GetName + ' repository processing' )
TypeError: unsupported operand type(s) for +: 'instancemethod' and 'str'
2018/09/15 06:38:44: shutting down controller…
2018/09/15 06:38:44: hydrus client shut down
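For anyone reading along: the traceback above is a missing pair of parentheses–service.GetName is the bound method object itself, not the name string, and method + str raises exactly that TypeError. A toy reproduction (Service here is a stand-in, not hydrus's real class, and the one-character fix is presumably what shipped later):

```python
class Service:
    """Stand-in for hydrus's service object, just to reproduce the traceback."""
    def __init__(self, name):
        self._name = name

    def GetName(self):
        return self._name

service = Service('PTR')
work_to_do = []

try:
    # the v322 bug: GetName without parentheses is the bound method,
    # so concatenating it with a str raises TypeError
    work_to_do.append(service.GetName + ' repository processing')
except TypeError:
    pass

# the fix: actually call the method
work_to_do.append(service.GetName() + ' repository processing')
print(work_to_do)  # ['PTR repository processing']
```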
>>9946 Seconding this.
(189.03 KB 1557x1201 client_2018-09-16_06-04-49.png)

So I figured I would post this here too, as it may help someone. About 21 days ago we had a tornado that took power out for around a day. I didn't start the client up for about 1.5 days, and didn't notice the issue for another few hours. The client bogged the fuck down because of my massive thread watcher list. If this happens, pausing every thread watcher solves the problem, and slowly reintegrating them gets it working too. Below is the last email I sent to the dev with a little cut out:

~~~~~

Ok, the program is fixed and more or less set up the exact same way as before. Now, what I think of this.

First off, it was the network connections. Being offline for a day and a half made everything want to check at the exact same time, so 800+ things jammed the program to the point that shit broke on the network side.

Two, there are 2 levels of breaking on the network side: 1) a domain decides to fucking quit, which I observed when I was importing smaller amounts, and 2) you went further, everything breaks, and it can't even load images anymore. I'll make no assumptions here, as I have no idea how the programming side works; however, a flush for network activity–a hard stop for network activity, like the stop import button, that persists through restarts–would likely help, and could possibly even recover sessions that have borked this way.

NOW, due to how many threads I had being watched, and how many of them 404'ed in the ~20 days the program wasn't working, I either have to manually select each one of the 404 threads and pull images from them, which I'm not looking forward to doing, or, if there is some way to export a list, that would be helpful–something like pic related, but for the watcher list instead of just for individual files. Honestly, for my purposes, so long as it gives me the thread links, it would speed up the process immensely.

If this is an option that could happen, let me know, because I have screenshots of each of the watcher lists, so I can wait to get to them.

I also recommend a pause all button for the thread watchers, along with some kind of fail-safe that pauses them all if a threshold is met. I don't know the limits of this, but it's somewhere around 500 accesses at once.

On the network side, perhaps a queue to get onto the queue? I mean, the network thing cycles through getting everything to go every few seconds; how about instead of putting every single network request in there to fight it out, it just put in the ones that could go, and held the rest back until it would be possible again–similar to the simple downloader, where instead of everyone trying at once, just one at a time is dealt with.
>>10005 Christ, I'm sorry for this. It is a stupid error on my end. It is fixed for v323.
>>9988 I'd love OR, but it might be too slow depending on how the database is set up.
>>10002 I'm pretty sure there's some shenanigans with session cookies going on here. If you open your first link, go choose a filter, then navigate to the second link, your chosen filter seems to override the parameter. There may be other ways to trigger it too, don't have much time to test. Open the first link in an incognito session, close that session, then open the second link in a new incognito session. Should show different results. FWIW, I imported my account's favorites, and the suggested change worked for fetching everything.
Having the pixiv importer automatically add artist ID tags would be awesome. At the moment I have separate subscriptions registered for different artists and add "pixiv artist:######" as an additional tag. However, it would save much work to be able to throw all artists into the same subscription and have it tag the ID automatically.
>>10009 Could you change the maintenance popup to only come up every week, or every nth item added to the database? It is hugely annoying to constantly say no.
>>10013 There's still the issue of matching ids to artist names too
(17.08 KB 325x956 scroll1.png)

(59.81 KB 1435x1042 scroll2.png)

So I kept updating even as I stopped using Hydrus for a bit, and when I came back I noticed scroll bars where I don't want them. There aren't any elements I can resize to make them go away. I can't make the preview box any smaller, but I never needed to scroll in any of these spots. The GUI settings are too advanced for me, but I'm not even seeing anything that would help.
>>10015 Could tag parents be used for that? Since an artist might have multiple names, they could all have the pixiv artist ID as parent. I guess if multiple artists used the same name, they could have multiple pixiv artist IDs set as parents. I guess the same could also work for other sites that use unique user IDs, e.g. Nijie, Nico Seiga, etc.
>>9943
Got some questions, Dev.
1. Does hydrus strip exif info when importing files to the db?
2. Does it keep the original created/modified/accessed dates for the files themselves once imported, or does it generate new ones at import time?
3. You've mentioned before that there will (eventually) be a way to keep the original file dates as tags when importing. What's the best way to preserve this info so I can import my files now and apply it later? Converting chan filenames (I keep the original filenames as tags) to their unix time might be doable, and then figuring out a way to take the info from the filenames and apply it as dates. But that would leave out non-chan stuff. Should I simply attempt to add them as tags now and hope they'll work with that eventual system/format? Some script to append them to the filenames and a regex to catch them, perhaps? That would mess with my original filename tags, however.

>>10018
I suppose that's a solution. I think it would be better if there were a way for the mass pixiv subscription to add tags on import for each id. That, or if each id opened to a different listing, like the old way did, instead of one giant pixiv page–then you could manually figure out who did what rather easily. Maybe the pixiv sub could append the id as a tag automatically as a special case, just to keep track. Then you could parent/sibling the id to the artist names and it would take care of itself.
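On the 'converting chan filenames to their unix time' idea: chan image names are unix timestamps–4chan's modern ones are milliseconds, so 13 digits, while some older names are plain 10-digit seconds. A sketch of the conversion, assuming that filename shape:

```python
import re
from datetime import datetime, timezone

def chan_filename_to_datetime(filename):
    """Interpret a 13-digit chan filename (unix milliseconds) or a 10-digit
    one (plain unix seconds) as a UTC datetime. Returns None if the name
    doesn't look like either."""
    stem = filename.rsplit('.', 1)[0]
    m = re.match(r'^(\d{10})(\d{3})?$', stem)
    if not m:
        return None
    seconds = int(m.group(1))
    millis = int(m.group(2) or 0)
    return datetime.fromtimestamp(seconds + millis / 1000, tz=timezone.utc)

print(chan_filename_to_datetime('1536791280123.jpg'))
```

Non-chan filenames simply return None, so this could run over an existing filename-tag collection and only convert the names that match.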
I'm trying to configure zoom on mouse wheel for one-handed viewing mode. Can anybody please help with this? I removed the view next/previous shortcuts that use the mouse wheel by default, then added the mouse wheel for zoom in/out and applied all changes. It still uses the wheel for browsing, though. I've read that ctrl+mouse wheel is hardcoded for zoom, but that shouldn't prevent me from adding an additional shortcut for the same function. Using the mouse wheel for browsing isn't mentioned as hardcoded, but apparently it is. Can we please get the hardcoded media viewer shortcuts removed anyway? I fail to see the need for them. The browser ones are more understandable, but making them user-editable wouldn't seem to prevent them from working correctly.
>>10012
Thanks. I actually tried it myself in hydrus and managed to get lewds, so I've rolled a test into this week's release. Let me know how it works for you. If it turns out persistent usage locks in a filter via sessions or whatever, I'll just write an api parser. Thanks again for your help.
>>10014 Thanks, this is a good idea. There's a bit of this already (where it only counts as needing work if there are like >100 similar file branches to rebalance or whatever) but that wouldn't kick in if you say no. I'll make a note of this and see if I can figure out a couple of options. Maybe 'only give me a shutdown yes/no once per x days'.
>>10016 Thank you for this report. Yeah, some of the new sizing system is borked due to some min sizes. I have a plan to write some of my own sizers to more dynamically size this stuff while still permitting min heights for the controls that need it. At the moment, I am using some stock sizers, and they don't calculate combinations of static and dynamic heights in a conservative way.
>>10013 >>10018 >>10019
I'd say associating 'pixiv id:123456' with 'creator:ching chong' is probably a job best solved by a future iteration of the tag siblings system. When tags have more metadata, and you can right-click on them and get an automatically formatted wiki entry with known synonyms and all that–a bit better than the current system–and users can choose how to display it all, I think then you'd want to start putting in the effort. You could do it with parents now, which would display both tags, but since the tags are true synonyms, the direction is probably siblings. I don't know, though. The problem of 'artist "x" is called "xdraws" on this other booru' is a persistent issue, not yet neatly solved in hydrus.
>>10019
1 - No. Other than bmps (which are automatically converted to png on import), hydrus should keep all files exactly the same, byte for byte.
2 - Yeah, it should do these days. At some point, this info should be cached and searchable. Accessed will change any time you look at it, obviously.
3 - When I add that cache, I'll probably add it as a new kind of metadata rather than explicit tags, which don't store specific timestamps efficiently, and search for it with a new system predicate–like 'system:time imported', but 'system:file creation/modified time'. If you want to search this info, just keep importing as normal–since it is preserved through import, no information is being lost, and when I make the cache, I'll retroactively fill it with existing file data.
>>10023 Thank you for this report. I am still moving all my old shortcut code to the new system. Mouse shortcuts are not well supported at all yet–the code is just not yet in to handle it. I hope to have more time to put into this as the downloader overhaul comes to a close.
>>10029 Thanks. That will be awesome.

